Test Report: KVM_Linux_crio 19468

                    
91a16964608358fea9174134e48bcab54b5c9be6:2024-08-19:35860

Test fail (30/318)

Order  Failed test  Duration (s)
34 TestAddons/parallel/Ingress 153.12
36 TestAddons/parallel/MetricsServer 329.3
45 TestAddons/StoppedEnableDisable 154.4
82 TestFunctional/serial/ComponentHealth 2.09
164 TestMultiControlPlane/serial/StopSecondaryNode 142.1
166 TestMultiControlPlane/serial/RestartSecondaryNode 61.78
168 TestMultiControlPlane/serial/RestartClusterKeepsNodes 412.74
171 TestMultiControlPlane/serial/StopCluster 141.91
231 TestMultiNode/serial/RestartKeepsNodes 331.63
233 TestMultiNode/serial/StopMultiNode 141.38
240 TestPreload 271.11
248 TestKubernetesUpgrade 436.45
321 TestStartStop/group/old-k8s-version/serial/FirstStart 294.68
345 TestStartStop/group/no-preload/serial/Stop 139.16
349 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.1
351 TestStartStop/group/embed-certs/serial/Stop 139.03
352 TestStartStop/group/old-k8s-version/serial/DeployApp 0.49
353 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 96.1
354 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
356 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
357 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
362 TestStartStop/group/old-k8s-version/serial/SecondStart 749.65
363 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.41
364 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.28
365 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.29
366 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.6
367 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 442.24
368 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 388.47
369 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 330.73
370 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 117.54
TestAddons/parallel/Ingress (153.12s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-347256 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-347256 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-347256 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [9632e6a7-a0a4-4456-ab6f-c0eab065596d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [9632e6a7-a0a4-4456-ab6f-c0eab065596d] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004165255s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-347256 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-347256 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.917765171s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-347256 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-347256 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.18
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-347256 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-347256 addons disable ingress-dns --alsologtostderr -v=1: (1.338197179s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-347256 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-347256 addons disable ingress --alsologtostderr -v=1: (7.72900665s)
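Note on the failure above: the non-zero exit comes from the curl that addons_test.go runs inside the VM over `minikube ssh`; the remote command returned status 28, curl's exit code for an operation timeout, so the ingress controller never served the nginx backend on 127.0.0.1 within the 2m10s the test waited. Below is a minimal sketch for replaying that probe against the same profile outside the test harness. It is not part of the minikube test suite; the binary path and profile name are taken from this log, while the retry cadence, per-request timeout, and file name are illustrative assumptions.

// diagnose_ingress.go -- hypothetical helper, not part of the minikube repo.
// Re-runs the in-VM curl that timed out in TestAddons/parallel/Ingress.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	const (
		binary  = "out/minikube-linux-amd64" // path used in this log
		profile = "addons-347256"            // profile used in this log
	)
	// curl exit code 28 means the request itself timed out, so the probe is
	// repeated a few times to distinguish a slow ingress controller from one
	// that never routes Host: nginx.example.com to the nginx service.
	args := []string{"-p", profile, "ssh",
		"curl -s -m 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"}
	for attempt := 1; attempt <= 5; attempt++ {
		out, err := exec.Command(binary, args...).CombinedOutput()
		if err == nil {
			fmt.Printf("attempt %d succeeded:\n%s\n", attempt, out)
			return
		}
		fmt.Printf("attempt %d failed: %v\n%s\n", attempt, err, out)
		time.Sleep(15 * time.Second)
	}
}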
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-347256 -n addons-347256
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-347256 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-347256 logs -n 25: (1.279191635s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-817469                                                                     | download-only-817469 | jenkins | v1.33.1 | 19 Aug 24 17:44 UTC | 19 Aug 24 17:44 UTC |
	| delete  | -p download-only-891667                                                                     | download-only-891667 | jenkins | v1.33.1 | 19 Aug 24 17:44 UTC | 19 Aug 24 17:44 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-807766 | jenkins | v1.33.1 | 19 Aug 24 17:44 UTC |                     |
	|         | binary-mirror-807766                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:38687                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-807766                                                                     | binary-mirror-807766 | jenkins | v1.33.1 | 19 Aug 24 17:44 UTC | 19 Aug 24 17:44 UTC |
	| addons  | disable dashboard -p                                                                        | addons-347256        | jenkins | v1.33.1 | 19 Aug 24 17:44 UTC |                     |
	|         | addons-347256                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-347256        | jenkins | v1.33.1 | 19 Aug 24 17:44 UTC |                     |
	|         | addons-347256                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-347256 --wait=true                                                                | addons-347256        | jenkins | v1.33.1 | 19 Aug 24 17:44 UTC | 19 Aug 24 17:47 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-347256 addons disable                                                                | addons-347256        | jenkins | v1.33.1 | 19 Aug 24 17:47 UTC | 19 Aug 24 17:47 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-347256 addons disable                                                                | addons-347256        | jenkins | v1.33.1 | 19 Aug 24 17:47 UTC | 19 Aug 24 17:47 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-347256 ssh cat                                                                       | addons-347256        | jenkins | v1.33.1 | 19 Aug 24 17:47 UTC | 19 Aug 24 17:47 UTC |
	|         | /opt/local-path-provisioner/pvc-94a0ff27-15d3-467a-86db-027973dec176_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-347256 addons disable                                                                | addons-347256        | jenkins | v1.33.1 | 19 Aug 24 17:47 UTC | 19 Aug 24 17:47 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-347256 ip                                                                            | addons-347256        | jenkins | v1.33.1 | 19 Aug 24 17:47 UTC | 19 Aug 24 17:47 UTC |
	| addons  | addons-347256 addons disable                                                                | addons-347256        | jenkins | v1.33.1 | 19 Aug 24 17:47 UTC | 19 Aug 24 17:47 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-347256        | jenkins | v1.33.1 | 19 Aug 24 17:47 UTC | 19 Aug 24 17:47 UTC |
	|         | -p addons-347256                                                                            |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-347256        | jenkins | v1.33.1 | 19 Aug 24 17:47 UTC | 19 Aug 24 17:47 UTC |
	|         | addons-347256                                                                               |                      |         |         |                     |                     |
	| addons  | addons-347256 addons disable                                                                | addons-347256        | jenkins | v1.33.1 | 19 Aug 24 17:47 UTC | 19 Aug 24 17:47 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-347256        | jenkins | v1.33.1 | 19 Aug 24 17:48 UTC | 19 Aug 24 17:48 UTC |
	|         | addons-347256                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-347256        | jenkins | v1.33.1 | 19 Aug 24 17:48 UTC | 19 Aug 24 17:48 UTC |
	|         | -p addons-347256                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-347256 ssh curl -s                                                                   | addons-347256        | jenkins | v1.33.1 | 19 Aug 24 17:48 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-347256 addons disable                                                                | addons-347256        | jenkins | v1.33.1 | 19 Aug 24 17:48 UTC | 19 Aug 24 17:48 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-347256 addons                                                                        | addons-347256        | jenkins | v1.33.1 | 19 Aug 24 17:48 UTC | 19 Aug 24 17:48 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-347256 addons                                                                        | addons-347256        | jenkins | v1.33.1 | 19 Aug 24 17:48 UTC | 19 Aug 24 17:48 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-347256 ip                                                                            | addons-347256        | jenkins | v1.33.1 | 19 Aug 24 17:50 UTC | 19 Aug 24 17:50 UTC |
	| addons  | addons-347256 addons disable                                                                | addons-347256        | jenkins | v1.33.1 | 19 Aug 24 17:50 UTC | 19 Aug 24 17:50 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-347256 addons disable                                                                | addons-347256        | jenkins | v1.33.1 | 19 Aug 24 17:50 UTC | 19 Aug 24 17:50 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 17:44:53
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 17:44:53.053605  380723 out.go:345] Setting OutFile to fd 1 ...
	I0819 17:44:53.053816  380723 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:44:53.053825  380723 out.go:358] Setting ErrFile to fd 2...
	I0819 17:44:53.053829  380723 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:44:53.053984  380723 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-372744/.minikube/bin
	I0819 17:44:53.054561  380723 out.go:352] Setting JSON to false
	I0819 17:44:53.055529  380723 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":5236,"bootTime":1724084257,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 17:44:53.055588  380723 start.go:139] virtualization: kvm guest
	I0819 17:44:53.057502  380723 out.go:177] * [addons-347256] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 17:44:53.058661  380723 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 17:44:53.058673  380723 notify.go:220] Checking for updates...
	I0819 17:44:53.061327  380723 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 17:44:53.062544  380723 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 17:44:53.063749  380723 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 17:44:53.064862  380723 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 17:44:53.066072  380723 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 17:44:53.067543  380723 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 17:44:53.099698  380723 out.go:177] * Using the kvm2 driver based on user configuration
	I0819 17:44:53.101139  380723 start.go:297] selected driver: kvm2
	I0819 17:44:53.101170  380723 start.go:901] validating driver "kvm2" against <nil>
	I0819 17:44:53.101186  380723 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 17:44:53.101949  380723 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 17:44:53.102038  380723 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19468-372744/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 17:44:53.117529  380723 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 17:44:53.117602  380723 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 17:44:53.117831  380723 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 17:44:53.117895  380723 cni.go:84] Creating CNI manager for ""
	I0819 17:44:53.117908  380723 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 17:44:53.117915  380723 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 17:44:53.117968  380723 start.go:340] cluster config:
	{Name:addons-347256 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-347256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 17:44:53.118081  380723 iso.go:125] acquiring lock: {Name:mk4c0ac1c3202b1a296739df622960e7a0bd8566 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 17:44:53.119877  380723 out.go:177] * Starting "addons-347256" primary control-plane node in "addons-347256" cluster
	I0819 17:44:53.121147  380723 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 17:44:53.121182  380723 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 17:44:53.121193  380723 cache.go:56] Caching tarball of preloaded images
	I0819 17:44:53.121260  380723 preload.go:172] Found /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 17:44:53.121270  380723 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 17:44:53.121582  380723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/config.json ...
	I0819 17:44:53.121602  380723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/config.json: {Name:mkfeca91554d7bf1aa95ccb29e2b8c6aa486d7f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:44:53.121742  380723 start.go:360] acquireMachinesLock for addons-347256: {Name:mk24ba67a747357e9ce40f1e460d2bb0bc59cc75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 17:44:53.121790  380723 start.go:364] duration metric: took 35.232µs to acquireMachinesLock for "addons-347256"
	I0819 17:44:53.121808  380723 start.go:93] Provisioning new machine with config: &{Name:addons-347256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:addons-347256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 17:44:53.121866  380723 start.go:125] createHost starting for "" (driver="kvm2")
	I0819 17:44:53.123421  380723 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0819 17:44:53.123561  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:44:53.123597  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:44:53.138179  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35243
	I0819 17:44:53.138753  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:44:53.139343  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:44:53.139366  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:44:53.139760  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:44:53.139989  380723 main.go:141] libmachine: (addons-347256) Calling .GetMachineName
	I0819 17:44:53.140132  380723 main.go:141] libmachine: (addons-347256) Calling .DriverName
	I0819 17:44:53.140302  380723 start.go:159] libmachine.API.Create for "addons-347256" (driver="kvm2")
	I0819 17:44:53.140330  380723 client.go:168] LocalClient.Create starting
	I0819 17:44:53.140379  380723 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem
	I0819 17:44:53.336351  380723 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem
	I0819 17:44:53.702401  380723 main.go:141] libmachine: Running pre-create checks...
	I0819 17:44:53.702433  380723 main.go:141] libmachine: (addons-347256) Calling .PreCreateCheck
	I0819 17:44:53.703016  380723 main.go:141] libmachine: (addons-347256) Calling .GetConfigRaw
	I0819 17:44:53.703451  380723 main.go:141] libmachine: Creating machine...
	I0819 17:44:53.703470  380723 main.go:141] libmachine: (addons-347256) Calling .Create
	I0819 17:44:53.703647  380723 main.go:141] libmachine: (addons-347256) Creating KVM machine...
	I0819 17:44:53.704830  380723 main.go:141] libmachine: (addons-347256) DBG | found existing default KVM network
	I0819 17:44:53.705633  380723 main.go:141] libmachine: (addons-347256) DBG | I0819 17:44:53.705486  380745 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0819 17:44:53.705663  380723 main.go:141] libmachine: (addons-347256) DBG | created network xml: 
	I0819 17:44:53.705682  380723 main.go:141] libmachine: (addons-347256) DBG | <network>
	I0819 17:44:53.705690  380723 main.go:141] libmachine: (addons-347256) DBG |   <name>mk-addons-347256</name>
	I0819 17:44:53.705697  380723 main.go:141] libmachine: (addons-347256) DBG |   <dns enable='no'/>
	I0819 17:44:53.705702  380723 main.go:141] libmachine: (addons-347256) DBG |   
	I0819 17:44:53.705708  380723 main.go:141] libmachine: (addons-347256) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0819 17:44:53.705714  380723 main.go:141] libmachine: (addons-347256) DBG |     <dhcp>
	I0819 17:44:53.705720  380723 main.go:141] libmachine: (addons-347256) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0819 17:44:53.705727  380723 main.go:141] libmachine: (addons-347256) DBG |     </dhcp>
	I0819 17:44:53.705732  380723 main.go:141] libmachine: (addons-347256) DBG |   </ip>
	I0819 17:44:53.705741  380723 main.go:141] libmachine: (addons-347256) DBG |   
	I0819 17:44:53.705749  380723 main.go:141] libmachine: (addons-347256) DBG | </network>
	I0819 17:44:53.705759  380723 main.go:141] libmachine: (addons-347256) DBG | 
	I0819 17:44:53.710875  380723 main.go:141] libmachine: (addons-347256) DBG | trying to create private KVM network mk-addons-347256 192.168.39.0/24...
	I0819 17:44:53.774194  380723 main.go:141] libmachine: (addons-347256) DBG | private KVM network mk-addons-347256 192.168.39.0/24 created
	I0819 17:44:53.774243  380723 main.go:141] libmachine: (addons-347256) Setting up store path in /home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256 ...
	I0819 17:44:53.774262  380723 main.go:141] libmachine: (addons-347256) DBG | I0819 17:44:53.774137  380745 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 17:44:53.774279  380723 main.go:141] libmachine: (addons-347256) Building disk image from file:///home/jenkins/minikube-integration/19468-372744/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 17:44:53.774313  380723 main.go:141] libmachine: (addons-347256) Downloading /home/jenkins/minikube-integration/19468-372744/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19468-372744/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 17:44:54.046509  380723 main.go:141] libmachine: (addons-347256) DBG | I0819 17:44:54.046347  380745 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/id_rsa...
	I0819 17:44:54.180081  380723 main.go:141] libmachine: (addons-347256) DBG | I0819 17:44:54.179903  380745 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/addons-347256.rawdisk...
	I0819 17:44:54.180124  380723 main.go:141] libmachine: (addons-347256) DBG | Writing magic tar header
	I0819 17:44:54.180142  380723 main.go:141] libmachine: (addons-347256) DBG | Writing SSH key tar header
	I0819 17:44:54.180152  380723 main.go:141] libmachine: (addons-347256) DBG | I0819 17:44:54.180089  380745 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256 ...
	I0819 17:44:54.180245  380723 main.go:141] libmachine: (addons-347256) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256
	I0819 17:44:54.180325  380723 main.go:141] libmachine: (addons-347256) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256 (perms=drwx------)
	I0819 17:44:54.180361  380723 main.go:141] libmachine: (addons-347256) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744/.minikube/machines
	I0819 17:44:54.180372  380723 main.go:141] libmachine: (addons-347256) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744/.minikube/machines (perms=drwxr-xr-x)
	I0819 17:44:54.180389  380723 main.go:141] libmachine: (addons-347256) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744/.minikube (perms=drwxr-xr-x)
	I0819 17:44:54.180415  380723 main.go:141] libmachine: (addons-347256) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744 (perms=drwxrwxr-x)
	I0819 17:44:54.180439  380723 main.go:141] libmachine: (addons-347256) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 17:44:54.180454  380723 main.go:141] libmachine: (addons-347256) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 17:44:54.180468  380723 main.go:141] libmachine: (addons-347256) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 17:44:54.180483  380723 main.go:141] libmachine: (addons-347256) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744
	I0819 17:44:54.180497  380723 main.go:141] libmachine: (addons-347256) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 17:44:54.180512  380723 main.go:141] libmachine: (addons-347256) DBG | Checking permissions on dir: /home/jenkins
	I0819 17:44:54.180525  380723 main.go:141] libmachine: (addons-347256) DBG | Checking permissions on dir: /home
	I0819 17:44:54.180537  380723 main.go:141] libmachine: (addons-347256) DBG | Skipping /home - not owner
	I0819 17:44:54.180548  380723 main.go:141] libmachine: (addons-347256) Creating domain...
	I0819 17:44:54.181498  380723 main.go:141] libmachine: (addons-347256) define libvirt domain using xml: 
	I0819 17:44:54.181524  380723 main.go:141] libmachine: (addons-347256) <domain type='kvm'>
	I0819 17:44:54.181536  380723 main.go:141] libmachine: (addons-347256)   <name>addons-347256</name>
	I0819 17:44:54.181552  380723 main.go:141] libmachine: (addons-347256)   <memory unit='MiB'>4000</memory>
	I0819 17:44:54.181562  380723 main.go:141] libmachine: (addons-347256)   <vcpu>2</vcpu>
	I0819 17:44:54.181577  380723 main.go:141] libmachine: (addons-347256)   <features>
	I0819 17:44:54.181589  380723 main.go:141] libmachine: (addons-347256)     <acpi/>
	I0819 17:44:54.181596  380723 main.go:141] libmachine: (addons-347256)     <apic/>
	I0819 17:44:54.181605  380723 main.go:141] libmachine: (addons-347256)     <pae/>
	I0819 17:44:54.181612  380723 main.go:141] libmachine: (addons-347256)     
	I0819 17:44:54.181618  380723 main.go:141] libmachine: (addons-347256)   </features>
	I0819 17:44:54.181626  380723 main.go:141] libmachine: (addons-347256)   <cpu mode='host-passthrough'>
	I0819 17:44:54.181637  380723 main.go:141] libmachine: (addons-347256)   
	I0819 17:44:54.181649  380723 main.go:141] libmachine: (addons-347256)   </cpu>
	I0819 17:44:54.181680  380723 main.go:141] libmachine: (addons-347256)   <os>
	I0819 17:44:54.181702  380723 main.go:141] libmachine: (addons-347256)     <type>hvm</type>
	I0819 17:44:54.181718  380723 main.go:141] libmachine: (addons-347256)     <boot dev='cdrom'/>
	I0819 17:44:54.181734  380723 main.go:141] libmachine: (addons-347256)     <boot dev='hd'/>
	I0819 17:44:54.181748  380723 main.go:141] libmachine: (addons-347256)     <bootmenu enable='no'/>
	I0819 17:44:54.181757  380723 main.go:141] libmachine: (addons-347256)   </os>
	I0819 17:44:54.181768  380723 main.go:141] libmachine: (addons-347256)   <devices>
	I0819 17:44:54.181780  380723 main.go:141] libmachine: (addons-347256)     <disk type='file' device='cdrom'>
	I0819 17:44:54.181799  380723 main.go:141] libmachine: (addons-347256)       <source file='/home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/boot2docker.iso'/>
	I0819 17:44:54.181811  380723 main.go:141] libmachine: (addons-347256)       <target dev='hdc' bus='scsi'/>
	I0819 17:44:54.181823  380723 main.go:141] libmachine: (addons-347256)       <readonly/>
	I0819 17:44:54.181839  380723 main.go:141] libmachine: (addons-347256)     </disk>
	I0819 17:44:54.181853  380723 main.go:141] libmachine: (addons-347256)     <disk type='file' device='disk'>
	I0819 17:44:54.181867  380723 main.go:141] libmachine: (addons-347256)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 17:44:54.181884  380723 main.go:141] libmachine: (addons-347256)       <source file='/home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/addons-347256.rawdisk'/>
	I0819 17:44:54.181896  380723 main.go:141] libmachine: (addons-347256)       <target dev='hda' bus='virtio'/>
	I0819 17:44:54.181913  380723 main.go:141] libmachine: (addons-347256)     </disk>
	I0819 17:44:54.181924  380723 main.go:141] libmachine: (addons-347256)     <interface type='network'>
	I0819 17:44:54.181937  380723 main.go:141] libmachine: (addons-347256)       <source network='mk-addons-347256'/>
	I0819 17:44:54.181948  380723 main.go:141] libmachine: (addons-347256)       <model type='virtio'/>
	I0819 17:44:54.181957  380723 main.go:141] libmachine: (addons-347256)     </interface>
	I0819 17:44:54.181976  380723 main.go:141] libmachine: (addons-347256)     <interface type='network'>
	I0819 17:44:54.181989  380723 main.go:141] libmachine: (addons-347256)       <source network='default'/>
	I0819 17:44:54.182008  380723 main.go:141] libmachine: (addons-347256)       <model type='virtio'/>
	I0819 17:44:54.182020  380723 main.go:141] libmachine: (addons-347256)     </interface>
	I0819 17:44:54.182030  380723 main.go:141] libmachine: (addons-347256)     <serial type='pty'>
	I0819 17:44:54.182040  380723 main.go:141] libmachine: (addons-347256)       <target port='0'/>
	I0819 17:44:54.182050  380723 main.go:141] libmachine: (addons-347256)     </serial>
	I0819 17:44:54.182062  380723 main.go:141] libmachine: (addons-347256)     <console type='pty'>
	I0819 17:44:54.182078  380723 main.go:141] libmachine: (addons-347256)       <target type='serial' port='0'/>
	I0819 17:44:54.182090  380723 main.go:141] libmachine: (addons-347256)     </console>
	I0819 17:44:54.182101  380723 main.go:141] libmachine: (addons-347256)     <rng model='virtio'>
	I0819 17:44:54.182115  380723 main.go:141] libmachine: (addons-347256)       <backend model='random'>/dev/random</backend>
	I0819 17:44:54.182123  380723 main.go:141] libmachine: (addons-347256)     </rng>
	I0819 17:44:54.182135  380723 main.go:141] libmachine: (addons-347256)     
	I0819 17:44:54.182149  380723 main.go:141] libmachine: (addons-347256)     
	I0819 17:44:54.182161  380723 main.go:141] libmachine: (addons-347256)   </devices>
	I0819 17:44:54.182171  380723 main.go:141] libmachine: (addons-347256) </domain>
	I0819 17:44:54.182184  380723 main.go:141] libmachine: (addons-347256) 
	I0819 17:44:54.187984  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:53:c4:2e in network default
	I0819 17:44:54.188526  380723 main.go:141] libmachine: (addons-347256) Ensuring networks are active...
	I0819 17:44:54.188545  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:44:54.189160  380723 main.go:141] libmachine: (addons-347256) Ensuring network default is active
	I0819 17:44:54.189471  380723 main.go:141] libmachine: (addons-347256) Ensuring network mk-addons-347256 is active
	I0819 17:44:54.189930  380723 main.go:141] libmachine: (addons-347256) Getting domain xml...
	I0819 17:44:54.190558  380723 main.go:141] libmachine: (addons-347256) Creating domain...
	I0819 17:44:55.575338  380723 main.go:141] libmachine: (addons-347256) Waiting to get IP...
	I0819 17:44:55.576124  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:44:55.576562  380723 main.go:141] libmachine: (addons-347256) DBG | unable to find current IP address of domain addons-347256 in network mk-addons-347256
	I0819 17:44:55.576594  380723 main.go:141] libmachine: (addons-347256) DBG | I0819 17:44:55.576511  380745 retry.go:31] will retry after 295.150701ms: waiting for machine to come up
	I0819 17:44:55.872866  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:44:55.873329  380723 main.go:141] libmachine: (addons-347256) DBG | unable to find current IP address of domain addons-347256 in network mk-addons-347256
	I0819 17:44:55.873350  380723 main.go:141] libmachine: (addons-347256) DBG | I0819 17:44:55.873288  380745 retry.go:31] will retry after 287.211341ms: waiting for machine to come up
	I0819 17:44:56.161830  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:44:56.162615  380723 main.go:141] libmachine: (addons-347256) DBG | unable to find current IP address of domain addons-347256 in network mk-addons-347256
	I0819 17:44:56.162643  380723 main.go:141] libmachine: (addons-347256) DBG | I0819 17:44:56.162581  380745 retry.go:31] will retry after 377.259476ms: waiting for machine to come up
	I0819 17:44:56.541888  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:44:56.542314  380723 main.go:141] libmachine: (addons-347256) DBG | unable to find current IP address of domain addons-347256 in network mk-addons-347256
	I0819 17:44:56.542346  380723 main.go:141] libmachine: (addons-347256) DBG | I0819 17:44:56.542273  380745 retry.go:31] will retry after 519.651535ms: waiting for machine to come up
	I0819 17:44:57.065287  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:44:57.065704  380723 main.go:141] libmachine: (addons-347256) DBG | unable to find current IP address of domain addons-347256 in network mk-addons-347256
	I0819 17:44:57.065732  380723 main.go:141] libmachine: (addons-347256) DBG | I0819 17:44:57.065650  380745 retry.go:31] will retry after 553.174431ms: waiting for machine to come up
	I0819 17:44:57.620642  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:44:57.621087  380723 main.go:141] libmachine: (addons-347256) DBG | unable to find current IP address of domain addons-347256 in network mk-addons-347256
	I0819 17:44:57.621108  380723 main.go:141] libmachine: (addons-347256) DBG | I0819 17:44:57.621049  380745 retry.go:31] will retry after 898.791982ms: waiting for machine to come up
	I0819 17:44:58.521912  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:44:58.522296  380723 main.go:141] libmachine: (addons-347256) DBG | unable to find current IP address of domain addons-347256 in network mk-addons-347256
	I0819 17:44:58.522324  380723 main.go:141] libmachine: (addons-347256) DBG | I0819 17:44:58.522255  380745 retry.go:31] will retry after 929.252814ms: waiting for machine to come up
	I0819 17:44:59.453409  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:44:59.453776  380723 main.go:141] libmachine: (addons-347256) DBG | unable to find current IP address of domain addons-347256 in network mk-addons-347256
	I0819 17:44:59.453801  380723 main.go:141] libmachine: (addons-347256) DBG | I0819 17:44:59.453724  380745 retry.go:31] will retry after 1.314906411s: waiting for machine to come up
	I0819 17:45:00.770448  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:00.770972  380723 main.go:141] libmachine: (addons-347256) DBG | unable to find current IP address of domain addons-347256 in network mk-addons-347256
	I0819 17:45:00.771005  380723 main.go:141] libmachine: (addons-347256) DBG | I0819 17:45:00.770916  380745 retry.go:31] will retry after 1.678424852s: waiting for machine to come up
	I0819 17:45:02.450850  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:02.451285  380723 main.go:141] libmachine: (addons-347256) DBG | unable to find current IP address of domain addons-347256 in network mk-addons-347256
	I0819 17:45:02.451306  380723 main.go:141] libmachine: (addons-347256) DBG | I0819 17:45:02.451251  380745 retry.go:31] will retry after 2.169043026s: waiting for machine to come up
	I0819 17:45:04.622786  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:04.623275  380723 main.go:141] libmachine: (addons-347256) DBG | unable to find current IP address of domain addons-347256 in network mk-addons-347256
	I0819 17:45:04.623307  380723 main.go:141] libmachine: (addons-347256) DBG | I0819 17:45:04.623177  380745 retry.go:31] will retry after 2.403674314s: waiting for machine to come up
	I0819 17:45:07.029819  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:07.030317  380723 main.go:141] libmachine: (addons-347256) DBG | unable to find current IP address of domain addons-347256 in network mk-addons-347256
	I0819 17:45:07.030349  380723 main.go:141] libmachine: (addons-347256) DBG | I0819 17:45:07.030267  380745 retry.go:31] will retry after 3.135440118s: waiting for machine to come up
	I0819 17:45:10.168488  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:10.168888  380723 main.go:141] libmachine: (addons-347256) DBG | unable to find current IP address of domain addons-347256 in network mk-addons-347256
	I0819 17:45:10.168969  380723 main.go:141] libmachine: (addons-347256) DBG | I0819 17:45:10.168818  380745 retry.go:31] will retry after 3.383905861s: waiting for machine to come up
	I0819 17:45:13.554423  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:13.554863  380723 main.go:141] libmachine: (addons-347256) DBG | unable to find current IP address of domain addons-347256 in network mk-addons-347256
	I0819 17:45:13.554902  380723 main.go:141] libmachine: (addons-347256) DBG | I0819 17:45:13.554816  380745 retry.go:31] will retry after 3.910322903s: waiting for machine to come up
	I0819 17:45:17.469972  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:17.470466  380723 main.go:141] libmachine: (addons-347256) Found IP for machine: 192.168.39.18
	I0819 17:45:17.470499  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has current primary IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:17.470509  380723 main.go:141] libmachine: (addons-347256) Reserving static IP address...
	I0819 17:45:17.470810  380723 main.go:141] libmachine: (addons-347256) DBG | unable to find host DHCP lease matching {name: "addons-347256", mac: "52:54:00:96:9a:be", ip: "192.168.39.18"} in network mk-addons-347256
	I0819 17:45:17.540970  380723 main.go:141] libmachine: (addons-347256) DBG | Getting to WaitForSSH function...
	I0819 17:45:17.541006  380723 main.go:141] libmachine: (addons-347256) Reserved static IP address: 192.168.39.18
	I0819 17:45:17.541019  380723 main.go:141] libmachine: (addons-347256) Waiting for SSH to be available...
	I0819 17:45:17.543574  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:17.544080  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:minikube Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:17.544114  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:17.544262  380723 main.go:141] libmachine: (addons-347256) DBG | Using SSH client type: external
	I0819 17:45:17.544301  380723 main.go:141] libmachine: (addons-347256) DBG | Using SSH private key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/id_rsa (-rw-------)
	I0819 17:45:17.544334  380723 main.go:141] libmachine: (addons-347256) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.18 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 17:45:17.544349  380723 main.go:141] libmachine: (addons-347256) DBG | About to run SSH command:
	I0819 17:45:17.544365  380723 main.go:141] libmachine: (addons-347256) DBG | exit 0
	I0819 17:45:17.679691  380723 main.go:141] libmachine: (addons-347256) DBG | SSH cmd err, output: <nil>: 
	I0819 17:45:17.679999  380723 main.go:141] libmachine: (addons-347256) KVM machine creation complete!
	I0819 17:45:17.680361  380723 main.go:141] libmachine: (addons-347256) Calling .GetConfigRaw
	I0819 17:45:17.680946  380723 main.go:141] libmachine: (addons-347256) Calling .DriverName
	I0819 17:45:17.681177  380723 main.go:141] libmachine: (addons-347256) Calling .DriverName
	I0819 17:45:17.681380  380723 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 17:45:17.681395  380723 main.go:141] libmachine: (addons-347256) Calling .GetState
	I0819 17:45:17.682671  380723 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 17:45:17.682689  380723 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 17:45:17.682697  380723 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 17:45:17.682706  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:17.684925  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:17.685212  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:17.685241  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:17.685363  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:17.685526  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:17.685686  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:17.685818  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:17.686012  380723 main.go:141] libmachine: Using SSH client type: native
	I0819 17:45:17.686209  380723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0819 17:45:17.686221  380723 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 17:45:17.799076  380723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 17:45:17.799100  380723 main.go:141] libmachine: Detecting the provisioner...
	I0819 17:45:17.799108  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:17.801845  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:17.802151  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:17.802182  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:17.802363  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:17.802601  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:17.802763  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:17.802894  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:17.803028  380723 main.go:141] libmachine: Using SSH client type: native
	I0819 17:45:17.803215  380723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0819 17:45:17.803227  380723 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 17:45:17.916749  380723 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 17:45:17.916839  380723 main.go:141] libmachine: found compatible host: buildroot
	I0819 17:45:17.916853  380723 main.go:141] libmachine: Provisioning with buildroot...
	I0819 17:45:17.916866  380723 main.go:141] libmachine: (addons-347256) Calling .GetMachineName
	I0819 17:45:17.917178  380723 buildroot.go:166] provisioning hostname "addons-347256"
	I0819 17:45:17.917209  380723 main.go:141] libmachine: (addons-347256) Calling .GetMachineName
	I0819 17:45:17.917393  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:17.920321  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:17.920754  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:17.920792  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:17.920987  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:17.921195  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:17.921382  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:17.921591  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:17.921777  380723 main.go:141] libmachine: Using SSH client type: native
	I0819 17:45:17.921982  380723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0819 17:45:17.921999  380723 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-347256 && echo "addons-347256" | sudo tee /etc/hostname
	I0819 17:45:18.050302  380723 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-347256
	
	I0819 17:45:18.050339  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:18.053307  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:18.053686  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:18.053764  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:18.053894  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:18.054109  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:18.054294  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:18.054473  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:18.054668  380723 main.go:141] libmachine: Using SSH client type: native
	I0819 17:45:18.054888  380723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0819 17:45:18.054906  380723 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-347256' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-347256/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-347256' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 17:45:18.177246  380723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 17:45:18.177281  380723 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19468-372744/.minikube CaCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19468-372744/.minikube}
	I0819 17:45:18.177302  380723 buildroot.go:174] setting up certificates
	I0819 17:45:18.177315  380723 provision.go:84] configureAuth start
	I0819 17:45:18.177327  380723 main.go:141] libmachine: (addons-347256) Calling .GetMachineName
	I0819 17:45:18.177658  380723 main.go:141] libmachine: (addons-347256) Calling .GetIP
	I0819 17:45:18.180197  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:18.180554  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:18.180585  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:18.180730  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:18.182860  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:18.183193  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:18.183218  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:18.183388  380723 provision.go:143] copyHostCerts
	I0819 17:45:18.183468  380723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem (1082 bytes)
	I0819 17:45:18.183604  380723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem (1123 bytes)
	I0819 17:45:18.183743  380723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem (1675 bytes)
	I0819 17:45:18.183802  380723 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem org=jenkins.addons-347256 san=[127.0.0.1 192.168.39.18 addons-347256 localhost minikube]
	I0819 17:45:18.533128  380723 provision.go:177] copyRemoteCerts
	I0819 17:45:18.533192  380723 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 17:45:18.533218  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:18.536191  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:18.536567  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:18.536599  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:18.536802  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:18.537032  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:18.537220  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:18.537380  380723 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/id_rsa Username:docker}
	I0819 17:45:18.625766  380723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 17:45:18.650381  380723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 17:45:18.674543  380723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 17:45:18.698674  380723 provision.go:87] duration metric: took 521.340221ms to configureAuth
	I0819 17:45:18.698707  380723 buildroot.go:189] setting minikube options for container-runtime
	I0819 17:45:18.698915  380723 config.go:182] Loaded profile config "addons-347256": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:45:18.699022  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:18.701748  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:18.702114  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:18.702146  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:18.702339  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:18.702571  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:18.702725  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:18.702911  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:18.703067  380723 main.go:141] libmachine: Using SSH client type: native
	I0819 17:45:18.703245  380723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0819 17:45:18.703259  380723 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 17:45:18.970758  380723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 17:45:18.970793  380723 main.go:141] libmachine: Checking connection to Docker...
	I0819 17:45:18.970811  380723 main.go:141] libmachine: (addons-347256) Calling .GetURL
	I0819 17:45:18.972103  380723 main.go:141] libmachine: (addons-347256) DBG | Using libvirt version 6000000
	I0819 17:45:18.974612  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:18.974955  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:18.974983  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:18.975163  380723 main.go:141] libmachine: Docker is up and running!
	I0819 17:45:18.975176  380723 main.go:141] libmachine: Reticulating splines...
	I0819 17:45:18.975184  380723 client.go:171] duration metric: took 25.834843542s to LocalClient.Create
	I0819 17:45:18.975214  380723 start.go:167] duration metric: took 25.834912671s to libmachine.API.Create "addons-347256"
	I0819 17:45:18.975228  380723 start.go:293] postStartSetup for "addons-347256" (driver="kvm2")
	I0819 17:45:18.975243  380723 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 17:45:18.975261  380723 main.go:141] libmachine: (addons-347256) Calling .DriverName
	I0819 17:45:18.975517  380723 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 17:45:18.975552  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:18.977677  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:18.977956  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:18.977982  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:18.978127  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:18.978378  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:18.978539  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:18.978714  380723 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/id_rsa Username:docker}
	I0819 17:45:19.066463  380723 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 17:45:19.071232  380723 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 17:45:19.071265  380723 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/addons for local assets ...
	I0819 17:45:19.071342  380723 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/files for local assets ...
	I0819 17:45:19.071366  380723 start.go:296] duration metric: took 96.131784ms for postStartSetup
	I0819 17:45:19.071406  380723 main.go:141] libmachine: (addons-347256) Calling .GetConfigRaw
	I0819 17:45:19.072003  380723 main.go:141] libmachine: (addons-347256) Calling .GetIP
	I0819 17:45:19.074691  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:19.075061  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:19.075089  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:19.075338  380723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/config.json ...
	I0819 17:45:19.075548  380723 start.go:128] duration metric: took 25.953671356s to createHost
	I0819 17:45:19.075577  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:19.077812  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:19.078129  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:19.078152  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:19.078347  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:19.078529  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:19.078689  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:19.078801  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:19.078958  380723 main.go:141] libmachine: Using SSH client type: native
	I0819 17:45:19.079123  380723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0819 17:45:19.079133  380723 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 17:45:19.192391  380723 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724089519.168750525
	
	I0819 17:45:19.192417  380723 fix.go:216] guest clock: 1724089519.168750525
	I0819 17:45:19.192426  380723 fix.go:229] Guest: 2024-08-19 17:45:19.168750525 +0000 UTC Remote: 2024-08-19 17:45:19.075561803 +0000 UTC m=+26.056759756 (delta=93.188722ms)
	I0819 17:45:19.192479  380723 fix.go:200] guest clock delta is within tolerance: 93.188722ms
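(As a quick sanity check on the numbers above: the guest clock read 17:45:19.168750525 and the host-side reference read 17:45:19.075561803, and 0.168750525 - 0.075561803 = 0.093188722 s, i.e. exactly the 93.188722ms delta reported, which the log then judges to be within minikube's clock-skew tolerance.)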
	I0819 17:45:19.192485  380723 start.go:83] releasing machines lock for "addons-347256", held for 26.070685533s
	I0819 17:45:19.192510  380723 main.go:141] libmachine: (addons-347256) Calling .DriverName
	I0819 17:45:19.192808  380723 main.go:141] libmachine: (addons-347256) Calling .GetIP
	I0819 17:45:19.195227  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:19.195544  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:19.195576  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:19.195713  380723 main.go:141] libmachine: (addons-347256) Calling .DriverName
	I0819 17:45:19.196239  380723 main.go:141] libmachine: (addons-347256) Calling .DriverName
	I0819 17:45:19.196453  380723 main.go:141] libmachine: (addons-347256) Calling .DriverName
	I0819 17:45:19.196570  380723 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 17:45:19.196645  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:19.196686  380723 ssh_runner.go:195] Run: cat /version.json
	I0819 17:45:19.196712  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:19.199202  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:19.199534  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:19.199655  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:19.199699  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:19.199864  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:19.199985  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:19.200018  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:19.200044  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:19.200223  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:19.200229  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:19.200398  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:19.200393  380723 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/id_rsa Username:docker}
	I0819 17:45:19.200559  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:19.200705  380723 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/id_rsa Username:docker}
	I0819 17:45:19.303317  380723 ssh_runner.go:195] Run: systemctl --version
	I0819 17:45:19.309394  380723 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 17:45:19.469769  380723 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 17:45:19.475733  380723 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 17:45:19.475804  380723 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 17:45:19.492217  380723 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 17:45:19.492246  380723 start.go:495] detecting cgroup driver to use...
	I0819 17:45:19.492312  380723 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 17:45:19.512633  380723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 17:45:19.526666  380723 docker.go:217] disabling cri-docker service (if available) ...
	I0819 17:45:19.526723  380723 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 17:45:19.540412  380723 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 17:45:19.554050  380723 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 17:45:19.681052  380723 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 17:45:19.826760  380723 docker.go:233] disabling docker service ...
	I0819 17:45:19.826844  380723 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 17:45:19.841303  380723 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 17:45:19.854153  380723 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 17:45:19.980148  380723 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 17:45:20.114056  380723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 17:45:20.128089  380723 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 17:45:20.146365  380723 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 17:45:20.146431  380723 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:45:20.157135  380723 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 17:45:20.157211  380723 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:45:20.167642  380723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:45:20.178347  380723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:45:20.189041  380723 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 17:45:20.200449  380723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:45:20.211135  380723 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:45:20.228424  380723 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
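For orientation, the sequence of sed edits above (pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl) leaves /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings. This is a reconstruction from the commands shown in this log, not a capture of the actual file, and the section headers are assumed from CRI-O's usual config layout:

  [crio.image]
  pause_image = "registry.k8s.io/pause:3.10"

  [crio.runtime]
  cgroup_manager = "cgroupfs"
  conmon_cgroup = "pod"
  default_sysctls = [
    "net.ipv4.ip_unprivileged_port_start=0",
  ]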
	I0819 17:45:20.239113  380723 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 17:45:20.248596  380723 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 17:45:20.248657  380723 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 17:45:20.261895  380723 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 17:45:20.271193  380723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 17:45:20.391778  380723 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 17:45:20.528119  380723 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 17:45:20.528214  380723 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 17:45:20.533144  380723 start.go:563] Will wait 60s for crictl version
	I0819 17:45:20.533227  380723 ssh_runner.go:195] Run: which crictl
	I0819 17:45:20.536823  380723 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 17:45:20.575052  380723 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 17:45:20.575136  380723 ssh_runner.go:195] Run: crio --version
	I0819 17:45:20.601890  380723 ssh_runner.go:195] Run: crio --version
	I0819 17:45:20.630807  380723 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 17:45:20.632144  380723 main.go:141] libmachine: (addons-347256) Calling .GetIP
	I0819 17:45:20.634767  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:20.635142  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:20.635184  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:20.635375  380723 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 17:45:20.639550  380723 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 17:45:20.651906  380723 kubeadm.go:883] updating cluster {Name:addons-347256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-347256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 17:45:20.652018  380723 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 17:45:20.652059  380723 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 17:45:20.685872  380723 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 17:45:20.685942  380723 ssh_runner.go:195] Run: which lz4
	I0819 17:45:20.690104  380723 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 17:45:20.694324  380723 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 17:45:20.694354  380723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 17:45:21.956220  380723 crio.go:462] duration metric: took 1.266150323s to copy over tarball
	I0819 17:45:21.956324  380723 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 17:45:24.072963  380723 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.11660057s)
	I0819 17:45:24.072995  380723 crio.go:469] duration metric: took 2.116739s to extract the tarball
	I0819 17:45:24.073004  380723 ssh_runner.go:146] rm: /preloaded.tar.lz4
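(Rough arithmetic on the preload step above: 389,136,428 bytes copied over the local SSH connection in about 1.27 s works out to roughly 307 MB/s (~293 MiB/s), and extracting the same lz4 tarball into /var took a further ~2.12 s.)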
	I0819 17:45:24.109933  380723 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 17:45:24.160419  380723 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 17:45:24.160454  380723 cache_images.go:84] Images are preloaded, skipping loading
	I0819 17:45:24.160466  380723 kubeadm.go:934] updating node { 192.168.39.18 8443 v1.31.0 crio true true} ...
	I0819 17:45:24.160628  380723 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-347256 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.18
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-347256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 17:45:24.160755  380723 ssh_runner.go:195] Run: crio config
	I0819 17:45:24.216129  380723 cni.go:84] Creating CNI manager for ""
	I0819 17:45:24.216154  380723 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 17:45:24.216168  380723 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 17:45:24.216196  380723 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.18 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-347256 NodeName:addons-347256 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.18"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.18 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 17:45:24.216360  380723 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.18
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-347256"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.18
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.18"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 17:45:24.216427  380723 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 17:45:24.228695  380723 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 17:45:24.228770  380723 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 17:45:24.239098  380723 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0819 17:45:24.256669  380723 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 17:45:24.273434  380723 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0819 17:45:24.290431  380723 ssh_runner.go:195] Run: grep 192.168.39.18	control-plane.minikube.internal$ /etc/hosts
	I0819 17:45:24.294455  380723 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.18	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 17:45:24.307092  380723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 17:45:24.437166  380723 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 17:45:24.454975  380723 certs.go:68] Setting up /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256 for IP: 192.168.39.18
	I0819 17:45:24.455003  380723 certs.go:194] generating shared ca certs ...
	I0819 17:45:24.455021  380723 certs.go:226] acquiring lock for ca certs: {Name:mk639e03f593e0bccac045f6e9f5ba3b96cc81e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:45:24.455160  380723 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key
	I0819 17:45:24.607373  380723 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt ...
	I0819 17:45:24.607406  380723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt: {Name:mk720863d1644f0a4aa6f75fb34905a83c015168 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:45:24.607614  380723 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key ...
	I0819 17:45:24.607629  380723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key: {Name:mkd3386fa062f8a0dfb5858759605de084d42867 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:45:24.607757  380723 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key
	I0819 17:45:24.692703  380723 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt ...
	I0819 17:45:24.692732  380723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt: {Name:mk1dc711d257e531e3c71c7d0984b6df867cfe02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:45:24.692930  380723 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key ...
	I0819 17:45:24.692951  380723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key: {Name:mk8e16aff6516c290adb78b092691391102b99e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:45:24.693049  380723 certs.go:256] generating profile certs ...
	I0819 17:45:24.693113  380723 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.key
	I0819 17:45:24.693139  380723 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.crt with IP's: []
	I0819 17:45:24.857181  380723 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.crt ...
	I0819 17:45:24.857214  380723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.crt: {Name:mk6a1a046e55814f12df6a0e42b22fdeb6c0d339 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:45:24.857408  380723 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.key ...
	I0819 17:45:24.857424  380723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.key: {Name:mk3097dd049f7745d2605bf1f16a97f955f21ed3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:45:24.857524  380723 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/apiserver.key.bc8d03ea
	I0819 17:45:24.857545  380723 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/apiserver.crt.bc8d03ea with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.18]
	I0819 17:45:25.217861  380723 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/apiserver.crt.bc8d03ea ...
	I0819 17:45:25.217894  380723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/apiserver.crt.bc8d03ea: {Name:mk39d188cf7bf6d5dd4f56ad5ff39f9b6bbaaf56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:45:25.218082  380723 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/apiserver.key.bc8d03ea ...
	I0819 17:45:25.218100  380723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/apiserver.key.bc8d03ea: {Name:mke2f1fe200569be9110b53c2b6e9c6316ac6de9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:45:25.218202  380723 certs.go:381] copying /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/apiserver.crt.bc8d03ea -> /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/apiserver.crt
	I0819 17:45:25.218284  380723 certs.go:385] copying /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/apiserver.key.bc8d03ea -> /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/apiserver.key
	I0819 17:45:25.218331  380723 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/proxy-client.key
	I0819 17:45:25.218349  380723 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/proxy-client.crt with IP's: []
	I0819 17:45:25.507812  380723 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/proxy-client.crt ...
	I0819 17:45:25.507852  380723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/proxy-client.crt: {Name:mkc9cb74c9901604fb7d3a8203fa6096a334239d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:45:25.508025  380723 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/proxy-client.key ...
	I0819 17:45:25.508038  380723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/proxy-client.key: {Name:mk6bd0a8aed7d4a5c3e994dc78890b950bdd72a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:45:25.508215  380723 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 17:45:25.508254  380723 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem (1082 bytes)
	I0819 17:45:25.508279  380723 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem (1123 bytes)
	I0819 17:45:25.508303  380723 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem (1675 bytes)
	I0819 17:45:25.508916  380723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 17:45:25.540333  380723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 17:45:25.564833  380723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 17:45:25.589155  380723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 17:45:25.613367  380723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0819 17:45:25.637037  380723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 17:45:25.661485  380723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 17:45:25.685131  380723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 17:45:25.709378  380723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 17:45:25.733248  380723 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 17:45:25.749801  380723 ssh_runner.go:195] Run: openssl version
	I0819 17:45:25.755506  380723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 17:45:25.766270  380723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:45:25.770783  380723 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:45:25.770848  380723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:45:25.776580  380723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
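(The b5213941.0 link name follows from the hash command just above: OpenSSL-style certificate directories index CA certificates by subject hash, so minikube runs openssl x509 -hash -noout against minikubeCA.pem and uses the resulting hash, b5213941, as the <hash>.0 symlink under /etc/ssl/certs.)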
	I0819 17:45:25.787227  380723 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 17:45:25.791427  380723 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 17:45:25.791480  380723 kubeadm.go:392] StartCluster: {Name:addons-347256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-347256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 17:45:25.791641  380723 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 17:45:25.791747  380723 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 17:45:25.830591  380723 cri.go:89] found id: ""
	I0819 17:45:25.830683  380723 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 17:45:25.840513  380723 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 17:45:25.849805  380723 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 17:45:25.859085  380723 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 17:45:25.859110  380723 kubeadm.go:157] found existing configuration files:
	
	I0819 17:45:25.859155  380723 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 17:45:25.867614  380723 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 17:45:25.867707  380723 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 17:45:25.876869  380723 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 17:45:25.885771  380723 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 17:45:25.885837  380723 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 17:45:25.895004  380723 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 17:45:25.903555  380723 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 17:45:25.903610  380723 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 17:45:25.912939  380723 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 17:45:25.921561  380723 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 17:45:25.921622  380723 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
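The block above is the stale-kubeconfig cleanup pattern: for each of the four kubeconfig files, grep for the expected control-plane URL and remove the file when the URL is absent (here every grep exits 2 simply because the files do not exist yet on a fresh node). A bare-bones local sketch of that loop, using os/exec instead of minikube's ssh_runner, could look like this:

```go
// Sketch of the check-and-remove loop seen above. Runs locally with os/exec
// rather than over SSH, so it is illustrative only.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const target = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the pattern (or the file itself) is missing.
		if err := exec.Command("sudo", "grep", target, f).Run(); err != nil {
			fmt.Printf("%q not found in %s - removing\n", target, f)
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}
```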
	I0819 17:45:25.930854  380723 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 17:45:25.979274  380723 kubeadm.go:310] W0819 17:45:25.962183     839 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 17:45:25.979964  380723 kubeadm.go:310] W0819 17:45:25.962997     839 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 17:45:26.084588  380723 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 17:45:35.554082  380723 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 17:45:35.554153  380723 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 17:45:35.554220  380723 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 17:45:35.554378  380723 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 17:45:35.554535  380723 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 17:45:35.554613  380723 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 17:45:35.556110  380723 out.go:235]   - Generating certificates and keys ...
	I0819 17:45:35.556179  380723 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 17:45:35.556239  380723 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 17:45:35.556302  380723 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 17:45:35.556390  380723 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 17:45:35.556443  380723 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 17:45:35.556485  380723 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 17:45:35.556544  380723 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 17:45:35.556678  380723 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-347256 localhost] and IPs [192.168.39.18 127.0.0.1 ::1]
	I0819 17:45:35.556749  380723 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 17:45:35.556901  380723 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-347256 localhost] and IPs [192.168.39.18 127.0.0.1 ::1]
	I0819 17:45:35.556981  380723 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 17:45:35.557052  380723 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 17:45:35.557098  380723 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 17:45:35.557150  380723 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 17:45:35.557214  380723 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 17:45:35.557305  380723 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 17:45:35.557380  380723 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 17:45:35.557465  380723 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 17:45:35.557539  380723 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 17:45:35.557636  380723 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 17:45:35.557723  380723 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 17:45:35.559211  380723 out.go:235]   - Booting up control plane ...
	I0819 17:45:35.559286  380723 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 17:45:35.559345  380723 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 17:45:35.559400  380723 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 17:45:35.559479  380723 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 17:45:35.559591  380723 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 17:45:35.559654  380723 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 17:45:35.559820  380723 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 17:45:35.559942  380723 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 17:45:35.560037  380723 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.131627ms
	I0819 17:45:35.560127  380723 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 17:45:35.560201  380723 kubeadm.go:310] [api-check] The API server is healthy after 5.002168832s
	I0819 17:45:35.560313  380723 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 17:45:35.560426  380723 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 17:45:35.560490  380723 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 17:45:35.560694  380723 kubeadm.go:310] [mark-control-plane] Marking the node addons-347256 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 17:45:35.560743  380723 kubeadm.go:310] [bootstrap-token] Using token: 02k7t2.hl4r0htmlbvvfk0d
	I0819 17:45:35.562138  380723 out.go:235]   - Configuring RBAC rules ...
	I0819 17:45:35.562238  380723 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 17:45:35.562306  380723 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 17:45:35.562440  380723 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 17:45:35.562550  380723 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 17:45:35.562658  380723 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 17:45:35.562733  380723 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 17:45:35.562829  380723 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 17:45:35.562869  380723 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 17:45:35.562908  380723 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 17:45:35.562917  380723 kubeadm.go:310] 
	I0819 17:45:35.562969  380723 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 17:45:35.562975  380723 kubeadm.go:310] 
	I0819 17:45:35.563047  380723 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 17:45:35.563055  380723 kubeadm.go:310] 
	I0819 17:45:35.563078  380723 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 17:45:35.563150  380723 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 17:45:35.563203  380723 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 17:45:35.563210  380723 kubeadm.go:310] 
	I0819 17:45:35.563262  380723 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 17:45:35.563268  380723 kubeadm.go:310] 
	I0819 17:45:35.563327  380723 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 17:45:35.563337  380723 kubeadm.go:310] 
	I0819 17:45:35.563390  380723 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 17:45:35.563457  380723 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 17:45:35.563524  380723 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 17:45:35.563538  380723 kubeadm.go:310] 
	I0819 17:45:35.563639  380723 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 17:45:35.563744  380723 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 17:45:35.563753  380723 kubeadm.go:310] 
	I0819 17:45:35.563828  380723 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 02k7t2.hl4r0htmlbvvfk0d \
	I0819 17:45:35.563967  380723 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3fcbd90565c5acbc36a47b2db682cb22dce9b172c9bf3af21e506ebb67608039 \
	I0819 17:45:35.563998  380723 kubeadm.go:310] 	--control-plane 
	I0819 17:45:35.564011  380723 kubeadm.go:310] 
	I0819 17:45:35.564117  380723 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 17:45:35.564129  380723 kubeadm.go:310] 
	I0819 17:45:35.564239  380723 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 02k7t2.hl4r0htmlbvvfk0d \
	I0819 17:45:35.564383  380723 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3fcbd90565c5acbc36a47b2db682cb22dce9b172c9bf3af21e506ebb67608039 
	I0819 17:45:35.564398  380723 cni.go:84] Creating CNI manager for ""
	I0819 17:45:35.564405  380723 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 17:45:35.565906  380723 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 17:45:35.567045  380723 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 17:45:35.581957  380723 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
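After recommending the bridge CNI, minikube copies a 496-byte 1-k8s.conflist into /etc/cni/net.d; the log does not print the payload itself. For orientation only, a generic bridge conflist of that shape, written out the way the step above does, might look roughly like the sketch below. The subnet and plugin options are assumptions, not the file minikube actually shipped in this run.

```go
// Illustrative only: writes a generic bridge CNI conflist similar in shape to
// the 1-k8s.conflist installed above. The JSON body is assumed, since the log
// does not show the actual 496-byte file. Needs root to write to /etc/cni.
package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
```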
	I0819 17:45:35.600228  380723 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 17:45:35.600321  380723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:45:35.600370  380723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-347256 minikube.k8s.io/updated_at=2024_08_19T17_45_35_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411 minikube.k8s.io/name=addons-347256 minikube.k8s.io/primary=true
	I0819 17:45:35.757365  380723 ops.go:34] apiserver oom_adj: -16
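The /proc read above is how the apiserver's OOM score adjustment is confirmed (ops.go reports -16). A minimal local equivalent of that command, assuming pgrep finds exactly one kube-apiserver process, is sketched here:

```go
// Sketch: read the apiserver's oom_adj the same way the command above does.
// Assumes a single kube-apiserver process is running on the node.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	pid := strings.TrimSpace(string(out))
	data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(data)))
}
```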
	I0819 17:45:35.757451  380723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:45:36.258226  380723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:45:36.757575  380723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:45:37.257560  380723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:45:37.758488  380723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:45:38.257909  380723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:45:38.758330  380723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:45:39.258278  380723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:45:39.758389  380723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:45:39.849761  380723 kubeadm.go:1113] duration metric: took 4.249515717s to wait for elevateKubeSystemPrivileges
	I0819 17:45:39.849812  380723 kubeadm.go:394] duration metric: took 14.058337596s to StartCluster
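The repeated `kubectl get sa default` lines above are minikube polling at roughly 500ms intervals until the default ServiceAccount exists, which is what the 4.25s elevateKubeSystemPrivileges metric measures before StartCluster finishes. A stripped-down version of that wait loop might look like the following; the kubectl and kubeconfig paths mirror the log, while the deadline value is an assumption.

```go
// Sketch of the ~500ms polling loop visible above: keep running
// `kubectl get sa default` until it succeeds or a deadline passes.
// Binary and kubeconfig paths follow the log; the deadline is assumed.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.31.0/kubectl"
	kubeconfig := "/var/lib/minikube/kubeconfig"
	deadline := time.Now().Add(2 * time.Minute)

	for time.Now().Before(deadline) {
		err := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig="+kubeconfig).Run()
		if err == nil {
			fmt.Println("default ServiceAccount is present")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default ServiceAccount")
}
```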
	I0819 17:45:39.849843  380723 settings.go:142] acquiring lock: {Name:mk396fcf49a1d0e69583cf37ff3c819e37118163 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:45:39.850019  380723 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 17:45:39.850726  380723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/kubeconfig: {Name:mk8e7b4e1bb7da665111d2acd83eb48882c66853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:45:39.850943  380723 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 17:45:39.850995  380723 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 17:45:39.851061  380723 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0819 17:45:39.851182  380723 addons.go:69] Setting yakd=true in profile "addons-347256"
	I0819 17:45:39.851235  380723 addons.go:234] Setting addon yakd=true in "addons-347256"
	I0819 17:45:39.851259  380723 config.go:182] Loaded profile config "addons-347256": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:45:39.851268  380723 addons.go:69] Setting inspektor-gadget=true in profile "addons-347256"
	I0819 17:45:39.851287  380723 addons.go:69] Setting metrics-server=true in profile "addons-347256"
	I0819 17:45:39.851286  380723 addons.go:69] Setting gcp-auth=true in profile "addons-347256"
	I0819 17:45:39.851314  380723 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-347256"
	I0819 17:45:39.851321  380723 addons.go:69] Setting ingress=true in profile "addons-347256"
	I0819 17:45:39.851323  380723 addons.go:69] Setting volcano=true in profile "addons-347256"
	I0819 17:45:39.851338  380723 addons.go:234] Setting addon ingress=true in "addons-347256"
	I0819 17:45:39.851341  380723 addons.go:234] Setting addon volcano=true in "addons-347256"
	I0819 17:45:39.851330  380723 addons.go:69] Setting storage-provisioner=true in profile "addons-347256"
	I0819 17:45:39.851363  380723 host.go:66] Checking if "addons-347256" exists ...
	I0819 17:45:39.851363  380723 addons.go:69] Setting cloud-spanner=true in profile "addons-347256"
	I0819 17:45:39.851373  380723 addons.go:234] Setting addon storage-provisioner=true in "addons-347256"
	I0819 17:45:39.851377  380723 addons.go:69] Setting volumesnapshots=true in profile "addons-347256"
	I0819 17:45:39.851385  380723 host.go:66] Checking if "addons-347256" exists ...
	I0819 17:45:39.851391  380723 addons.go:234] Setting addon cloud-spanner=true in "addons-347256"
	I0819 17:45:39.851396  380723 addons.go:234] Setting addon volumesnapshots=true in "addons-347256"
	I0819 17:45:39.851406  380723 host.go:66] Checking if "addons-347256" exists ...
	I0819 17:45:39.851418  380723 host.go:66] Checking if "addons-347256" exists ...
	I0819 17:45:39.851428  380723 host.go:66] Checking if "addons-347256" exists ...
	I0819 17:45:39.851443  380723 addons.go:69] Setting ingress-dns=true in profile "addons-347256"
	I0819 17:45:39.851459  380723 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-347256"
	I0819 17:45:39.851476  380723 addons.go:234] Setting addon ingress-dns=true in "addons-347256"
	I0819 17:45:39.851494  380723 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-347256"
	I0819 17:45:39.851496  380723 addons.go:69] Setting registry=true in profile "addons-347256"
	I0819 17:45:39.851509  380723 host.go:66] Checking if "addons-347256" exists ...
	I0819 17:45:39.851516  380723 addons.go:234] Setting addon registry=true in "addons-347256"
	I0819 17:45:39.851520  380723 host.go:66] Checking if "addons-347256" exists ...
	I0819 17:45:39.851538  380723 host.go:66] Checking if "addons-347256" exists ...
	I0819 17:45:39.851364  380723 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-347256"
	I0819 17:45:39.851889  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.851891  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.851899  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.851907  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.851907  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.851341  380723 mustload.go:65] Loading cluster: addons-347256
	I0819 17:45:39.851918  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.851305  380723 addons.go:234] Setting addon metrics-server=true in "addons-347256"
	I0819 17:45:39.851896  380723 host.go:66] Checking if "addons-347256" exists ...
	I0819 17:45:39.851937  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.851949  380723 host.go:66] Checking if "addons-347256" exists ...
	I0819 17:45:39.851349  380723 addons.go:69] Setting default-storageclass=true in profile "addons-347256"
	I0819 17:45:39.851962  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.851983  380723 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-347256"
	I0819 17:45:39.852019  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.852029  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.852052  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.852057  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.852070  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.852072  380723 config.go:182] Loaded profile config "addons-347256": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:45:39.851888  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.852177  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.851314  380723 addons.go:69] Setting helm-tiller=true in profile "addons-347256"
	I0819 17:45:39.851278  380723 host.go:66] Checking if "addons-347256" exists ...
	I0819 17:45:39.852237  380723 addons.go:234] Setting addon helm-tiller=true in "addons-347256"
	I0819 17:45:39.851307  380723 addons.go:234] Setting addon inspektor-gadget=true in "addons-347256"
	I0819 17:45:39.851314  380723 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-347256"
	I0819 17:45:39.852356  380723 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-347256"
	I0819 17:45:39.852360  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.852381  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.852387  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.852400  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.852407  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.852421  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.852484  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.852495  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.852542  380723 host.go:66] Checking if "addons-347256" exists ...
	I0819 17:45:39.852547  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.851913  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.852566  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.852782  380723 out.go:177] * Verifying Kubernetes components...
	I0819 17:45:39.853019  380723 host.go:66] Checking if "addons-347256" exists ...
	I0819 17:45:39.853387  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.853429  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.868465  380723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 17:45:39.872955  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39907
	I0819 17:45:39.873131  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43693
	I0819 17:45:39.873222  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33873
	I0819 17:45:39.873419  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44057
	I0819 17:45:39.873709  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.873817  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.873869  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.873910  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.874431  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.874455  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.874561  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.874564  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.874576  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.874583  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.874582  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.874569  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.874943  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.874985  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.874997  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.875503  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.875542  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.875757  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.876434  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.876471  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.880110  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.880123  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.880139  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.880156  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.884494  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.884524  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.885117  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.885143  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.889026  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40431
	I0819 17:45:39.889217  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41289
	I0819 17:45:39.889316  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45711
	I0819 17:45:39.889712  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.889819  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.890047  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.890370  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.890389  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.890953  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.890970  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.891024  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.891736  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.891780  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.892135  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.892752  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.892781  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.893313  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.893333  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.893697  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.894275  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.894312  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.902176  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35607
	I0819 17:45:39.905054  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.905726  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.905749  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.906324  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.906537  380723 main.go:141] libmachine: (addons-347256) Calling .GetState
	I0819 17:45:39.907569  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43753
	I0819 17:45:39.909630  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.910233  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.910252  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.910710  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.911329  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.911379  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.911629  380723 main.go:141] libmachine: (addons-347256) Calling .DriverName
	I0819 17:45:39.912137  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46673
	I0819 17:45:39.912812  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.913142  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41051
	I0819 17:45:39.913403  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.913419  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.913534  380723 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0819 17:45:39.913764  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.913877  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.914070  380723 main.go:141] libmachine: (addons-347256) Calling .GetState
	I0819 17:45:39.914737  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.914756  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.914857  380723 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0819 17:45:39.914891  380723 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0819 17:45:39.914912  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:39.915733  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.916166  380723 main.go:141] libmachine: (addons-347256) Calling .GetState
	I0819 17:45:39.919248  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:39.919709  380723 main.go:141] libmachine: (addons-347256) Calling .DriverName
	I0819 17:45:39.919830  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:39.919866  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:39.920129  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:39.920325  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:39.920397  380723 addons.go:234] Setting addon default-storageclass=true in "addons-347256"
	I0819 17:45:39.920444  380723 host.go:66] Checking if "addons-347256" exists ...
	I0819 17:45:39.920452  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:39.920548  380723 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/id_rsa Username:docker}
	I0819 17:45:39.920838  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.920859  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.921251  380723 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0819 17:45:39.922614  380723 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0819 17:45:39.922638  380723 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0819 17:45:39.922657  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:39.923257  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32985
	I0819 17:45:39.923931  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32947
	I0819 17:45:39.924521  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.925068  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.925091  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.925440  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.925629  380723 main.go:141] libmachine: (addons-347256) Calling .GetState
	I0819 17:45:39.926093  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:39.926882  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:39.926905  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:39.927416  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:39.927607  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:39.927781  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:39.927922  380723 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/id_rsa Username:docker}
	I0819 17:45:39.928203  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.928335  380723 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-347256"
	I0819 17:45:39.928374  380723 host.go:66] Checking if "addons-347256" exists ...
	I0819 17:45:39.928677  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.928692  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.928736  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.928781  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.929091  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.929613  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.929657  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.933497  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41927
	I0819 17:45:39.934174  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.934747  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.934774  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.935212  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.935409  380723 main.go:141] libmachine: (addons-347256) Calling .GetState
	I0819 17:45:39.936450  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43597
	I0819 17:45:39.936796  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45409
	I0819 17:45:39.937443  380723 main.go:141] libmachine: (addons-347256) Calling .DriverName
	I0819 17:45:39.937446  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.937944  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.937961  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.938372  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.938589  380723 main.go:141] libmachine: (addons-347256) Calling .GetState
	I0819 17:45:39.939305  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.939775  380723 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0819 17:45:39.939986  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.940002  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.940528  380723 main.go:141] libmachine: (addons-347256) Calling .DriverName
	I0819 17:45:39.940706  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.941103  380723 main.go:141] libmachine: (addons-347256) Calling .GetState
	I0819 17:45:39.941822  380723 out.go:177]   - Using image docker.io/registry:2.8.3
	I0819 17:45:39.941822  380723 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 17:45:39.942884  380723 host.go:66] Checking if "addons-347256" exists ...
	I0819 17:45:39.943281  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.943318  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.943521  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32801
	I0819 17:45:39.944035  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.944692  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.944719  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.945059  380723 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0819 17:45:39.945085  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.945110  380723 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 17:45:39.945595  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.945645  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.946547  380723 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0819 17:45:39.946566  380723 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0819 17:45:39.946586  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:39.946648  380723 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0819 17:45:39.946659  380723 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0819 17:45:39.946672  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:39.950458  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:39.952481  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:39.953001  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:39.953110  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:39.953305  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:39.953504  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:39.953573  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:39.953586  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:39.953611  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:39.953708  380723 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/id_rsa Username:docker}
	I0819 17:45:39.954056  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:39.954244  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:39.954369  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:39.954489  380723 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/id_rsa Username:docker}
	I0819 17:45:39.959581  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36901
	I0819 17:45:39.960341  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37721
	I0819 17:45:39.960837  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.961427  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.961448  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.961886  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.962469  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.962498  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.962691  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45431
	I0819 17:45:39.962864  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.963240  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.963818  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.963833  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.963899  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45435
	I0819 17:45:39.964161  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40119
	I0819 17:45:39.964594  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.964754  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.964767  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.964828  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36989
	I0819 17:45:39.964961  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.965163  380723 main.go:141] libmachine: (addons-347256) Calling .GetState
	I0819 17:45:39.965235  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.965327  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44383
	I0819 17:45:39.965549  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.965577  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.965603  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.965670  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.965721  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.965757  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.966117  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.966135  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.966212  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.966235  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.966581  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.966690  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.966710  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.966770  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.966812  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.966773  380723 main.go:141] libmachine: (addons-347256) Calling .GetState
	I0819 17:45:39.967090  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.967149  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.967860  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.967902  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.968126  380723 main.go:141] libmachine: (addons-347256) Calling .DriverName
	I0819 17:45:39.968147  380723 main.go:141] libmachine: (addons-347256) Calling .GetState
	I0819 17:45:39.968276  380723 main.go:141] libmachine: (addons-347256) Calling .DriverName
	I0819 17:45:39.969729  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42117
	I0819 17:45:39.969757  380723 main.go:141] libmachine: (addons-347256) Calling .DriverName
	I0819 17:45:39.970087  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.970578  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.970604  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.970945  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.971078  380723 main.go:141] libmachine: (addons-347256) Calling .DriverName
	I0819 17:45:39.971479  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.971978  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.972189  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45299
	I0819 17:45:39.972241  380723 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0819 17:45:39.972594  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.973134  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.973158  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.973244  380723 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0819 17:45:39.973274  380723 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0819 17:45:39.973503  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.974030  380723 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 17:45:39.974058  380723 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 17:45:39.974082  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:39.974086  380723 main.go:141] libmachine: (addons-347256) Calling .GetState
	I0819 17:45:39.974893  380723 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0819 17:45:39.974912  380723 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0819 17:45:39.974930  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:39.976282  380723 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0819 17:45:39.976391  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46183
	I0819 17:45:39.976851  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.977462  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.977479  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.977551  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42651
	I0819 17:45:39.977886  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.978551  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.978592  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.978660  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:39.978753  380723 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0819 17:45:39.978864  380723 main.go:141] libmachine: (addons-347256) Calling .DriverName
	I0819 17:45:39.978936  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:39.978952  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:39.978995  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.979088  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:39.979317  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:39.979443  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.979610  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.979764  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:39.980016  380723 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/id_rsa Username:docker}
	I0819 17:45:39.980120  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.980747  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:39.980833  380723 main.go:141] libmachine: (addons-347256) Calling .GetState
	I0819 17:45:39.980849  380723 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0819 17:45:39.981294  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:39.981315  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:39.981493  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:39.981673  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:39.981715  380723 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0819 17:45:39.981849  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:39.982219  380723 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/id_rsa Username:docker}
	I0819 17:45:39.982355  380723 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0819 17:45:39.982371  380723 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0819 17:45:39.982389  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:39.984121  380723 main.go:141] libmachine: (addons-347256) Calling .DriverName
	I0819 17:45:39.984392  380723 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0819 17:45:39.985630  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:39.985739  380723 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0819 17:45:39.986003  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:39.986035  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:39.986211  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:39.986384  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:39.986532  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:39.986652  380723 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/id_rsa Username:docker}
	I0819 17:45:39.986774  380723 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0819 17:45:39.986779  380723 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0819 17:45:39.986793  380723 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0819 17:45:39.986811  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:39.987978  380723 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0819 17:45:39.989093  380723 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0819 17:45:39.989545  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:39.989944  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:39.989964  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:39.990125  380723 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0819 17:45:39.990136  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:39.990142  380723 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0819 17:45:39.990157  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:39.990792  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:39.990985  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:39.991156  380723 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/id_rsa Username:docker}
	I0819 17:45:39.992033  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46249
	I0819 17:45:39.992365  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.992804  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.992820  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.993418  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:39.993667  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.993732  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:39.993746  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:39.993913  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:39.994105  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:39.994161  380723 main.go:141] libmachine: (addons-347256) Calling .GetState
	I0819 17:45:39.994202  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:39.994293  380723 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/id_rsa Username:docker}
	I0819 17:45:39.995574  380723 main.go:141] libmachine: (addons-347256) Calling .DriverName
	I0819 17:45:39.999725  380723 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0819 17:45:39.999759  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35859
	I0819 17:45:40.000273  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:40.000749  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:40.000769  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:40.001185  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:40.001318  380723 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0819 17:45:40.001332  380723 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0819 17:45:40.001348  380723 main.go:141] libmachine: (addons-347256) Calling .GetState
	I0819 17:45:40.001352  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:40.003498  380723 main.go:141] libmachine: (addons-347256) Calling .DriverName
	I0819 17:45:40.004460  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38705
	I0819 17:45:40.005131  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35121
	I0819 17:45:40.005217  380723 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 17:45:40.005299  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:40.005637  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:40.005700  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:40.005717  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:40.005842  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:40.006041  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:40.006175  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:40.006191  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:40.006200  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:40.006543  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:40.006560  380723 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 17:45:40.006575  380723 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 17:45:40.006592  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:40.006593  380723 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/id_rsa Username:docker}
	I0819 17:45:40.006547  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:40.006879  380723 main.go:141] libmachine: (addons-347256) Calling .GetState
	I0819 17:45:40.007081  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:40.007266  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:40.008163  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:40.008353  380723 main.go:141] libmachine: (addons-347256) Calling .GetState
	I0819 17:45:40.009168  380723 main.go:141] libmachine: (addons-347256) Calling .DriverName
	I0819 17:45:40.009981  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:40.010608  380723 main.go:141] libmachine: (addons-347256) Calling .DriverName
	I0819 17:45:40.010686  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:40.010703  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:40.010847  380723 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0819 17:45:40.010912  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:40.010925  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:40.010965  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:40.011912  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43141
	I0819 17:45:40.011917  380723 main.go:141] libmachine: (addons-347256) DBG | Closing plugin on server side
	I0819 17:45:40.011947  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:40.011953  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:40.011962  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:40.011971  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:40.012014  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:40.012110  380723 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0819 17:45:40.012125  380723 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0819 17:45:40.012144  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:40.012187  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:40.012198  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	W0819 17:45:40.012305  380723 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0819 17:45:40.013040  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:40.013068  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:40.013293  380723 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/id_rsa Username:docker}
	I0819 17:45:40.013658  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:40.013676  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:40.014000  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:40.014197  380723 main.go:141] libmachine: (addons-347256) Calling .GetState
	I0819 17:45:40.015527  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:40.015997  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:40.016035  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:40.016119  380723 main.go:141] libmachine: (addons-347256) Calling .DriverName
	I0819 17:45:40.016500  380723 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 17:45:40.016509  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:40.016520  380723 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 17:45:40.016540  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:40.016688  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:40.016839  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:40.016970  380723 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/id_rsa Username:docker}
	I0819 17:45:40.019485  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:40.019989  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:40.020016  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:40.020188  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:40.020377  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:40.020490  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:40.020592  380723 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/id_rsa Username:docker}
	I0819 17:45:40.027726  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34013
	I0819 17:45:40.028177  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:40.028568  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:40.028581  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:40.028933  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:40.029072  380723 main.go:141] libmachine: (addons-347256) Calling .GetState
	I0819 17:45:40.030568  380723 main.go:141] libmachine: (addons-347256) Calling .DriverName
	I0819 17:45:40.032286  380723 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0819 17:45:40.033646  380723 out.go:177]   - Using image docker.io/busybox:stable
	I0819 17:45:40.034899  380723 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0819 17:45:40.034916  380723 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0819 17:45:40.034933  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:40.037472  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:40.037796  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:40.037822  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:40.037987  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:40.038159  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:40.038289  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:40.038428  380723 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/id_rsa Username:docker}
	I0819 17:45:40.371649  380723 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 17:45:40.371721  380723 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0819 17:45:40.396603  380723 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0819 17:45:40.396632  380723 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0819 17:45:40.397254  380723 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0819 17:45:40.397274  380723 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0819 17:45:40.466741  380723 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0819 17:45:40.466773  380723 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0819 17:45:40.488284  380723 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0819 17:45:40.488320  380723 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0819 17:45:40.500237  380723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0819 17:45:40.502036  380723 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0819 17:45:40.502068  380723 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0819 17:45:40.531483  380723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0819 17:45:40.558724  380723 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 17:45:40.558747  380723 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0819 17:45:40.560447  380723 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0819 17:45:40.560465  380723 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0819 17:45:40.563990  380723 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0819 17:45:40.564007  380723 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0819 17:45:40.565471  380723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0819 17:45:40.569573  380723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0819 17:45:40.601794  380723 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0819 17:45:40.601825  380723 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0819 17:45:40.603693  380723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 17:45:40.619428  380723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0819 17:45:40.653004  380723 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0819 17:45:40.653035  380723 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0819 17:45:40.654545  380723 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0819 17:45:40.654562  380723 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0819 17:45:40.668442  380723 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0819 17:45:40.668473  380723 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0819 17:45:40.682845  380723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 17:45:40.729278  380723 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0819 17:45:40.729310  380723 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0819 17:45:40.737396  380723 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 17:45:40.737422  380723 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 17:45:40.738692  380723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0819 17:45:40.798959  380723 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0819 17:45:40.798991  380723 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0819 17:45:40.812153  380723 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0819 17:45:40.812185  380723 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0819 17:45:40.840237  380723 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0819 17:45:40.840264  380723 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0819 17:45:40.938995  380723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0819 17:45:40.969479  380723 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0819 17:45:40.969502  380723 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0819 17:45:40.989420  380723 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 17:45:40.989454  380723 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 17:45:41.026265  380723 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0819 17:45:41.026301  380723 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0819 17:45:41.084753  380723 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0819 17:45:41.084780  380723 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0819 17:45:41.088560  380723 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0819 17:45:41.088580  380723 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0819 17:45:41.092243  380723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0819 17:45:41.118957  380723 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0819 17:45:41.118990  380723 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0819 17:45:41.159935  380723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 17:45:41.253316  380723 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0819 17:45:41.253347  380723 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0819 17:45:41.262531  380723 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0819 17:45:41.262559  380723 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0819 17:45:41.283884  380723 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 17:45:41.283906  380723 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0819 17:45:41.503618  380723 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0819 17:45:41.503651  380723 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0819 17:45:41.594582  380723 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0819 17:45:41.594631  380723 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0819 17:45:41.602212  380723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 17:45:41.760294  380723 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0819 17:45:41.760317  380723 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0819 17:45:41.833334  380723 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0819 17:45:41.833368  380723 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0819 17:45:42.042869  380723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0819 17:45:42.120663  380723 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0819 17:45:42.120709  380723 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0819 17:45:42.465964  380723 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0819 17:45:42.465992  380723 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0819 17:45:42.794515  380723 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.422757477s)
	I0819 17:45:42.794561  380723 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0819 17:45:42.794534  380723 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.422853317s)
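The sed pipeline that completed above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host-side gateway address (192.168.39.1 in this run) and adds the log directive. Assuming the stock kubeadm Corefile, the edited block comes out roughly as follows (unrelated directives elided); this is an illustration derived from the sed expression in the log, not a dump of the actual ConfigMap:

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }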
	I0819 17:45:42.795353  380723 node_ready.go:35] waiting up to 6m0s for node "addons-347256" to be "Ready" ...
	I0819 17:45:42.803483  380723 node_ready.go:49] node "addons-347256" has status "Ready":"True"
	I0819 17:45:42.803514  380723 node_ready.go:38] duration metric: took 8.11951ms for node "addons-347256" to be "Ready" ...
	I0819 17:45:42.803529  380723 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 17:45:42.833996  380723 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-77256" in "kube-system" namespace to be "Ready" ...
	I0819 17:45:42.853452  380723 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0819 17:45:42.853482  380723 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0819 17:45:43.311446  380723 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-347256" context rescaled to 1 replicas
	I0819 17:45:43.320040  380723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0819 17:45:44.880986  380723 pod_ready.go:103] pod "coredns-6f6b679f8f-77256" in "kube-system" namespace has status "Ready":"False"
	I0819 17:45:46.985038  380723 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0819 17:45:46.985086  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:46.988084  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:46.988625  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:46.988658  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:46.988864  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:46.989113  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:46.989319  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:46.989515  380723 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/id_rsa Username:docker}
	I0819 17:45:47.301963  380723 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0819 17:45:47.394293  380723 addons.go:234] Setting addon gcp-auth=true in "addons-347256"
	I0819 17:45:47.394362  380723 host.go:66] Checking if "addons-347256" exists ...
	I0819 17:45:47.394793  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:47.394830  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:47.411543  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41651
	I0819 17:45:47.412021  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:47.412617  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:47.412646  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:47.412998  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:47.413486  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:47.413512  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:47.427830  380723 pod_ready.go:103] pod "coredns-6f6b679f8f-77256" in "kube-system" namespace has status "Ready":"False"
	I0819 17:45:47.429244  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45003
	I0819 17:45:47.429805  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:47.430451  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:47.430480  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:47.430836  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:47.431043  380723 main.go:141] libmachine: (addons-347256) Calling .GetState
	I0819 17:45:47.432699  380723 main.go:141] libmachine: (addons-347256) Calling .DriverName
	I0819 17:45:47.432977  380723 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0819 17:45:47.433001  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:47.436157  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:47.436568  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:47.436601  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:47.436821  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:47.437039  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:47.437285  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:47.437461  380723 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/id_rsa Username:docker}
	I0819 17:45:47.656742  380723 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.156468649s)
	I0819 17:45:47.656796  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:47.656813  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:47.656861  380723 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.125342383s)
	I0819 17:45:47.656917  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:47.656930  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:47.656941  380723 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.091435532s)
	I0819 17:45:47.656965  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:47.656976  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:47.656979  380723 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.087378027s)
	I0819 17:45:47.657037  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:47.657049  380723 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.053335955s)
	I0819 17:45:47.657074  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:47.657085  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:47.657124  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:47.657141  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:47.657154  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:47.657164  380723 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.974295861s)
	I0819 17:45:47.657140  380723 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.03768664s)
	I0819 17:45:47.657193  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:47.657199  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:47.657205  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:47.657211  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:47.657242  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:47.657258  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:47.657268  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:47.657278  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:47.657054  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:47.657285  380723 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (6.918569289s)
	I0819 17:45:47.657306  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:47.657316  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:47.657169  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:47.657401  380723 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.718378622s)
	I0819 17:45:47.657417  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:47.657424  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:47.657640  380723 main.go:141] libmachine: (addons-347256) DBG | Closing plugin on server side
	I0819 17:45:47.657677  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:47.657684  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:47.657691  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:47.657697  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:47.657881  380723 main.go:141] libmachine: (addons-347256) DBG | Closing plugin on server side
	I0819 17:45:47.657918  380723 main.go:141] libmachine: (addons-347256) DBG | Closing plugin on server side
	I0819 17:45:47.657933  380723 main.go:141] libmachine: (addons-347256) DBG | Closing plugin on server side
	I0819 17:45:47.657952  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:47.657958  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:47.657966  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:47.657983  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:47.658033  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:47.658041  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:47.658300  380723 main.go:141] libmachine: (addons-347256) DBG | Closing plugin on server side
	I0819 17:45:47.658333  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:47.658341  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:47.658539  380723 main.go:141] libmachine: (addons-347256) DBG | Closing plugin on server side
	I0819 17:45:47.658560  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:47.658566  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:47.658574  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:47.658581  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:47.659359  380723 main.go:141] libmachine: (addons-347256) DBG | Closing plugin on server side
	I0819 17:45:47.659411  380723 main.go:141] libmachine: (addons-347256) DBG | Closing plugin on server side
	I0819 17:45:47.659428  380723 main.go:141] libmachine: (addons-347256) DBG | Closing plugin on server side
	I0819 17:45:47.659450  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:47.659458  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:47.659656  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:47.659667  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:47.659691  380723 addons.go:475] Verifying addon ingress=true in "addons-347256"
	I0819 17:45:47.659839  380723 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.567571493s)
	I0819 17:45:47.659867  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:47.659879  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:47.659981  380723 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.500015623s)
	I0819 17:45:47.659995  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:47.660004  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:47.660217  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:47.660264  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:47.660363  380723 main.go:141] libmachine: (addons-347256) DBG | Closing plugin on server side
	I0819 17:45:47.660400  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:47.660414  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:47.660422  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:47.660429  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:47.660483  380723 main.go:141] libmachine: (addons-347256) DBG | Closing plugin on server side
	I0819 17:45:47.660504  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:47.660511  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:47.660522  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:47.660530  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:47.660631  380723 main.go:141] libmachine: (addons-347256) DBG | Closing plugin on server side
	I0819 17:45:47.660676  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:47.660687  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:47.660696  380723 addons.go:475] Verifying addon metrics-server=true in "addons-347256"
	I0819 17:45:47.659764  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:47.660878  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:47.660890  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:47.660899  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:47.661224  380723 main.go:141] libmachine: (addons-347256) DBG | Closing plugin on server side
	I0819 17:45:47.661248  380723 main.go:141] libmachine: (addons-347256) DBG | Closing plugin on server side
	I0819 17:45:47.661277  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:47.661284  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:47.661292  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:47.661312  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:47.661382  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:47.661391  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:47.661398  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:47.661405  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:47.661450  380723 main.go:141] libmachine: (addons-347256) DBG | Closing plugin on server side
	I0819 17:45:47.661473  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:47.661480  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:47.661487  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:47.661493  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:47.661739  380723 main.go:141] libmachine: (addons-347256) DBG | Closing plugin on server side
	I0819 17:45:47.661772  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:47.661779  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:47.662068  380723 main.go:141] libmachine: (addons-347256) DBG | Closing plugin on server side
	I0819 17:45:47.662103  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:47.662114  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:47.662122  380723 addons.go:475] Verifying addon registry=true in "addons-347256"
	I0819 17:45:47.662821  380723 main.go:141] libmachine: (addons-347256) DBG | Closing plugin on server side
	I0819 17:45:47.662877  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:47.662899  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:47.662942  380723 main.go:141] libmachine: (addons-347256) DBG | Closing plugin on server side
	I0819 17:45:47.662986  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:47.662993  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:47.663262  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:47.663293  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:47.664263  380723 out.go:177] * Verifying ingress addon...
	I0819 17:45:47.664639  380723 out.go:177] * Verifying registry addon...
	I0819 17:45:47.664710  380723 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-347256 service yakd-dashboard -n yakd-dashboard
	
	I0819 17:45:47.666530  380723 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0819 17:45:47.667096  380723 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0819 17:45:47.682207  380723 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0819 17:45:47.682235  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:45:47.682342  380723 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0819 17:45:47.682359  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:45:47.707450  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:47.707475  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:47.707769  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:47.707792  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	W0819 17:45:47.707895  380723 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
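The "object has been modified" error above is the API server's optimistic-concurrency conflict: the storage-provisioner-rancher callback tried to mark the local-path StorageClass as default using a copy that had gone stale because another client updated the object between the read and the write. A minimal client-go sketch of the usual mitigation, re-reading the object inside retry.RetryOnConflict, is shown below; the function name, clientset variable, and annotation handling are illustrative assumptions, not minikube's actual code:

    package example

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/util/retry"
    )

    // markDefault sets the default-class annotation on a StorageClass, re-reading
    // the latest object on every attempt so a concurrent update cannot leave this
    // write stale; a Conflict error returned by Update triggers another retry.
    func markDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            if sc.Annotations == nil {
                sc.Annotations = map[string]string{}
            }
            sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
            _, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
            return err
        })
    }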
	I0819 17:45:47.719955  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:47.719986  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:47.720314  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:47.720338  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:48.199440  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:45:48.199617  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:45:48.508355  380723 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.906084869s)
	W0819 17:45:48.508413  380723 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0819 17:45:48.508418  380723 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.465499848s)
	I0819 17:45:48.508465  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:48.508481  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:48.508477  380723 retry.go:31] will retry after 209.756832ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
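
	Note: the "ensure CRDs are installed first" failure above is the usual symptom of applying a VolumeSnapshotClass in the same kubectl apply batch as the CRDs that define it, before the CRDs are registered; the addon manager retries, and the --force re-apply launched at 17:45:48.719273 completes cleanly about two seconds later. A minimal sketch of an ordering that avoids the race, assuming the same manifest paths inside the node:

	  # Sketch only: install the snapshot CRDs first, wait for them to be
	  # established, then apply the class that references them.
	  kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	                -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	                -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	  kubectl wait --for condition=established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
	  kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
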
	I0819 17:45:48.508776  380723 main.go:141] libmachine: (addons-347256) DBG | Closing plugin on server side
	I0819 17:45:48.508832  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:48.508842  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:48.508858  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:48.508870  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:48.509113  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:48.509131  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:48.699192  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:45:48.700051  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:45:48.719273  380723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 17:45:49.171415  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:45:49.171614  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:45:49.671515  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:45:49.671797  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:45:49.861006  380723 pod_ready.go:103] pod "coredns-6f6b679f8f-77256" in "kube-system" namespace has status "Ready":"False"
	I0819 17:45:50.141545  380723 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.821438478s)
	I0819 17:45:50.141559  380723 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.708555525s)
	I0819 17:45:50.141605  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:50.141621  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:50.141915  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:50.141973  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:50.141988  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:50.142001  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:50.142004  380723 main.go:141] libmachine: (addons-347256) DBG | Closing plugin on server side
	I0819 17:45:50.142353  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:50.142374  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:50.142386  380723 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-347256"
	I0819 17:45:50.142391  380723 main.go:141] libmachine: (addons-347256) DBG | Closing plugin on server side
	I0819 17:45:50.143335  380723 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 17:45:50.144127  380723 out.go:177] * Verifying csi-hostpath-driver addon...
	I0819 17:45:50.145589  380723 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0819 17:45:50.146391  380723 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0819 17:45:50.146960  380723 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0819 17:45:50.146976  380723 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0819 17:45:50.160892  380723 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0819 17:45:50.160916  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:45:50.190635  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:45:50.191469  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:45:50.260757  380723 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0819 17:45:50.260790  380723 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0819 17:45:50.308446  380723 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0819 17:45:50.308477  380723 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0819 17:45:50.342349  380723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0819 17:45:50.652779  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:45:50.671144  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:45:50.671420  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:45:51.019310  380723 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.299985163s)
	I0819 17:45:51.019376  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:51.019393  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:51.019725  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:51.019743  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:51.019753  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:51.019761  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:51.020011  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:51.020071  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:51.020044  380723 main.go:141] libmachine: (addons-347256) DBG | Closing plugin on server side
	I0819 17:45:51.151527  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:45:51.170353  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:45:51.171932  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:45:51.666628  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:45:51.715004  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:45:51.716612  380723 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.374218153s)
	I0819 17:45:51.716676  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:51.716690  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:51.717029  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:51.717067  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:51.717077  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:51.717085  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:51.717088  380723 main.go:141] libmachine: (addons-347256) DBG | Closing plugin on server side
	I0819 17:45:51.717349  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:51.717366  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:51.719498  380723 addons.go:475] Verifying addon gcp-auth=true in "addons-347256"
	I0819 17:45:51.721361  380723 out.go:177] * Verifying gcp-auth addon...
	I0819 17:45:51.723644  380723 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0819 17:45:51.740101  380723 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0819 17:45:51.740140  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
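
	Note: the repeated kapi.go:96 lines are minikube's own poll loop over the labelled addon pods until each reports Ready. A rough kubectl equivalent for the gcp-auth wait shown here, with an illustrative timeout, would be:

	  # Sketch: wait for the gcp-auth addon pod to become Ready (timeout is illustrative).
	  kubectl --context addons-347256 wait --namespace gcp-auth --for=condition=Ready pod \
	    --selector=kubernetes.io/minikube-addons=gcp-auth --timeout=300s
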
	I0819 17:45:51.740242  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:45:52.154370  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:45:52.172650  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:45:52.173017  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:45:52.228241  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:45:52.341085  380723 pod_ready.go:103] pod "coredns-6f6b679f8f-77256" in "kube-system" namespace has status "Ready":"False"
	I0819 17:45:52.652512  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:45:52.754360  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:45:52.754367  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:45:52.754453  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:45:53.152493  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:45:53.252309  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:45:53.252343  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:45:53.252454  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:45:53.652072  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:45:53.672284  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:45:53.672579  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:45:53.751534  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:45:54.152001  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:45:54.170809  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:45:54.171084  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:45:54.227395  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:45:54.650657  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:45:54.670176  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:45:54.672237  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:45:54.728246  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:45:54.840058  380723 pod_ready.go:103] pod "coredns-6f6b679f8f-77256" in "kube-system" namespace has status "Ready":"False"
	I0819 17:45:55.478775  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:45:55.579781  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:45:55.580421  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:45:55.580563  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:45:55.651874  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:45:55.670109  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:45:55.671684  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:45:55.727482  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:45:56.151248  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:45:56.171644  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:45:56.172624  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:45:56.227229  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:45:56.650498  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:45:56.670050  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:45:56.672432  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:45:56.727304  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:45:56.841046  380723 pod_ready.go:103] pod "coredns-6f6b679f8f-77256" in "kube-system" namespace has status "Ready":"False"
	I0819 17:45:57.151254  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:45:57.171006  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:45:57.171916  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:45:57.227769  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:45:57.340949  380723 pod_ready.go:98] pod "coredns-6f6b679f8f-77256" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-19 17:45:57 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-19 17:45:40 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-19 17:45:40 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-19 17:45:40 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-19 17:45:40 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.18 HostIPs:[{IP:192.168.39.18}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-08-19 17:45:40 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-19 17:45:45 +0000 UTC,FinishedAt:2024-08-19 17:45:55 +0000 UTC,ContainerID:cri-o://de82d178296b49a61386a18d626e8a0b47d3af5002f63b18fb061ff4fdcb95b7,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://de82d178296b49a61386a18d626e8a0b47d3af5002f63b18fb061ff4fdcb95b7 Started:0xc002938f20 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0021394f0} {Name:kube-api-access-l97x8 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc002139500}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0819 17:45:57.340988  380723 pod_ready.go:82] duration metric: took 14.506958093s for pod "coredns-6f6b679f8f-77256" in "kube-system" namespace to be "Ready" ...
	E0819 17:45:57.341009  380723 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-6f6b679f8f-77256" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-19 17:45:57 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-19 17:45:40 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-19 17:45:40 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-19 17:45:40 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-19 17:45:40 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.18 HostIPs:[{IP:192.168.39.18}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-08-19 17:45:40 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-19 17:45:45 +0000 UTC,FinishedAt:2024-08-19 17:45:55 +0000 UTC,ContainerID:cri-o://de82d178296b49a61386a18d626e8a0b47d3af5002f63b18fb061ff4fdcb95b7,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://de82d178296b49a61386a18d626e8a0b47d3af5002f63b18fb061ff4fdcb95b7 Started:0xc002938f20 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0021394f0} {Name:kube-api-access-l97x8 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc002139500}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0819 17:45:57.341023  380723 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-tljrk" in "kube-system" namespace to be "Ready" ...
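
	Note: pod_ready skips coredns-6f6b679f8f-77256 because its phase is Succeeded (its container exited Completed at 17:45:55) and switches to waiting on the surviving replica coredns-6f6b679f8f-tljrk. The same state can be inspected directly, assuming kubectl access to the addons-347256 context:

	  # Sketch: list the coredns replicas and check the terminated one's phase.
	  kubectl --context addons-347256 -n kube-system get pods -l k8s-app=kube-dns -o wide
	  kubectl --context addons-347256 -n kube-system get pod coredns-6f6b679f8f-77256 -o jsonpath='{.status.phase}'
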
	I0819 17:45:57.652833  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:45:57.672404  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:45:57.678834  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:45:57.727705  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:45:58.376382  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:45:58.381054  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:45:58.381283  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:45:58.381831  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:45:58.651051  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:45:58.670751  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:45:58.670963  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:45:58.727819  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:45:59.151295  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:45:59.171726  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:45:59.171891  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:45:59.227466  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:45:59.347011  380723 pod_ready.go:103] pod "coredns-6f6b679f8f-tljrk" in "kube-system" namespace has status "Ready":"False"
	I0819 17:45:59.652459  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:45:59.671098  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:45:59.671353  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:45:59.728006  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:00.152041  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:00.170975  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:00.171289  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:00.227975  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:00.650734  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:00.670879  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:00.671408  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:00.726968  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:01.151702  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:01.172147  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:01.172158  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:01.228076  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:01.347891  380723 pod_ready.go:103] pod "coredns-6f6b679f8f-tljrk" in "kube-system" namespace has status "Ready":"False"
	I0819 17:46:01.651067  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:01.672043  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:01.672352  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:01.727112  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:02.151784  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:02.251520  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:02.251589  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:02.251868  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:02.650887  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:02.670498  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:02.671190  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:02.727894  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:03.152373  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:03.173355  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:03.173608  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:03.252079  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:03.348701  380723 pod_ready.go:103] pod "coredns-6f6b679f8f-tljrk" in "kube-system" namespace has status "Ready":"False"
	I0819 17:46:03.650560  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:03.670584  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:03.671619  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:03.727971  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:04.152279  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:04.171901  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:04.171926  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:04.227790  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:04.652016  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:04.670906  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:04.671259  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:04.727600  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:05.151415  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:05.171056  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:05.171854  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:05.226843  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:05.650500  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:05.671853  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:05.671876  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:05.727481  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:05.847042  380723 pod_ready.go:103] pod "coredns-6f6b679f8f-tljrk" in "kube-system" namespace has status "Ready":"False"
	I0819 17:46:06.154376  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:06.171330  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:06.171791  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:06.228140  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:06.651358  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:06.671968  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:06.672611  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:06.728119  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:07.151345  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:07.171482  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:07.172033  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:07.227995  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:07.651724  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:07.671238  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:07.672833  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:07.727648  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:07.847271  380723 pod_ready.go:103] pod "coredns-6f6b679f8f-tljrk" in "kube-system" namespace has status "Ready":"False"
	I0819 17:46:08.152418  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:08.171219  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:08.171858  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:08.227301  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:08.650958  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:08.671623  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:08.672167  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:08.732311  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:09.152242  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:09.171787  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:09.174068  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:09.227398  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:09.651149  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:09.679634  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:09.679800  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:09.727099  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:09.847978  380723 pod_ready.go:103] pod "coredns-6f6b679f8f-tljrk" in "kube-system" namespace has status "Ready":"False"
	I0819 17:46:10.152007  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:10.171537  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:10.172252  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:10.227414  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:10.651497  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:10.671074  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:10.671824  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:10.727372  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:11.151927  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:11.171319  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:11.171642  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:11.227402  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:11.650959  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:11.671712  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:11.671895  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:11.727046  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:12.151166  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:12.171267  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:12.171870  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:12.227798  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:12.347832  380723 pod_ready.go:103] pod "coredns-6f6b679f8f-tljrk" in "kube-system" namespace has status "Ready":"False"
	I0819 17:46:12.651180  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:12.672301  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:12.672716  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:12.727790  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:13.150794  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:13.172858  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:13.173312  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:13.228256  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:13.651353  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:13.671206  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:13.671329  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:13.727206  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:14.151619  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:14.170410  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:14.170777  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:14.227605  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:14.650707  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:14.670630  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:14.671536  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:14.727350  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:14.847694  380723 pod_ready.go:103] pod "coredns-6f6b679f8f-tljrk" in "kube-system" namespace has status "Ready":"False"
	I0819 17:46:15.152079  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:15.171719  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:15.172125  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:15.227190  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:15.651919  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:15.672079  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:15.672193  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:15.751805  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:16.150888  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:16.171221  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:16.171358  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:16.227481  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:16.652037  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:16.670480  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:16.672101  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:16.727645  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:17.151453  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:17.170375  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:17.171355  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:17.227957  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:17.356864  380723 pod_ready.go:103] pod "coredns-6f6b679f8f-tljrk" in "kube-system" namespace has status "Ready":"False"
	I0819 17:46:17.652314  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:17.671961  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:17.672105  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:17.728109  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:18.151472  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:18.171635  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:18.172010  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:18.227342  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:18.650748  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:18.670967  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:18.672046  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:18.727710  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:19.152581  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:19.171075  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:19.171371  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:19.226952  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:19.651790  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:19.670669  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:19.672411  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:19.753947  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:19.849046  380723 pod_ready.go:103] pod "coredns-6f6b679f8f-tljrk" in "kube-system" namespace has status "Ready":"False"
	I0819 17:46:20.151555  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:20.170964  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:20.172003  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:20.228416  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:20.804995  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:20.805383  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:20.807245  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:20.807464  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:21.151765  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:21.170601  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:21.172594  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:21.228518  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:21.652639  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:21.670790  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:21.671737  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:21.751457  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:22.151170  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:22.170469  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:22.172432  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:22.226945  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:22.348721  380723 pod_ready.go:103] pod "coredns-6f6b679f8f-tljrk" in "kube-system" namespace has status "Ready":"False"
	I0819 17:46:22.651978  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:22.671791  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:22.672170  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:22.728108  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:23.153420  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:23.172246  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:23.172323  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:23.507494  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:23.507503  380723 pod_ready.go:93] pod "coredns-6f6b679f8f-tljrk" in "kube-system" namespace has status "Ready":"True"
	I0819 17:46:23.507555  380723 pod_ready.go:82] duration metric: took 26.166518536s for pod "coredns-6f6b679f8f-tljrk" in "kube-system" namespace to be "Ready" ...
	I0819 17:46:23.507570  380723 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-347256" in "kube-system" namespace to be "Ready" ...
	I0819 17:46:23.543198  380723 pod_ready.go:93] pod "etcd-addons-347256" in "kube-system" namespace has status "Ready":"True"
	I0819 17:46:23.543227  380723 pod_ready.go:82] duration metric: took 35.64844ms for pod "etcd-addons-347256" in "kube-system" namespace to be "Ready" ...
	I0819 17:46:23.543242  380723 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-347256" in "kube-system" namespace to be "Ready" ...
	I0819 17:46:23.550189  380723 pod_ready.go:93] pod "kube-apiserver-addons-347256" in "kube-system" namespace has status "Ready":"True"
	I0819 17:46:23.550219  380723 pod_ready.go:82] duration metric: took 6.968452ms for pod "kube-apiserver-addons-347256" in "kube-system" namespace to be "Ready" ...
	I0819 17:46:23.550233  380723 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-347256" in "kube-system" namespace to be "Ready" ...
	I0819 17:46:23.564849  380723 pod_ready.go:93] pod "kube-controller-manager-addons-347256" in "kube-system" namespace has status "Ready":"True"
	I0819 17:46:23.564880  380723 pod_ready.go:82] duration metric: took 14.637248ms for pod "kube-controller-manager-addons-347256" in "kube-system" namespace to be "Ready" ...
	I0819 17:46:23.564900  380723 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-72dbf" in "kube-system" namespace to be "Ready" ...
	I0819 17:46:23.575024  380723 pod_ready.go:93] pod "kube-proxy-72dbf" in "kube-system" namespace has status "Ready":"True"
	I0819 17:46:23.575057  380723 pod_ready.go:82] duration metric: took 10.14737ms for pod "kube-proxy-72dbf" in "kube-system" namespace to be "Ready" ...
	I0819 17:46:23.575070  380723 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-347256" in "kube-system" namespace to be "Ready" ...
	I0819 17:46:23.651861  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:23.675201  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:23.675635  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:23.750701  380723 pod_ready.go:93] pod "kube-scheduler-addons-347256" in "kube-system" namespace has status "Ready":"True"
	I0819 17:46:23.750736  380723 pod_ready.go:82] duration metric: took 175.65538ms for pod "kube-scheduler-addons-347256" in "kube-system" namespace to be "Ready" ...
	I0819 17:46:23.750751  380723 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-x924x" in "kube-system" namespace to be "Ready" ...
	I0819 17:46:23.752728  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:24.146209  380723 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-x924x" in "kube-system" namespace has status "Ready":"True"
	I0819 17:46:24.146236  380723 pod_ready.go:82] duration metric: took 395.476606ms for pod "nvidia-device-plugin-daemonset-x924x" in "kube-system" namespace to be "Ready" ...
	I0819 17:46:24.146247  380723 pod_ready.go:39] duration metric: took 41.342703446s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 17:46:24.146269  380723 api_server.go:52] waiting for apiserver process to appear ...
	I0819 17:46:24.146364  380723 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 17:46:24.160374  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:24.166910  380723 api_server.go:72] duration metric: took 44.315873738s to wait for apiserver process to appear ...
	I0819 17:46:24.166938  380723 api_server.go:88] waiting for apiserver healthz status ...
	I0819 17:46:24.166961  380723 api_server.go:253] Checking apiserver healthz at https://192.168.39.18:8443/healthz ...
	I0819 17:46:24.172887  380723 api_server.go:279] https://192.168.39.18:8443/healthz returned 200:
	ok
	I0819 17:46:24.173144  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:24.173923  380723 api_server.go:141] control plane version: v1.31.0
	I0819 17:46:24.173944  380723 api_server.go:131] duration metric: took 6.998235ms to wait for apiserver health ...
	I0819 17:46:24.173952  380723 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 17:46:24.174500  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:24.227338  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:24.349491  380723 system_pods.go:59] 18 kube-system pods found
	I0819 17:46:24.349525  380723 system_pods.go:61] "coredns-6f6b679f8f-tljrk" [6c9217a4-6879-4b7e-a6b5-78dfa1b85ee4] Running
	I0819 17:46:24.349533  380723 system_pods.go:61] "csi-hostpath-attacher-0" [e128fb19-e720-44a6-a1e9-c5f242968b55] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0819 17:46:24.349540  380723 system_pods.go:61] "csi-hostpath-resizer-0" [734bcf24-7c89-469e-9020-fdb24d47cb83] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0819 17:46:24.349550  380723 system_pods.go:61] "csi-hostpathplugin-hkr5d" [16796ce0-7f87-46f8-a9a7-0afa96f3f575] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0819 17:46:24.349555  380723 system_pods.go:61] "etcd-addons-347256" [e9c774cf-14f4-433f-8c4b-96d30f1b8f0f] Running
	I0819 17:46:24.349559  380723 system_pods.go:61] "kube-apiserver-addons-347256" [e35199f6-4a80-4d84-9a30-6e285696f02e] Running
	I0819 17:46:24.349562  380723 system_pods.go:61] "kube-controller-manager-addons-347256" [b9b2d2d8-7f8f-4373-a0a7-cb3dc9d46969] Running
	I0819 17:46:24.349566  380723 system_pods.go:61] "kube-ingress-dns-minikube" [44cd9847-645d-4375-b58a-d153a852f2c7] Running
	I0819 17:46:24.349572  380723 system_pods.go:61] "kube-proxy-72dbf" [a50d76ee-c7cb-4141-9bc3-2b530cb531e3] Running
	I0819 17:46:24.349578  380723 system_pods.go:61] "kube-scheduler-addons-347256" [0367e97e-fee8-48cf-bebc-b3d55381da8f] Running
	I0819 17:46:24.349586  380723 system_pods.go:61] "metrics-server-8988944d9-xkj9p" [2cb192e0-5048-46b0-b74e-86ad5e4d39ea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 17:46:24.349597  380723 system_pods.go:61] "nvidia-device-plugin-daemonset-x924x" [b28534d9-e3b6-474a-90ca-04048cd59d85] Running
	I0819 17:46:24.349603  380723 system_pods.go:61] "registry-6fb4cdfc84-szv4z" [9388e4e2-9cbc-4408-8be6-ec9be4b5737f] Running
	I0819 17:46:24.349613  380723 system_pods.go:61] "registry-proxy-9q2l4" [73b6c461-1963-4b13-bb12-e75024c4c5d7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0819 17:46:24.349623  380723 system_pods.go:61] "snapshot-controller-56fcc65765-4jtx2" [bcc4eb99-92c0-4fe4-815c-ef9576839c9c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0819 17:46:24.349633  380723 system_pods.go:61] "snapshot-controller-56fcc65765-d7mhz" [2d8e7bbb-d917-42da-9c13-63cfd7e933ce] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0819 17:46:24.349637  380723 system_pods.go:61] "storage-provisioner" [8349a726-cf5d-472f-aec7-5dc582e1d9db] Running
	I0819 17:46:24.349643  380723 system_pods.go:61] "tiller-deploy-b48cc5f79-bqbr9" [801ad1ee-bac9-4f5e-9d38-655f7fbf1779] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0819 17:46:24.349651  380723 system_pods.go:74] duration metric: took 175.691658ms to wait for pod list to return data ...
	I0819 17:46:24.349661  380723 default_sa.go:34] waiting for default service account to be created ...
	I0819 17:46:24.546382  380723 default_sa.go:45] found service account: "default"
	I0819 17:46:24.546414  380723 default_sa.go:55] duration metric: took 196.745659ms for default service account to be created ...
	I0819 17:46:24.546423  380723 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 17:46:24.652817  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:24.672680  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:24.672755  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:24.729568  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:24.752623  380723 system_pods.go:86] 18 kube-system pods found
	I0819 17:46:24.752651  380723 system_pods.go:89] "coredns-6f6b679f8f-tljrk" [6c9217a4-6879-4b7e-a6b5-78dfa1b85ee4] Running
	I0819 17:46:24.752662  380723 system_pods.go:89] "csi-hostpath-attacher-0" [e128fb19-e720-44a6-a1e9-c5f242968b55] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0819 17:46:24.752668  380723 system_pods.go:89] "csi-hostpath-resizer-0" [734bcf24-7c89-469e-9020-fdb24d47cb83] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0819 17:46:24.752676  380723 system_pods.go:89] "csi-hostpathplugin-hkr5d" [16796ce0-7f87-46f8-a9a7-0afa96f3f575] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0819 17:46:24.752681  380723 system_pods.go:89] "etcd-addons-347256" [e9c774cf-14f4-433f-8c4b-96d30f1b8f0f] Running
	I0819 17:46:24.752685  380723 system_pods.go:89] "kube-apiserver-addons-347256" [e35199f6-4a80-4d84-9a30-6e285696f02e] Running
	I0819 17:46:24.752688  380723 system_pods.go:89] "kube-controller-manager-addons-347256" [b9b2d2d8-7f8f-4373-a0a7-cb3dc9d46969] Running
	I0819 17:46:24.752694  380723 system_pods.go:89] "kube-ingress-dns-minikube" [44cd9847-645d-4375-b58a-d153a852f2c7] Running
	I0819 17:46:24.752697  380723 system_pods.go:89] "kube-proxy-72dbf" [a50d76ee-c7cb-4141-9bc3-2b530cb531e3] Running
	I0819 17:46:24.752701  380723 system_pods.go:89] "kube-scheduler-addons-347256" [0367e97e-fee8-48cf-bebc-b3d55381da8f] Running
	I0819 17:46:24.752705  380723 system_pods.go:89] "metrics-server-8988944d9-xkj9p" [2cb192e0-5048-46b0-b74e-86ad5e4d39ea] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 17:46:24.752709  380723 system_pods.go:89] "nvidia-device-plugin-daemonset-x924x" [b28534d9-e3b6-474a-90ca-04048cd59d85] Running
	I0819 17:46:24.752714  380723 system_pods.go:89] "registry-6fb4cdfc84-szv4z" [9388e4e2-9cbc-4408-8be6-ec9be4b5737f] Running
	I0819 17:46:24.752719  380723 system_pods.go:89] "registry-proxy-9q2l4" [73b6c461-1963-4b13-bb12-e75024c4c5d7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0819 17:46:24.752729  380723 system_pods.go:89] "snapshot-controller-56fcc65765-4jtx2" [bcc4eb99-92c0-4fe4-815c-ef9576839c9c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0819 17:46:24.752736  380723 system_pods.go:89] "snapshot-controller-56fcc65765-d7mhz" [2d8e7bbb-d917-42da-9c13-63cfd7e933ce] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0819 17:46:24.752740  380723 system_pods.go:89] "storage-provisioner" [8349a726-cf5d-472f-aec7-5dc582e1d9db] Running
	I0819 17:46:24.752745  380723 system_pods.go:89] "tiller-deploy-b48cc5f79-bqbr9" [801ad1ee-bac9-4f5e-9d38-655f7fbf1779] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0819 17:46:24.752752  380723 system_pods.go:126] duration metric: took 206.324075ms to wait for k8s-apps to be running ...
	I0819 17:46:24.752759  380723 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 17:46:24.752807  380723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 17:46:24.768631  380723 system_svc.go:56] duration metric: took 15.858708ms WaitForService to wait for kubelet
	I0819 17:46:24.768665  380723 kubeadm.go:582] duration metric: took 44.917633684s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 17:46:24.768695  380723 node_conditions.go:102] verifying NodePressure condition ...
	I0819 17:46:24.950905  380723 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 17:46:24.950956  380723 node_conditions.go:123] node cpu capacity is 2
	I0819 17:46:24.950975  380723 node_conditions.go:105] duration metric: took 182.272659ms to run NodePressure ...
	I0819 17:46:24.950993  380723 start.go:241] waiting for startup goroutines ...
	I0819 17:46:25.152346  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:25.171137  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:25.171736  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:25.227848  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:25.651876  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:25.671621  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:25.671859  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:25.727593  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:26.151523  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:26.171339  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:26.172176  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:26.227416  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:26.650772  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:26.673202  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:26.678854  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:26.727222  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:27.151790  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:27.171755  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:27.172033  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:27.228058  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:27.651657  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:27.671825  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:27.672050  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:27.727727  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:28.151579  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:28.171703  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:28.172403  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:28.227528  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:28.651619  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:28.671562  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:28.672259  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:28.727897  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:29.462650  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:29.462697  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:29.463082  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:29.463205  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:29.651360  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:29.672494  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:29.673186  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:29.726693  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:30.151664  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:30.171907  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:30.172130  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:30.227706  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:30.652539  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:30.671926  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:30.672391  380723 kapi.go:107] duration metric: took 43.005296913s to wait for kubernetes.io/minikube-addons=registry ...
	I0819 17:46:30.727591  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:31.150621  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:31.170289  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:31.227938  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:31.654479  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:31.671392  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:31.727020  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:32.151987  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:32.171114  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:32.227372  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:32.650785  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:32.670669  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:32.726990  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:33.151420  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:33.172387  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:33.226685  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:33.651026  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:33.678844  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:33.778453  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:34.152053  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:34.171162  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:34.227559  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:34.650950  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:34.670597  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:34.726609  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:35.151207  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:35.171222  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:35.227752  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:35.651071  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:35.672496  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:35.727005  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:36.151590  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:36.170849  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:36.227904  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:36.710733  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:36.711248  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:36.807655  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:37.150513  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:37.170696  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:37.226955  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:37.651784  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:37.670589  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:37.726918  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:38.153916  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:38.171307  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:38.227822  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:38.651401  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:38.671400  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:38.727081  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:39.152252  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:39.171083  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:39.227576  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:39.651504  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:39.670614  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:39.727660  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:40.152872  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:40.251575  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:40.252365  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:40.651382  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:40.671484  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:40.726959  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:41.151615  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:41.170458  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:41.229137  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:41.651234  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:41.671132  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:41.727803  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:42.151183  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:42.171510  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:42.227845  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:42.652299  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:42.672296  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:42.727758  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:43.155214  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:43.174258  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:43.227090  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:43.653881  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:43.671931  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:43.727249  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:44.153406  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:44.176632  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:44.252028  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:44.652214  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:44.672680  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:44.727362  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:45.154451  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:45.258011  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:45.258234  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:45.652045  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:45.671573  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:45.727496  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:46.155034  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:46.170732  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:46.227544  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:46.650568  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:46.675037  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:47.055482  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:47.155921  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:47.257747  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:47.260662  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:47.651018  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:47.672151  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:47.727157  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:48.151887  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:48.170997  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:48.227878  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:48.651832  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:48.751776  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:48.752506  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:49.151553  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:49.171317  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:49.227745  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:49.651278  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:49.670993  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:49.727982  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:50.152335  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:50.253793  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:50.254255  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:50.653415  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:50.671449  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:50.726947  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:51.156947  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:51.181024  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:51.228759  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:51.652066  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:51.673383  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:51.727472  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:52.153170  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:52.172807  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:52.264190  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:52.654284  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:52.671264  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:52.727796  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:53.151624  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:53.172024  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:53.228206  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:53.653092  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:53.670767  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:53.727401  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:54.354971  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:54.363069  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:54.363342  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:54.651263  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:54.671100  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:54.727388  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:55.151830  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:55.170325  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:55.228037  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:55.651345  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:55.671479  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:55.726830  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:56.151792  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:56.170475  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:56.226773  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:56.651262  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:56.672679  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:56.752678  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:57.592115  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:57.592901  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:57.593319  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:57.651733  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:57.671151  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:57.728530  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:58.151662  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:58.171346  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:58.226660  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:58.651302  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:58.671375  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:58.726804  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:59.154735  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:59.171044  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:59.254147  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:59.651638  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:59.671555  380723 kapi.go:107] duration metric: took 1m12.00502431s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0819 17:46:59.727840  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:47:00.155878  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:47:00.228822  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:47:00.734679  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:47:00.734978  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:47:01.151888  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:47:01.227923  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:47:01.652806  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:47:01.727415  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:47:02.151078  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:47:02.227416  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:47:02.651090  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:47:02.727617  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:47:03.151881  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:47:03.227555  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:47:03.650892  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:47:03.728049  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:47:04.151906  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:47:04.227326  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:47:04.652026  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:47:04.756014  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:47:05.151650  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:47:05.251760  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:47:05.651390  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:47:05.728199  380723 kapi.go:107] duration metric: took 1m14.004546899s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0819 17:47:05.730105  380723 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-347256 cluster.
	I0819 17:47:05.731619  380723 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0819 17:47:05.732836  380723 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
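The three gcp-auth messages above describe the addon's behaviour and hint at an opt-out: a pod can carry a label with the `gcp-auth-skip-secret` key so its credentials are not mounted. As a minimal sketch of that hint, a pod could be started as below; the pod name, image, and the label value "true" are illustrative assumptions, only the label key comes from the message above.

# Sketch only: start a pod that opts out of GCP credential mounting.
# Assumes kubectl can reach the "addons-347256" context named in this log.
kubectl --context addons-347256 run no-gcp-creds \
  --image=busybox:1.36 \
  --labels=gcp-auth-skip-secret=true \
  -- sleep 3600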
	I0819 17:47:06.152029  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:47:06.656148  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:47:07.152018  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:47:07.651959  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:47:08.152802  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:47:08.651467  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:47:09.153047  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:47:09.652133  380723 kapi.go:107] duration metric: took 1m19.505736593s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0819 17:47:09.654184  380723 out.go:177] * Enabled addons: ingress-dns, helm-tiller, metrics-server, cloud-spanner, nvidia-device-plugin, storage-provisioner, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0819 17:47:09.655607  380723 addons.go:510] duration metric: took 1m29.804551273s for enable addons: enabled=[ingress-dns helm-tiller metrics-server cloud-spanner nvidia-device-plugin storage-provisioner yakd default-storageclass inspektor-gadget volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0819 17:47:09.655666  380723 start.go:246] waiting for cluster config update ...
	I0819 17:47:09.655707  380723 start.go:255] writing updated cluster config ...
	I0819 17:47:09.656070  380723 ssh_runner.go:195] Run: rm -f paused
	I0819 17:47:09.712496  380723 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 17:47:09.714377  380723 out.go:177] * Done! kubectl is now configured to use "addons-347256" cluster and "default" namespace by default
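The kapi.go entries above poll the addon pods by label selector until each reports Ready. A rough way to re-check the same pods by hand, assuming kubectl is pointed at this cluster (the context name and selectors are taken from the log above), would be:

# Sketch only: list the addon pods the waits above were polling, by the same selectors.
kubectl --context addons-347256 get pods -A -l app.kubernetes.io/name=ingress-nginx
kubectl --context addons-347256 get pods -A -l kubernetes.io/minikube-addons=gcp-auth
kubectl --context addons-347256 get pods -A -l kubernetes.io/minikube-addons=csi-hostpath-driver
kubectl --context addons-347256 get pods -A -l kubernetes.io/minikube-addons=registry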
	
	
	==> CRI-O <==
	Aug 19 17:50:28 addons-347256 crio[688]: time="2024-08-19 17:50:28.507502285Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089828507470334,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a015321b-3cf2-4966-b363-094ba62bef08 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:50:28 addons-347256 crio[688]: time="2024-08-19 17:50:28.508237580Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=635ef810-4b44-4016-9631-8a5a7ac3cce1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:50:28 addons-347256 crio[688]: time="2024-08-19 17:50:28.508350701Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=635ef810-4b44-4016-9631-8a5a7ac3cce1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:50:28 addons-347256 crio[688]: time="2024-08-19 17:50:28.508775249Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8476ea9b84c5677585c184c2a19989966218130ebd87014f16b5d47b610a7bf8,PodSandboxId:3779df5f1e98a011b2ea48e7cbe08ffddc7b5d281e7a8764c73a15dc3a6f7517,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724089821391952812,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-8qm2m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5af6036b-6c99-4583-8178-c1691586b4ac,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e7d5836b553936a23f393970ad8deefd6feed3820d8bde258550dfc206c2757,PodSandboxId:2d699655663d41aaf47ea8ea106b572637b60af2d425235bb90695e5764e6820,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724089681587967061,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9632e6a7-a0a4-4456-ab6f-c0eab065596d,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0d586f0f8bb9c527dd638f71ad853fe948c4a48d287293667d737a19e672a35,PodSandboxId:08bede11788c378bfb268308132a6da06e96de1be20084257a399f82227c9f67,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724089633212039725,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 718e1c4f-05f0-49bc-b
fc2-7dda02db3d8e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:887dea0f933966b112e9ad70efc1f044446757f75739c5cb255b68b3b222b0f0,PodSandboxId:fc42703b1bc9312e9a19b23d2462a4272399d00ff50f79f8621469dfdb946e15,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724089600803634886,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-fp4q2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 329ebd23-58f1-421b-b957-bded5b3b7dfa,},Anno
tations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3805318d44f907dab4638a586c06d1cb6542a78589e190efa31843d900e89ac2,PodSandboxId:fa0328f7a39d54d66491af4c63cb10ce4fe05967b1b9fb8b7c29d9b6851f1efe,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724089600027936707,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6vt66,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e8f78
0ae-412a-4d24-a835-e0ab77d71426,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8d158637619c277ef9f9387b183af1877f4383af2f22aaddac3e7716ede8a08,PodSandboxId:3eaf4c04776d5d54255f718eb3e644edd9c7a10fe8ba78ea9e2654d6d547990b,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1724089592959813692,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-dtqhx,io.kube
rnetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: eb249463-f4d8-4b25-812f-c1e2f481cffd,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0ca4eb985ce7b35dd9be14e75a8600b776dd7af34e703f47903f17d58fe8638,PodSandboxId:6d360799061341a4737e1d4f98f5669097f47bf25108d2d14f200cb506dec1cc,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724089583620988219,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes
.pod.name: metrics-server-8988944d9-xkj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cb192e0-5048-46b0-b74e-86ad5e4d39ea,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07d88fd518a67130b2f237c0f5c0e12a105bb8b22bb8cf868165a3ab5c86352d,PodSandboxId:cd735c6b862bd1c732fb68674d6cd41ed8978073a26332dc9a9d99ed0d624e6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:17240895794
84877734,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8349a726-cf5d-472f-aec7-5dc582e1d9db,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f50bdc241404bd0f8b1a2869be786334bc01a4037c0b7eb743716d47d703a708,PodSandboxId:cd735c6b862bd1c732fb68674d6cd41ed8978073a26332dc9a9d99ed0d624e6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724089547450120897,Labe
ls:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8349a726-cf5d-472f-aec7-5dc582e1d9db,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:817e8e7f4f6f81e0aa32cdefd4ad54c86e041eae7b0332ae4b220e0e9677f3d1,PodSandboxId:b9ef181e4bc2fb5d786c49ce9f20820cd5cd872892e88631e663df6d9859b1e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724089545288887438,Labels:map[string]string{io.
kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-tljrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c9217a4-6879-4b7e-a6b5-78dfa1b85ee4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dd9c82747258473cc2ae88bf2e75164e2fbd3d2a2a5328ce9d086eb5cb4b4f2,PodSandboxId:ce7c12bfd63c75fe8a79e1405a6266f4a0c6d99e7c466ad7b948ae213dd82f9c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724089543193544008,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-72dbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a50d76ee-c7cb-4141-9bc3-2b530cb531e3,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b291bb577fbcf3d0301dc0987ddfdd1f8dfb5a7f0993ecb1d0ef1f47343437cd,PodSandboxId:27479f4e0074355c624d9156652214795ce8aca3f7a41bfd5ec5bbc60b915d11,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724089529692719593,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-347256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e13dfe28931d8226b47ef9afcba9b2fd,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14ae317fb30359ba2162a6f9a30ed7046710d5c6d66c44e99c36726de0db7be6,PodSandboxId:3668e32c15f5ec68dd6cda9b7bd2395a2ebc5b147a52395661b1b3eca5df2e5b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724089529683588593,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-347256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7b56cf9e890da5718f47889ffe5ca70,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dd9c53de4477b97df49b744ad39714d3fcc1e7ae85e213bdde3870d7bcc820d,PodSandboxId:a096a9ccc820f95f0a8edcb6822851979b2988c8a35ffa8a7ed8f86d94e49f26,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0457
33566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724089529733666189,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-347256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0045d8f687a843989cf184298e3dad56,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a11a191406026de18aba4209c1728adf2b7209255477f04b146aedbeda0efda6,PodSandboxId:ab5cb0c3935f4f7e1b3abd77178d77768afe8585fb1730e95646bb8797e29e76,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06
e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724089529643074083,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-347256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0aab542bc468b2cc945d8e0cb0ebc09,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=635ef810-4b44-4016-9631-8a5a7ac3cce1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:50:28 addons-347256 crio[688]: time="2024-08-19 17:50:28.547903325Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e3ad97a5-2dd7-4eb3-98a4-0101fd37bf60 name=/runtime.v1.RuntimeService/Version
	Aug 19 17:50:28 addons-347256 crio[688]: time="2024-08-19 17:50:28.547977986Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e3ad97a5-2dd7-4eb3-98a4-0101fd37bf60 name=/runtime.v1.RuntimeService/Version
	Aug 19 17:50:28 addons-347256 crio[688]: time="2024-08-19 17:50:28.549250598Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=98be9d58-d52e-4f85-bb5b-9433503e9cd7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:50:28 addons-347256 crio[688]: time="2024-08-19 17:50:28.550526108Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089828550494232,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=98be9d58-d52e-4f85-bb5b-9433503e9cd7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:50:28 addons-347256 crio[688]: time="2024-08-19 17:50:28.551142114Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d18f2f8-221b-4600-a04e-58e5b23f4810 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:50:28 addons-347256 crio[688]: time="2024-08-19 17:50:28.551304783Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d18f2f8-221b-4600-a04e-58e5b23f4810 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:50:28 addons-347256 crio[688]: time="2024-08-19 17:50:28.551806256Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8476ea9b84c5677585c184c2a19989966218130ebd87014f16b5d47b610a7bf8,PodSandboxId:3779df5f1e98a011b2ea48e7cbe08ffddc7b5d281e7a8764c73a15dc3a6f7517,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724089821391952812,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-8qm2m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5af6036b-6c99-4583-8178-c1691586b4ac,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e7d5836b553936a23f393970ad8deefd6feed3820d8bde258550dfc206c2757,PodSandboxId:2d699655663d41aaf47ea8ea106b572637b60af2d425235bb90695e5764e6820,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724089681587967061,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9632e6a7-a0a4-4456-ab6f-c0eab065596d,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0d586f0f8bb9c527dd638f71ad853fe948c4a48d287293667d737a19e672a35,PodSandboxId:08bede11788c378bfb268308132a6da06e96de1be20084257a399f82227c9f67,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724089633212039725,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 718e1c4f-05f0-49bc-b
fc2-7dda02db3d8e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:887dea0f933966b112e9ad70efc1f044446757f75739c5cb255b68b3b222b0f0,PodSandboxId:fc42703b1bc9312e9a19b23d2462a4272399d00ff50f79f8621469dfdb946e15,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724089600803634886,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-fp4q2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 329ebd23-58f1-421b-b957-bded5b3b7dfa,},Anno
tations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3805318d44f907dab4638a586c06d1cb6542a78589e190efa31843d900e89ac2,PodSandboxId:fa0328f7a39d54d66491af4c63cb10ce4fe05967b1b9fb8b7c29d9b6851f1efe,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724089600027936707,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6vt66,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e8f78
0ae-412a-4d24-a835-e0ab77d71426,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8d158637619c277ef9f9387b183af1877f4383af2f22aaddac3e7716ede8a08,PodSandboxId:3eaf4c04776d5d54255f718eb3e644edd9c7a10fe8ba78ea9e2654d6d547990b,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1724089592959813692,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-dtqhx,io.kube
rnetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: eb249463-f4d8-4b25-812f-c1e2f481cffd,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0ca4eb985ce7b35dd9be14e75a8600b776dd7af34e703f47903f17d58fe8638,PodSandboxId:6d360799061341a4737e1d4f98f5669097f47bf25108d2d14f200cb506dec1cc,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724089583620988219,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes
.pod.name: metrics-server-8988944d9-xkj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cb192e0-5048-46b0-b74e-86ad5e4d39ea,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07d88fd518a67130b2f237c0f5c0e12a105bb8b22bb8cf868165a3ab5c86352d,PodSandboxId:cd735c6b862bd1c732fb68674d6cd41ed8978073a26332dc9a9d99ed0d624e6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:17240895794
84877734,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8349a726-cf5d-472f-aec7-5dc582e1d9db,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f50bdc241404bd0f8b1a2869be786334bc01a4037c0b7eb743716d47d703a708,PodSandboxId:cd735c6b862bd1c732fb68674d6cd41ed8978073a26332dc9a9d99ed0d624e6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724089547450120897,Labe
ls:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8349a726-cf5d-472f-aec7-5dc582e1d9db,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:817e8e7f4f6f81e0aa32cdefd4ad54c86e041eae7b0332ae4b220e0e9677f3d1,PodSandboxId:b9ef181e4bc2fb5d786c49ce9f20820cd5cd872892e88631e663df6d9859b1e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724089545288887438,Labels:map[string]string{io.
kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-tljrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c9217a4-6879-4b7e-a6b5-78dfa1b85ee4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dd9c82747258473cc2ae88bf2e75164e2fbd3d2a2a5328ce9d086eb5cb4b4f2,PodSandboxId:ce7c12bfd63c75fe8a79e1405a6266f4a0c6d99e7c466ad7b948ae213dd82f9c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724089543193544008,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-72dbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a50d76ee-c7cb-4141-9bc3-2b530cb531e3,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b291bb577fbcf3d0301dc0987ddfdd1f8dfb5a7f0993ecb1d0ef1f47343437cd,PodSandboxId:27479f4e0074355c624d9156652214795ce8aca3f7a41bfd5ec5bbc60b915d11,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724089529692719593,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-347256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e13dfe28931d8226b47ef9afcba9b2fd,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14ae317fb30359ba2162a6f9a30ed7046710d5c6d66c44e99c36726de0db7be6,PodSandboxId:3668e32c15f5ec68dd6cda9b7bd2395a2ebc5b147a52395661b1b3eca5df2e5b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724089529683588593,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-347256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7b56cf9e890da5718f47889ffe5ca70,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dd9c53de4477b97df49b744ad39714d3fcc1e7ae85e213bdde3870d7bcc820d,PodSandboxId:a096a9ccc820f95f0a8edcb6822851979b2988c8a35ffa8a7ed8f86d94e49f26,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0457
33566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724089529733666189,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-347256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0045d8f687a843989cf184298e3dad56,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a11a191406026de18aba4209c1728adf2b7209255477f04b146aedbeda0efda6,PodSandboxId:ab5cb0c3935f4f7e1b3abd77178d77768afe8585fb1730e95646bb8797e29e76,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06
e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724089529643074083,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-347256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0aab542bc468b2cc945d8e0cb0ebc09,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5d18f2f8-221b-4600-a04e-58e5b23f4810 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:50:28 addons-347256 crio[688]: time="2024-08-19 17:50:28.598065927Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=85592476-ce22-4240-be96-4c80c951babd name=/runtime.v1.RuntimeService/Version
	Aug 19 17:50:28 addons-347256 crio[688]: time="2024-08-19 17:50:28.598142556Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=85592476-ce22-4240-be96-4c80c951babd name=/runtime.v1.RuntimeService/Version
	Aug 19 17:50:28 addons-347256 crio[688]: time="2024-08-19 17:50:28.599626443Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=99ab2cf5-571e-4041-b6b6-739567af7ed1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:50:28 addons-347256 crio[688]: time="2024-08-19 17:50:28.600850945Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089828600820828,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=99ab2cf5-571e-4041-b6b6-739567af7ed1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:50:28 addons-347256 crio[688]: time="2024-08-19 17:50:28.601707132Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d121671d-491c-4ff2-a13c-30b1ddaafa47 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:50:28 addons-347256 crio[688]: time="2024-08-19 17:50:28.601763895Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d121671d-491c-4ff2-a13c-30b1ddaafa47 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:50:28 addons-347256 crio[688]: time="2024-08-19 17:50:28.602436331Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8476ea9b84c5677585c184c2a19989966218130ebd87014f16b5d47b610a7bf8,PodSandboxId:3779df5f1e98a011b2ea48e7cbe08ffddc7b5d281e7a8764c73a15dc3a6f7517,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724089821391952812,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-8qm2m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5af6036b-6c99-4583-8178-c1691586b4ac,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e7d5836b553936a23f393970ad8deefd6feed3820d8bde258550dfc206c2757,PodSandboxId:2d699655663d41aaf47ea8ea106b572637b60af2d425235bb90695e5764e6820,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724089681587967061,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9632e6a7-a0a4-4456-ab6f-c0eab065596d,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0d586f0f8bb9c527dd638f71ad853fe948c4a48d287293667d737a19e672a35,PodSandboxId:08bede11788c378bfb268308132a6da06e96de1be20084257a399f82227c9f67,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724089633212039725,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 718e1c4f-05f0-49bc-b
fc2-7dda02db3d8e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:887dea0f933966b112e9ad70efc1f044446757f75739c5cb255b68b3b222b0f0,PodSandboxId:fc42703b1bc9312e9a19b23d2462a4272399d00ff50f79f8621469dfdb946e15,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724089600803634886,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-fp4q2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 329ebd23-58f1-421b-b957-bded5b3b7dfa,},Anno
tations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3805318d44f907dab4638a586c06d1cb6542a78589e190efa31843d900e89ac2,PodSandboxId:fa0328f7a39d54d66491af4c63cb10ce4fe05967b1b9fb8b7c29d9b6851f1efe,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724089600027936707,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6vt66,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e8f78
0ae-412a-4d24-a835-e0ab77d71426,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8d158637619c277ef9f9387b183af1877f4383af2f22aaddac3e7716ede8a08,PodSandboxId:3eaf4c04776d5d54255f718eb3e644edd9c7a10fe8ba78ea9e2654d6d547990b,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1724089592959813692,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-dtqhx,io.kube
rnetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: eb249463-f4d8-4b25-812f-c1e2f481cffd,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0ca4eb985ce7b35dd9be14e75a8600b776dd7af34e703f47903f17d58fe8638,PodSandboxId:6d360799061341a4737e1d4f98f5669097f47bf25108d2d14f200cb506dec1cc,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724089583620988219,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes
.pod.name: metrics-server-8988944d9-xkj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cb192e0-5048-46b0-b74e-86ad5e4d39ea,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07d88fd518a67130b2f237c0f5c0e12a105bb8b22bb8cf868165a3ab5c86352d,PodSandboxId:cd735c6b862bd1c732fb68674d6cd41ed8978073a26332dc9a9d99ed0d624e6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:17240895794
84877734,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8349a726-cf5d-472f-aec7-5dc582e1d9db,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f50bdc241404bd0f8b1a2869be786334bc01a4037c0b7eb743716d47d703a708,PodSandboxId:cd735c6b862bd1c732fb68674d6cd41ed8978073a26332dc9a9d99ed0d624e6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724089547450120897,Labe
ls:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8349a726-cf5d-472f-aec7-5dc582e1d9db,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:817e8e7f4f6f81e0aa32cdefd4ad54c86e041eae7b0332ae4b220e0e9677f3d1,PodSandboxId:b9ef181e4bc2fb5d786c49ce9f20820cd5cd872892e88631e663df6d9859b1e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724089545288887438,Labels:map[string]string{io.
kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-tljrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c9217a4-6879-4b7e-a6b5-78dfa1b85ee4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dd9c82747258473cc2ae88bf2e75164e2fbd3d2a2a5328ce9d086eb5cb4b4f2,PodSandboxId:ce7c12bfd63c75fe8a79e1405a6266f4a0c6d99e7c466ad7b948ae213dd82f9c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724089543193544008,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-72dbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a50d76ee-c7cb-4141-9bc3-2b530cb531e3,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b291bb577fbcf3d0301dc0987ddfdd1f8dfb5a7f0993ecb1d0ef1f47343437cd,PodSandboxId:27479f4e0074355c624d9156652214795ce8aca3f7a41bfd5ec5bbc60b915d11,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724089529692719593,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-347256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e13dfe28931d8226b47ef9afcba9b2fd,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14ae317fb30359ba2162a6f9a30ed7046710d5c6d66c44e99c36726de0db7be6,PodSandboxId:3668e32c15f5ec68dd6cda9b7bd2395a2ebc5b147a52395661b1b3eca5df2e5b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724089529683588593,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-347256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7b56cf9e890da5718f47889ffe5ca70,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dd9c53de4477b97df49b744ad39714d3fcc1e7ae85e213bdde3870d7bcc820d,PodSandboxId:a096a9ccc820f95f0a8edcb6822851979b2988c8a35ffa8a7ed8f86d94e49f26,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0457
33566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724089529733666189,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-347256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0045d8f687a843989cf184298e3dad56,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a11a191406026de18aba4209c1728adf2b7209255477f04b146aedbeda0efda6,PodSandboxId:ab5cb0c3935f4f7e1b3abd77178d77768afe8585fb1730e95646bb8797e29e76,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06
e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724089529643074083,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-347256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0aab542bc468b2cc945d8e0cb0ebc09,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d121671d-491c-4ff2-a13c-30b1ddaafa47 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:50:28 addons-347256 crio[688]: time="2024-08-19 17:50:28.637771788Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9bcc6f4d-1260-43e6-bb9d-4a85d80269dc name=/runtime.v1.RuntimeService/Version
	Aug 19 17:50:28 addons-347256 crio[688]: time="2024-08-19 17:50:28.637844248Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9bcc6f4d-1260-43e6-bb9d-4a85d80269dc name=/runtime.v1.RuntimeService/Version
	Aug 19 17:50:28 addons-347256 crio[688]: time="2024-08-19 17:50:28.639477758Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9f85bfed-942a-4965-9d5d-93352f85d0c8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:50:28 addons-347256 crio[688]: time="2024-08-19 17:50:28.641150876Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089828641119146,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9f85bfed-942a-4965-9d5d-93352f85d0c8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:50:28 addons-347256 crio[688]: time="2024-08-19 17:50:28.641949939Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=508edd7a-f8f3-4781-988e-2043637f0cdf name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:50:28 addons-347256 crio[688]: time="2024-08-19 17:50:28.642026243Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=508edd7a-f8f3-4781-988e-2043637f0cdf name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:50:28 addons-347256 crio[688]: time="2024-08-19 17:50:28.642391403Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8476ea9b84c5677585c184c2a19989966218130ebd87014f16b5d47b610a7bf8,PodSandboxId:3779df5f1e98a011b2ea48e7cbe08ffddc7b5d281e7a8764c73a15dc3a6f7517,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724089821391952812,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-8qm2m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5af6036b-6c99-4583-8178-c1691586b4ac,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e7d5836b553936a23f393970ad8deefd6feed3820d8bde258550dfc206c2757,PodSandboxId:2d699655663d41aaf47ea8ea106b572637b60af2d425235bb90695e5764e6820,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724089681587967061,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9632e6a7-a0a4-4456-ab6f-c0eab065596d,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0d586f0f8bb9c527dd638f71ad853fe948c4a48d287293667d737a19e672a35,PodSandboxId:08bede11788c378bfb268308132a6da06e96de1be20084257a399f82227c9f67,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724089633212039725,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 718e1c4f-05f0-49bc-b
fc2-7dda02db3d8e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:887dea0f933966b112e9ad70efc1f044446757f75739c5cb255b68b3b222b0f0,PodSandboxId:fc42703b1bc9312e9a19b23d2462a4272399d00ff50f79f8621469dfdb946e15,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724089600803634886,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-fp4q2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 329ebd23-58f1-421b-b957-bded5b3b7dfa,},Anno
tations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3805318d44f907dab4638a586c06d1cb6542a78589e190efa31843d900e89ac2,PodSandboxId:fa0328f7a39d54d66491af4c63cb10ce4fe05967b1b9fb8b7c29d9b6851f1efe,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724089600027936707,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6vt66,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e8f78
0ae-412a-4d24-a835-e0ab77d71426,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8d158637619c277ef9f9387b183af1877f4383af2f22aaddac3e7716ede8a08,PodSandboxId:3eaf4c04776d5d54255f718eb3e644edd9c7a10fe8ba78ea9e2654d6d547990b,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1724089592959813692,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-dtqhx,io.kube
rnetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: eb249463-f4d8-4b25-812f-c1e2f481cffd,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0ca4eb985ce7b35dd9be14e75a8600b776dd7af34e703f47903f17d58fe8638,PodSandboxId:6d360799061341a4737e1d4f98f5669097f47bf25108d2d14f200cb506dec1cc,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724089583620988219,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes
.pod.name: metrics-server-8988944d9-xkj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cb192e0-5048-46b0-b74e-86ad5e4d39ea,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07d88fd518a67130b2f237c0f5c0e12a105bb8b22bb8cf868165a3ab5c86352d,PodSandboxId:cd735c6b862bd1c732fb68674d6cd41ed8978073a26332dc9a9d99ed0d624e6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:17240895794
84877734,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8349a726-cf5d-472f-aec7-5dc582e1d9db,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f50bdc241404bd0f8b1a2869be786334bc01a4037c0b7eb743716d47d703a708,PodSandboxId:cd735c6b862bd1c732fb68674d6cd41ed8978073a26332dc9a9d99ed0d624e6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724089547450120897,Labe
ls:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8349a726-cf5d-472f-aec7-5dc582e1d9db,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:817e8e7f4f6f81e0aa32cdefd4ad54c86e041eae7b0332ae4b220e0e9677f3d1,PodSandboxId:b9ef181e4bc2fb5d786c49ce9f20820cd5cd872892e88631e663df6d9859b1e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724089545288887438,Labels:map[string]string{io.
kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-tljrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c9217a4-6879-4b7e-a6b5-78dfa1b85ee4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dd9c82747258473cc2ae88bf2e75164e2fbd3d2a2a5328ce9d086eb5cb4b4f2,PodSandboxId:ce7c12bfd63c75fe8a79e1405a6266f4a0c6d99e7c466ad7b948ae213dd82f9c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724089543193544008,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-72dbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a50d76ee-c7cb-4141-9bc3-2b530cb531e3,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b291bb577fbcf3d0301dc0987ddfdd1f8dfb5a7f0993ecb1d0ef1f47343437cd,PodSandboxId:27479f4e0074355c624d9156652214795ce8aca3f7a41bfd5ec5bbc60b915d11,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724089529692719593,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-347256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e13dfe28931d8226b47ef9afcba9b2fd,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14ae317fb30359ba2162a6f9a30ed7046710d5c6d66c44e99c36726de0db7be6,PodSandboxId:3668e32c15f5ec68dd6cda9b7bd2395a2ebc5b147a52395661b1b3eca5df2e5b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724089529683588593,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-347256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7b56cf9e890da5718f47889ffe5ca70,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dd9c53de4477b97df49b744ad39714d3fcc1e7ae85e213bdde3870d7bcc820d,PodSandboxId:a096a9ccc820f95f0a8edcb6822851979b2988c8a35ffa8a7ed8f86d94e49f26,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0457
33566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724089529733666189,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-347256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0045d8f687a843989cf184298e3dad56,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a11a191406026de18aba4209c1728adf2b7209255477f04b146aedbeda0efda6,PodSandboxId:ab5cb0c3935f4f7e1b3abd77178d77768afe8585fb1730e95646bb8797e29e76,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06
e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724089529643074083,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-347256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0aab542bc468b2cc945d8e0cb0ebc09,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=508edd7a-f8f3-4781-988e-2043637f0cdf name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8476ea9b84c56       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        7 seconds ago       Running             hello-world-app           0                   3779df5f1e98a       hello-world-app-55bf9c44b4-8qm2m
	1e7d5836b5539       docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0                              2 minutes ago       Running             nginx                     0                   2d699655663d4       nginx
	f0d586f0f8bb9       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   08bede11788c3       busybox
	887dea0f93396       ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242                                                             3 minutes ago       Exited              patch                     1                   fc42703b1bc93       ingress-nginx-admission-patch-fp4q2
	3805318d44f90       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   3 minutes ago       Exited              create                    0                   fa0328f7a39d5       ingress-nginx-admission-create-6vt66
	b8d158637619c       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             3 minutes ago       Running             local-path-provisioner    0                   3eaf4c04776d5       local-path-provisioner-86d989889c-dtqhx
	e0ca4eb985ce7       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        4 minutes ago       Running             metrics-server            0                   6d36079906134       metrics-server-8988944d9-xkj9p
	07d88fd518a67       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       1                   cd735c6b862bd       storage-provisioner
	f50bdc241404b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Exited              storage-provisioner       0                   cd735c6b862bd       storage-provisioner
	817e8e7f4f6f8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             4 minutes ago       Running             coredns                   0                   b9ef181e4bc2f       coredns-6f6b679f8f-tljrk
	9dd9c82747258       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                             4 minutes ago       Running             kube-proxy                0                   ce7c12bfd63c7       kube-proxy-72dbf
	5dd9c53de4477       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                             4 minutes ago       Running             kube-controller-manager   0                   a096a9ccc820f       kube-controller-manager-addons-347256
	b291bb577fbcf       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                             4 minutes ago       Running             kube-apiserver            0                   27479f4e00743       kube-apiserver-addons-347256
	14ae317fb3035       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                             4 minutes ago       Running             kube-scheduler            0                   3668e32c15f5e       kube-scheduler-addons-347256
	a11a191406026       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             4 minutes ago       Running             etcd                      0                   ab5cb0c3935f4       etcd-addons-347256
	
	
	==> coredns [817e8e7f4f6f81e0aa32cdefd4ad54c86e041eae7b0332ae4b220e0e9677f3d1] <==
	[INFO] 10.244.0.7:43484 - 18645 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000301851s
	[INFO] 10.244.0.7:48952 - 36607 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000093621s
	[INFO] 10.244.0.7:48952 - 39906 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000083434s
	[INFO] 10.244.0.7:40146 - 55082 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000072444s
	[INFO] 10.244.0.7:40146 - 29992 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000073758s
	[INFO] 10.244.0.7:36128 - 19863 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000110019s
	[INFO] 10.244.0.7:36128 - 17809 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000125615s
	[INFO] 10.244.0.7:43453 - 51962 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000083457s
	[INFO] 10.244.0.7:43453 - 28152 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000062594s
	[INFO] 10.244.0.7:50170 - 32131 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000091098s
	[INFO] 10.244.0.7:50170 - 46214 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000107393s
	[INFO] 10.244.0.7:47291 - 64507 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000034975s
	[INFO] 10.244.0.7:47291 - 2554 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000130142s
	[INFO] 10.244.0.7:37877 - 27398 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000071341s
	[INFO] 10.244.0.7:37877 - 27904 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000136128s
	[INFO] 10.244.0.22:50041 - 46218 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000339654s
	[INFO] 10.244.0.22:56671 - 22756 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00027197s
	[INFO] 10.244.0.22:51830 - 41490 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000346503s
	[INFO] 10.244.0.22:37631 - 51261 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000138901s
	[INFO] 10.244.0.22:39029 - 38277 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000200513s
	[INFO] 10.244.0.22:58825 - 34536 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000117903s
	[INFO] 10.244.0.22:35318 - 16680 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.001008787s
	[INFO] 10.244.0.22:37761 - 15849 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000848032s
	[INFO] 10.244.0.26:41999 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000777692s
	[INFO] 10.244.0.26:44907 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000201588s
	
	
	==> describe nodes <==
	Name:               addons-347256
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-347256
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411
	                    minikube.k8s.io/name=addons-347256
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T17_45_35_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-347256
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 17:45:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-347256
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 17:50:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 17:48:38 +0000   Mon, 19 Aug 2024 17:45:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 17:48:38 +0000   Mon, 19 Aug 2024 17:45:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 17:48:38 +0000   Mon, 19 Aug 2024 17:45:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 17:48:38 +0000   Mon, 19 Aug 2024 17:45:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.18
	  Hostname:    addons-347256
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 35472ba804c549e2b72c7b2d4f9a9d4d
	  System UUID:                35472ba8-04c5-49e2-b72c-7b2d4f9a9d4d
	  Boot ID:                    6579c4ae-0068-42a1-8c4f-735c1b3576dd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m19s
	  default                     hello-world-app-55bf9c44b4-8qm2m           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 coredns-6f6b679f8f-tljrk                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m48s
	  kube-system                 etcd-addons-347256                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m54s
	  kube-system                 kube-apiserver-addons-347256               250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-controller-manager-addons-347256      200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-proxy-72dbf                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 kube-scheduler-addons-347256               100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 metrics-server-8988944d9-xkj9p             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m43s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  local-path-storage          local-path-provisioner-86d989889c-dtqhx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m39s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m59s (x8 over 4m59s)  kubelet          Node addons-347256 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m59s (x8 over 4m59s)  kubelet          Node addons-347256 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m59s (x7 over 4m59s)  kubelet          Node addons-347256 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m54s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m53s                  kubelet          Node addons-347256 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m53s                  kubelet          Node addons-347256 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m53s                  kubelet          Node addons-347256 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m52s                  kubelet          Node addons-347256 status is now: NodeReady
	  Normal  RegisteredNode           4m49s                  node-controller  Node addons-347256 event: Registered Node addons-347256 in Controller
	
	
	==> dmesg <==
	[  +6.434381] kauditd_printk_skb: 54 callbacks suppressed
	[Aug19 17:46] kauditd_printk_skb: 5 callbacks suppressed
	[ +19.176114] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.782463] kauditd_printk_skb: 6 callbacks suppressed
	[ +11.926528] kauditd_printk_skb: 44 callbacks suppressed
	[  +5.205048] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.026315] kauditd_printk_skb: 60 callbacks suppressed
	[  +5.395662] kauditd_printk_skb: 17 callbacks suppressed
	[Aug19 17:47] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.151355] kauditd_printk_skb: 44 callbacks suppressed
	[ +13.843201] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.923942] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.192477] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.005470] kauditd_printk_skb: 55 callbacks suppressed
	[  +5.126435] kauditd_printk_skb: 44 callbacks suppressed
	[  +6.278976] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.594347] kauditd_printk_skb: 15 callbacks suppressed
	[Aug19 17:48] kauditd_printk_skb: 23 callbacks suppressed
	[  +8.389893] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.296726] kauditd_printk_skb: 11 callbacks suppressed
	[  +8.750720] kauditd_printk_skb: 7 callbacks suppressed
	[ +14.570377] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.304267] kauditd_printk_skb: 33 callbacks suppressed
	[Aug19 17:50] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.228384] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [a11a191406026de18aba4209c1728adf2b7209255477f04b146aedbeda0efda6] <==
	{"level":"warn","ts":"2024-08-19T17:47:00.708904Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T17:47:00.377462Z","time spent":"331.432132ms","remote":"127.0.0.1:53006","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1136,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2024-08-19T17:47:00.714434Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T17:47:00.240889Z","time spent":"466.531973ms","remote":"127.0.0.1:53094","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":482,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1120 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:419 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"info","ts":"2024-08-19T17:47:39.134963Z","caller":"traceutil/trace.go:171","msg":"trace[1961973197] linearizableReadLoop","detail":"{readStateIndex:1437; appliedIndex:1436; }","duration":"162.911752ms","start":"2024-08-19T17:47:38.972027Z","end":"2024-08-19T17:47:39.134938Z","steps":["trace[1961973197] 'read index received'  (duration: 162.798404ms)","trace[1961973197] 'applied index is now lower than readState.Index'  (duration: 112.865µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T17:47:39.135173Z","caller":"traceutil/trace.go:171","msg":"trace[1694973947] transaction","detail":"{read_only:false; response_revision:1393; number_of_response:1; }","duration":"195.398548ms","start":"2024-08-19T17:47:38.939763Z","end":"2024-08-19T17:47:39.135162Z","steps":["trace[1694973947] 'process raft request'  (duration: 195.099287ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:47:39.135355Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.642251ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T17:47:39.135456Z","caller":"traceutil/trace.go:171","msg":"trace[1782506604] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1393; }","duration":"126.813933ms","start":"2024-08-19T17:47:39.008632Z","end":"2024-08-19T17:47:39.135446Z","steps":["trace[1782506604] 'agreement among raft nodes before linearized reading'  (duration: 126.597125ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:47:39.135630Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"163.654065ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-08-19T17:47:39.135678Z","caller":"traceutil/trace.go:171","msg":"trace[1312877723] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1393; }","duration":"163.717625ms","start":"2024-08-19T17:47:38.971951Z","end":"2024-08-19T17:47:39.135668Z","steps":["trace[1312877723] 'agreement among raft nodes before linearized reading'  (duration: 163.612739ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:47:39.136068Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.798609ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses/\" range_end:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-19T17:47:39.137935Z","caller":"traceutil/trace.go:171","msg":"trace[1064821462] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshotclasses/; range_end:/registry/snapshot.storage.k8s.io/volumesnapshotclasses0; response_count:0; response_revision:1393; }","duration":"117.666562ms","start":"2024-08-19T17:47:39.020253Z","end":"2024-08-19T17:47:39.137919Z","steps":["trace[1064821462] 'agreement among raft nodes before linearized reading'  (duration: 115.780855ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:47:39.138076Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.323354ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc\" ","response":"range_response_count:1 size:822"}
	{"level":"info","ts":"2024-08-19T17:47:39.138134Z","caller":"traceutil/trace.go:171","msg":"trace[1843115512] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc; range_end:; response_count:1; response_revision:1393; }","duration":"124.377661ms","start":"2024-08-19T17:47:39.013740Z","end":"2024-08-19T17:47:39.138118Z","steps":["trace[1843115512] 'agreement among raft nodes before linearized reading'  (duration: 124.260806ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:48:01.461905Z","caller":"traceutil/trace.go:171","msg":"trace[11828694] transaction","detail":"{read_only:false; response_revision:1566; number_of_response:1; }","duration":"144.808741ms","start":"2024-08-19T17:48:01.317070Z","end":"2024-08-19T17:48:01.461879Z","steps":["trace[11828694] 'process raft request'  (duration: 144.718123ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:48:09.629024Z","caller":"traceutil/trace.go:171","msg":"trace[1950804823] linearizableReadLoop","detail":"{readStateIndex:1689; appliedIndex:1688; }","duration":"131.807394ms","start":"2024-08-19T17:48:09.497201Z","end":"2024-08-19T17:48:09.629009Z","steps":["trace[1950804823] 'read index received'  (duration: 70.793225ms)","trace[1950804823] 'applied index is now lower than readState.Index'  (duration: 61.012927ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T17:48:09.629696Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.303298ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-08-19T17:48:09.629940Z","caller":"traceutil/trace.go:171","msg":"trace[1035633527] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1632; }","duration":"132.726889ms","start":"2024-08-19T17:48:09.497197Z","end":"2024-08-19T17:48:09.629924Z","steps":["trace[1035633527] 'agreement among raft nodes before linearized reading'  (duration: 132.022215ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:48:17.222506Z","caller":"traceutil/trace.go:171","msg":"trace[146959868] transaction","detail":"{read_only:false; response_revision:1655; number_of_response:1; }","duration":"430.624283ms","start":"2024-08-19T17:48:16.791861Z","end":"2024-08-19T17:48:17.222485Z","steps":["trace[146959868] 'process raft request'  (duration: 430.431459ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:48:17.222785Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T17:48:16.791841Z","time spent":"430.794325ms","remote":"127.0.0.1:53094","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":539,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1648 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:452 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"info","ts":"2024-08-19T17:48:17.223157Z","caller":"traceutil/trace.go:171","msg":"trace[784703065] linearizableReadLoop","detail":"{readStateIndex:1713; appliedIndex:1712; }","duration":"214.749179ms","start":"2024-08-19T17:48:17.008400Z","end":"2024-08-19T17:48:17.223149Z","steps":["trace[784703065] 'read index received'  (duration: 213.830506ms)","trace[784703065] 'applied index is now lower than readState.Index'  (duration: 917.789µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T17:48:17.244061Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"235.641873ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T17:48:17.244127Z","caller":"traceutil/trace.go:171","msg":"trace[1618827778] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1655; }","duration":"235.721741ms","start":"2024-08-19T17:48:17.008392Z","end":"2024-08-19T17:48:17.244114Z","steps":["trace[1618827778] 'agreement among raft nodes before linearized reading'  (duration: 215.718599ms)","trace[1618827778] 'range keys from in-memory index tree'  (duration: 19.90863ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T17:48:17.244453Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"197.181071ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T17:48:17.244478Z","caller":"traceutil/trace.go:171","msg":"trace[1829104736] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1655; }","duration":"197.211814ms","start":"2024-08-19T17:48:17.047258Z","end":"2024-08-19T17:48:17.244470Z","steps":["trace[1829104736] 'agreement among raft nodes before linearized reading'  (duration: 176.882024ms)","trace[1829104736] 'range keys from in-memory index tree'  (duration: 20.285588ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T17:48:17.245573Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.113069ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T17:48:17.245655Z","caller":"traceutil/trace.go:171","msg":"trace[1580977739] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1655; }","duration":"118.201402ms","start":"2024-08-19T17:48:17.127445Z","end":"2024-08-19T17:48:17.245646Z","steps":["trace[1580977739] 'agreement among raft nodes before linearized reading'  (duration: 96.703234ms)","trace[1580977739] 'range keys from in-memory index tree'  (duration: 21.391611ms)"],"step_count":2}
	
	
	==> kernel <==
	 17:50:29 up 5 min,  0 users,  load average: 0.51, 1.16, 0.62
	Linux addons-347256 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b291bb577fbcf3d0301dc0987ddfdd1f8dfb5a7f0993ecb1d0ef1f47343437cd] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0819 17:47:25.898229       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.167.193:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.167.193:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.167.193:443: connect: connection refused" logger="UnhandledError"
	E0819 17:47:25.905201       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.167.193:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.167.193:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.167.193:443: connect: connection refused" logger="UnhandledError"
	I0819 17:47:25.971666       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0819 17:47:52.009142       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0819 17:47:53.042030       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0819 17:47:57.028448       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0819 17:47:57.236173       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.225.232"}
	I0819 17:48:04.952667       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.100.254.220"}
	I0819 17:48:25.195432       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0819 17:48:48.987259       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 17:48:48.991505       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 17:48:49.028010       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 17:48:49.028083       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 17:48:49.081406       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 17:48:49.081566       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 17:48:49.128525       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 17:48:49.128672       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 17:48:49.156622       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 17:48:49.156672       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0819 17:48:50.128687       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0819 17:48:50.157431       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0819 17:48:50.243303       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0819 17:50:18.500584       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.72.73"}
	
	
	==> kube-controller-manager [5dd9c53de4477b97df49b744ad39714d3fcc1e7ae85e213bdde3870d7bcc820d] <==
	W0819 17:49:20.904129       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 17:49:20.904454       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 17:49:25.361245       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 17:49:25.361289       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 17:49:28.374649       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 17:49:28.374702       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 17:49:46.843356       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 17:49:46.843476       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 17:49:54.362848       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 17:49:54.362979       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 17:50:07.921548       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 17:50:07.921717       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 17:50:12.211274       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 17:50:12.211350       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0819 17:50:18.315762       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="35.281532ms"
	I0819 17:50:18.343247       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="27.420054ms"
	I0819 17:50:18.352968       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="9.668085ms"
	I0819 17:50:18.353055       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="34.902µs"
	I0819 17:50:20.683791       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0819 17:50:20.689438       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="5.618µs"
	I0819 17:50:20.693132       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0819 17:50:22.038276       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="9.570089ms"
	I0819 17:50:22.038569       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="77.88µs"
	W0819 17:50:28.487523       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 17:50:28.487559       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [9dd9c82747258473cc2ae88bf2e75164e2fbd3d2a2a5328ce9d086eb5cb4b4f2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 17:45:48.713427       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 17:45:48.772286       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.18"]
	E0819 17:45:48.773016       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 17:45:48.931067       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 17:45:48.931190       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 17:45:48.931217       1 server_linux.go:169] "Using iptables Proxier"
	I0819 17:45:48.935507       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 17:45:48.935737       1 server.go:483] "Version info" version="v1.31.0"
	I0819 17:45:48.935766       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 17:45:49.011363       1 config.go:197] "Starting service config controller"
	I0819 17:45:49.013605       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 17:45:49.013796       1 config.go:104] "Starting endpoint slice config controller"
	I0819 17:45:49.013804       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 17:45:49.022834       1 config.go:326] "Starting node config controller"
	I0819 17:45:49.022867       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 17:45:49.113787       1 shared_informer.go:320] Caches are synced for service config
	I0819 17:45:49.113873       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 17:45:49.124284       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [14ae317fb30359ba2162a6f9a30ed7046710d5c6d66c44e99c36726de0db7be6] <==
	W0819 17:45:32.371521       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 17:45:32.371539       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:45:32.371594       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 17:45:32.371631       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 17:45:32.371753       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 17:45:32.371783       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:45:32.371880       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0819 17:45:32.371908       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:45:32.371965       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 17:45:32.371993       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 17:45:32.372462       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 17:45:32.372551       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 17:45:33.208177       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 17:45:33.208249       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 17:45:33.283557       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 17:45:33.283814       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 17:45:33.377512       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0819 17:45:33.377562       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 17:45:33.389127       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 17:45:33.389259       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 17:45:33.550958       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0819 17:45:33.551012       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:45:33.815203       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 17:45:33.815368       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0819 17:45:36.659223       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 17:50:18 addons-347256 kubelet[1228]: I0819 17:50:18.312254    1228 memory_manager.go:354] "RemoveStaleState removing state" podUID="16796ce0-7f87-46f8-a9a7-0afa96f3f575" containerName="liveness-probe"
	Aug 19 17:50:18 addons-347256 kubelet[1228]: I0819 17:50:18.418165    1228 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8256g\" (UniqueName: \"kubernetes.io/projected/5af6036b-6c99-4583-8178-c1691586b4ac-kube-api-access-8256g\") pod \"hello-world-app-55bf9c44b4-8qm2m\" (UID: \"5af6036b-6c99-4583-8178-c1691586b4ac\") " pod="default/hello-world-app-55bf9c44b4-8qm2m"
	Aug 19 17:50:19 addons-347256 kubelet[1228]: I0819 17:50:19.427439    1228 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlm5d\" (UniqueName: \"kubernetes.io/projected/44cd9847-645d-4375-b58a-d153a852f2c7-kube-api-access-qlm5d\") pod \"44cd9847-645d-4375-b58a-d153a852f2c7\" (UID: \"44cd9847-645d-4375-b58a-d153a852f2c7\") "
	Aug 19 17:50:19 addons-347256 kubelet[1228]: I0819 17:50:19.431200    1228 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44cd9847-645d-4375-b58a-d153a852f2c7-kube-api-access-qlm5d" (OuterVolumeSpecName: "kube-api-access-qlm5d") pod "44cd9847-645d-4375-b58a-d153a852f2c7" (UID: "44cd9847-645d-4375-b58a-d153a852f2c7"). InnerVolumeSpecName "kube-api-access-qlm5d". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 19 17:50:19 addons-347256 kubelet[1228]: I0819 17:50:19.527940    1228 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-qlm5d\" (UniqueName: \"kubernetes.io/projected/44cd9847-645d-4375-b58a-d153a852f2c7-kube-api-access-qlm5d\") on node \"addons-347256\" DevicePath \"\""
	Aug 19 17:50:20 addons-347256 kubelet[1228]: I0819 17:50:20.003157    1228 scope.go:117] "RemoveContainer" containerID="9b48b1a61458f0e62f8ee33238d3b77998c2fdfa0562127ee0cb03222de32d78"
	Aug 19 17:50:20 addons-347256 kubelet[1228]: I0819 17:50:20.033515    1228 scope.go:117] "RemoveContainer" containerID="9b48b1a61458f0e62f8ee33238d3b77998c2fdfa0562127ee0cb03222de32d78"
	Aug 19 17:50:20 addons-347256 kubelet[1228]: E0819 17:50:20.034667    1228 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b48b1a61458f0e62f8ee33238d3b77998c2fdfa0562127ee0cb03222de32d78\": container with ID starting with 9b48b1a61458f0e62f8ee33238d3b77998c2fdfa0562127ee0cb03222de32d78 not found: ID does not exist" containerID="9b48b1a61458f0e62f8ee33238d3b77998c2fdfa0562127ee0cb03222de32d78"
	Aug 19 17:50:20 addons-347256 kubelet[1228]: I0819 17:50:20.034719    1228 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b48b1a61458f0e62f8ee33238d3b77998c2fdfa0562127ee0cb03222de32d78"} err="failed to get container status \"9b48b1a61458f0e62f8ee33238d3b77998c2fdfa0562127ee0cb03222de32d78\": rpc error: code = NotFound desc = could not find container \"9b48b1a61458f0e62f8ee33238d3b77998c2fdfa0562127ee0cb03222de32d78\": container with ID starting with 9b48b1a61458f0e62f8ee33238d3b77998c2fdfa0562127ee0cb03222de32d78 not found: ID does not exist"
	Aug 19 17:50:20 addons-347256 kubelet[1228]: I0819 17:50:20.867673    1228 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="329ebd23-58f1-421b-b957-bded5b3b7dfa" path="/var/lib/kubelet/pods/329ebd23-58f1-421b-b957-bded5b3b7dfa/volumes"
	Aug 19 17:50:20 addons-347256 kubelet[1228]: I0819 17:50:20.868097    1228 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44cd9847-645d-4375-b58a-d153a852f2c7" path="/var/lib/kubelet/pods/44cd9847-645d-4375-b58a-d153a852f2c7/volumes"
	Aug 19 17:50:20 addons-347256 kubelet[1228]: I0819 17:50:20.868502    1228 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8f780ae-412a-4d24-a835-e0ab77d71426" path="/var/lib/kubelet/pods/e8f780ae-412a-4d24-a835-e0ab77d71426/volumes"
	Aug 19 17:50:23 addons-347256 kubelet[1228]: I0819 17:50:23.974129    1228 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8bpb\" (UniqueName: \"kubernetes.io/projected/0ad978e9-ae74-4179-b4bd-d698a537b143-kube-api-access-k8bpb\") pod \"0ad978e9-ae74-4179-b4bd-d698a537b143\" (UID: \"0ad978e9-ae74-4179-b4bd-d698a537b143\") "
	Aug 19 17:50:23 addons-347256 kubelet[1228]: I0819 17:50:23.974202    1228 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0ad978e9-ae74-4179-b4bd-d698a537b143-webhook-cert\") pod \"0ad978e9-ae74-4179-b4bd-d698a537b143\" (UID: \"0ad978e9-ae74-4179-b4bd-d698a537b143\") "
	Aug 19 17:50:23 addons-347256 kubelet[1228]: I0819 17:50:23.980522    1228 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ad978e9-ae74-4179-b4bd-d698a537b143-kube-api-access-k8bpb" (OuterVolumeSpecName: "kube-api-access-k8bpb") pod "0ad978e9-ae74-4179-b4bd-d698a537b143" (UID: "0ad978e9-ae74-4179-b4bd-d698a537b143"). InnerVolumeSpecName "kube-api-access-k8bpb". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 19 17:50:23 addons-347256 kubelet[1228]: I0819 17:50:23.981008    1228 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ad978e9-ae74-4179-b4bd-d698a537b143-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "0ad978e9-ae74-4179-b4bd-d698a537b143" (UID: "0ad978e9-ae74-4179-b4bd-d698a537b143"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 19 17:50:24 addons-347256 kubelet[1228]: I0819 17:50:24.025726    1228 scope.go:117] "RemoveContainer" containerID="f55d63e0c129592fab9309991588bd4c021a2abf322dc9958db527757b021ad8"
	Aug 19 17:50:24 addons-347256 kubelet[1228]: I0819 17:50:24.046526    1228 scope.go:117] "RemoveContainer" containerID="f55d63e0c129592fab9309991588bd4c021a2abf322dc9958db527757b021ad8"
	Aug 19 17:50:24 addons-347256 kubelet[1228]: E0819 17:50:24.046997    1228 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f55d63e0c129592fab9309991588bd4c021a2abf322dc9958db527757b021ad8\": container with ID starting with f55d63e0c129592fab9309991588bd4c021a2abf322dc9958db527757b021ad8 not found: ID does not exist" containerID="f55d63e0c129592fab9309991588bd4c021a2abf322dc9958db527757b021ad8"
	Aug 19 17:50:24 addons-347256 kubelet[1228]: I0819 17:50:24.047033    1228 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f55d63e0c129592fab9309991588bd4c021a2abf322dc9958db527757b021ad8"} err="failed to get container status \"f55d63e0c129592fab9309991588bd4c021a2abf322dc9958db527757b021ad8\": rpc error: code = NotFound desc = could not find container \"f55d63e0c129592fab9309991588bd4c021a2abf322dc9958db527757b021ad8\": container with ID starting with f55d63e0c129592fab9309991588bd4c021a2abf322dc9958db527757b021ad8 not found: ID does not exist"
	Aug 19 17:50:24 addons-347256 kubelet[1228]: I0819 17:50:24.075036    1228 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-k8bpb\" (UniqueName: \"kubernetes.io/projected/0ad978e9-ae74-4179-b4bd-d698a537b143-kube-api-access-k8bpb\") on node \"addons-347256\" DevicePath \"\""
	Aug 19 17:50:24 addons-347256 kubelet[1228]: I0819 17:50:24.075074    1228 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0ad978e9-ae74-4179-b4bd-d698a537b143-webhook-cert\") on node \"addons-347256\" DevicePath \"\""
	Aug 19 17:50:24 addons-347256 kubelet[1228]: I0819 17:50:24.866905    1228 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ad978e9-ae74-4179-b4bd-d698a537b143" path="/var/lib/kubelet/pods/0ad978e9-ae74-4179-b4bd-d698a537b143/volumes"
	Aug 19 17:50:25 addons-347256 kubelet[1228]: E0819 17:50:25.565598    1228 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089825564942181,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:50:25 addons-347256 kubelet[1228]: E0819 17:50:25.565663    1228 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089825564942181,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [07d88fd518a67130b2f237c0f5c0e12a105bb8b22bb8cf868165a3ab5c86352d] <==
	I0819 17:46:19.640054       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 17:46:19.653780       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 17:46:19.653845       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 17:46:19.668042       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 17:46:19.668211       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-347256_f07fd665-8ebe-45c8-941a-b2b23a7a38b0!
	I0819 17:46:19.673594       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b5e7af78-e96a-4168-93cd-a759afdeb66d", APIVersion:"v1", ResourceVersion:"940", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-347256_f07fd665-8ebe-45c8-941a-b2b23a7a38b0 became leader
	I0819 17:46:19.768444       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-347256_f07fd665-8ebe-45c8-941a-b2b23a7a38b0!
	
	
	==> storage-provisioner [f50bdc241404bd0f8b1a2869be786334bc01a4037c0b7eb743716d47d703a708] <==
	I0819 17:45:49.262885       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0819 17:46:19.266603       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-347256 -n addons-347256
helpers_test.go:261: (dbg) Run:  kubectl --context addons-347256 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (153.12s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (329.3s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.967557ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-xkj9p" [2cb192e0-5048-46b0-b74e-86ad5e4d39ea] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003689424s
addons_test.go:417: (dbg) Run:  kubectl --context addons-347256 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-347256 top pods -n kube-system: exit status 1 (74.894964ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-tljrk, age: 2m6.161295404s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-347256 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-347256 top pods -n kube-system: exit status 1 (64.587438ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-tljrk, age: 2m8.55360148s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-347256 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-347256 top pods -n kube-system: exit status 1 (88.088833ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-tljrk, age: 2m13.59872517s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-347256 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-347256 top pods -n kube-system: exit status 1 (78.043821ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-tljrk, age: 2m20.450241173s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-347256 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-347256 top pods -n kube-system: exit status 1 (71.088807ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-tljrk, age: 2m34.408811632s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-347256 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-347256 top pods -n kube-system: exit status 1 (66.711143ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-tljrk, age: 2m43.699350594s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-347256 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-347256 top pods -n kube-system: exit status 1 (63.891982ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-tljrk, age: 3m6.90895813s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-347256 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-347256 top pods -n kube-system: exit status 1 (66.280626ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-tljrk, age: 3m43.39244551s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-347256 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-347256 top pods -n kube-system: exit status 1 (79.379647ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-tljrk, age: 4m49.578248154s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-347256 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-347256 top pods -n kube-system: exit status 1 (65.258692ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-tljrk, age: 5m30.787312818s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-347256 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-347256 top pods -n kube-system: exit status 1 (67.647812ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-tljrk, age: 6m43.572227511s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-347256 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-347256 top pods -n kube-system: exit status 1 (69.782881ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-tljrk, age: 7m27.655878133s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-347256 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-347256 -n addons-347256
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-347256 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-347256 logs -n 25: (1.347911426s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-891667                                                                     | download-only-891667 | jenkins | v1.33.1 | 19 Aug 24 17:44 UTC | 19 Aug 24 17:44 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-807766 | jenkins | v1.33.1 | 19 Aug 24 17:44 UTC |                     |
	|         | binary-mirror-807766                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:38687                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-807766                                                                     | binary-mirror-807766 | jenkins | v1.33.1 | 19 Aug 24 17:44 UTC | 19 Aug 24 17:44 UTC |
	| addons  | disable dashboard -p                                                                        | addons-347256        | jenkins | v1.33.1 | 19 Aug 24 17:44 UTC |                     |
	|         | addons-347256                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-347256        | jenkins | v1.33.1 | 19 Aug 24 17:44 UTC |                     |
	|         | addons-347256                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-347256 --wait=true                                                                | addons-347256        | jenkins | v1.33.1 | 19 Aug 24 17:44 UTC | 19 Aug 24 17:47 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-347256 addons disable                                                                | addons-347256        | jenkins | v1.33.1 | 19 Aug 24 17:47 UTC | 19 Aug 24 17:47 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-347256 addons disable                                                                | addons-347256        | jenkins | v1.33.1 | 19 Aug 24 17:47 UTC | 19 Aug 24 17:47 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-347256 ssh cat                                                                       | addons-347256        | jenkins | v1.33.1 | 19 Aug 24 17:47 UTC | 19 Aug 24 17:47 UTC |
	|         | /opt/local-path-provisioner/pvc-94a0ff27-15d3-467a-86db-027973dec176_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-347256 addons disable                                                                | addons-347256        | jenkins | v1.33.1 | 19 Aug 24 17:47 UTC | 19 Aug 24 17:47 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-347256 ip                                                                            | addons-347256        | jenkins | v1.33.1 | 19 Aug 24 17:47 UTC | 19 Aug 24 17:47 UTC |
	| addons  | addons-347256 addons disable                                                                | addons-347256        | jenkins | v1.33.1 | 19 Aug 24 17:47 UTC | 19 Aug 24 17:47 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-347256        | jenkins | v1.33.1 | 19 Aug 24 17:47 UTC | 19 Aug 24 17:47 UTC |
	|         | -p addons-347256                                                                            |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-347256        | jenkins | v1.33.1 | 19 Aug 24 17:47 UTC | 19 Aug 24 17:47 UTC |
	|         | addons-347256                                                                               |                      |         |         |                     |                     |
	| addons  | addons-347256 addons disable                                                                | addons-347256        | jenkins | v1.33.1 | 19 Aug 24 17:47 UTC | 19 Aug 24 17:47 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-347256        | jenkins | v1.33.1 | 19 Aug 24 17:48 UTC | 19 Aug 24 17:48 UTC |
	|         | addons-347256                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-347256        | jenkins | v1.33.1 | 19 Aug 24 17:48 UTC | 19 Aug 24 17:48 UTC |
	|         | -p addons-347256                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-347256 ssh curl -s                                                                   | addons-347256        | jenkins | v1.33.1 | 19 Aug 24 17:48 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-347256 addons disable                                                                | addons-347256        | jenkins | v1.33.1 | 19 Aug 24 17:48 UTC | 19 Aug 24 17:48 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-347256 addons                                                                        | addons-347256        | jenkins | v1.33.1 | 19 Aug 24 17:48 UTC | 19 Aug 24 17:48 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-347256 addons                                                                        | addons-347256        | jenkins | v1.33.1 | 19 Aug 24 17:48 UTC | 19 Aug 24 17:48 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-347256 ip                                                                            | addons-347256        | jenkins | v1.33.1 | 19 Aug 24 17:50 UTC | 19 Aug 24 17:50 UTC |
	| addons  | addons-347256 addons disable                                                                | addons-347256        | jenkins | v1.33.1 | 19 Aug 24 17:50 UTC | 19 Aug 24 17:50 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-347256 addons disable                                                                | addons-347256        | jenkins | v1.33.1 | 19 Aug 24 17:50 UTC | 19 Aug 24 17:50 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-347256 addons                                                                        | addons-347256        | jenkins | v1.33.1 | 19 Aug 24 17:53 UTC | 19 Aug 24 17:53 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 17:44:53
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 17:44:53.053605  380723 out.go:345] Setting OutFile to fd 1 ...
	I0819 17:44:53.053816  380723 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:44:53.053825  380723 out.go:358] Setting ErrFile to fd 2...
	I0819 17:44:53.053829  380723 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:44:53.053984  380723 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-372744/.minikube/bin
	I0819 17:44:53.054561  380723 out.go:352] Setting JSON to false
	I0819 17:44:53.055529  380723 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":5236,"bootTime":1724084257,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 17:44:53.055588  380723 start.go:139] virtualization: kvm guest
	I0819 17:44:53.057502  380723 out.go:177] * [addons-347256] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 17:44:53.058661  380723 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 17:44:53.058673  380723 notify.go:220] Checking for updates...
	I0819 17:44:53.061327  380723 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 17:44:53.062544  380723 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 17:44:53.063749  380723 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 17:44:53.064862  380723 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 17:44:53.066072  380723 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 17:44:53.067543  380723 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 17:44:53.099698  380723 out.go:177] * Using the kvm2 driver based on user configuration
	I0819 17:44:53.101139  380723 start.go:297] selected driver: kvm2
	I0819 17:44:53.101170  380723 start.go:901] validating driver "kvm2" against <nil>
	I0819 17:44:53.101186  380723 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 17:44:53.101949  380723 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 17:44:53.102038  380723 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19468-372744/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 17:44:53.117529  380723 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 17:44:53.117602  380723 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 17:44:53.117831  380723 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 17:44:53.117895  380723 cni.go:84] Creating CNI manager for ""
	I0819 17:44:53.117908  380723 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 17:44:53.117915  380723 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 17:44:53.117968  380723 start.go:340] cluster config:
	{Name:addons-347256 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-347256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 17:44:53.118081  380723 iso.go:125] acquiring lock: {Name:mk4c0ac1c3202b1a296739df622960e7a0bd8566 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 17:44:53.119877  380723 out.go:177] * Starting "addons-347256" primary control-plane node in "addons-347256" cluster
	I0819 17:44:53.121147  380723 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 17:44:53.121182  380723 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 17:44:53.121193  380723 cache.go:56] Caching tarball of preloaded images
	I0819 17:44:53.121260  380723 preload.go:172] Found /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 17:44:53.121270  380723 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 17:44:53.121582  380723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/config.json ...
	I0819 17:44:53.121602  380723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/config.json: {Name:mkfeca91554d7bf1aa95ccb29e2b8c6aa486d7f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:44:53.121742  380723 start.go:360] acquireMachinesLock for addons-347256: {Name:mk24ba67a747357e9ce40f1e460d2bb0bc59cc75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 17:44:53.121790  380723 start.go:364] duration metric: took 35.232µs to acquireMachinesLock for "addons-347256"
	I0819 17:44:53.121808  380723 start.go:93] Provisioning new machine with config: &{Name:addons-347256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-347256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 17:44:53.121866  380723 start.go:125] createHost starting for "" (driver="kvm2")
	I0819 17:44:53.123421  380723 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0819 17:44:53.123561  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:44:53.123597  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:44:53.138179  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35243
	I0819 17:44:53.138753  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:44:53.139343  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:44:53.139366  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:44:53.139760  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:44:53.139989  380723 main.go:141] libmachine: (addons-347256) Calling .GetMachineName
	I0819 17:44:53.140132  380723 main.go:141] libmachine: (addons-347256) Calling .DriverName
	I0819 17:44:53.140302  380723 start.go:159] libmachine.API.Create for "addons-347256" (driver="kvm2")
	I0819 17:44:53.140330  380723 client.go:168] LocalClient.Create starting
	I0819 17:44:53.140379  380723 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem
	I0819 17:44:53.336351  380723 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem
	I0819 17:44:53.702401  380723 main.go:141] libmachine: Running pre-create checks...
	I0819 17:44:53.702433  380723 main.go:141] libmachine: (addons-347256) Calling .PreCreateCheck
	I0819 17:44:53.703016  380723 main.go:141] libmachine: (addons-347256) Calling .GetConfigRaw
	I0819 17:44:53.703451  380723 main.go:141] libmachine: Creating machine...
	I0819 17:44:53.703470  380723 main.go:141] libmachine: (addons-347256) Calling .Create
	I0819 17:44:53.703647  380723 main.go:141] libmachine: (addons-347256) Creating KVM machine...
	I0819 17:44:53.704830  380723 main.go:141] libmachine: (addons-347256) DBG | found existing default KVM network
	I0819 17:44:53.705633  380723 main.go:141] libmachine: (addons-347256) DBG | I0819 17:44:53.705486  380745 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0819 17:44:53.705663  380723 main.go:141] libmachine: (addons-347256) DBG | created network xml: 
	I0819 17:44:53.705682  380723 main.go:141] libmachine: (addons-347256) DBG | <network>
	I0819 17:44:53.705690  380723 main.go:141] libmachine: (addons-347256) DBG |   <name>mk-addons-347256</name>
	I0819 17:44:53.705697  380723 main.go:141] libmachine: (addons-347256) DBG |   <dns enable='no'/>
	I0819 17:44:53.705702  380723 main.go:141] libmachine: (addons-347256) DBG |   
	I0819 17:44:53.705708  380723 main.go:141] libmachine: (addons-347256) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0819 17:44:53.705714  380723 main.go:141] libmachine: (addons-347256) DBG |     <dhcp>
	I0819 17:44:53.705720  380723 main.go:141] libmachine: (addons-347256) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0819 17:44:53.705727  380723 main.go:141] libmachine: (addons-347256) DBG |     </dhcp>
	I0819 17:44:53.705732  380723 main.go:141] libmachine: (addons-347256) DBG |   </ip>
	I0819 17:44:53.705741  380723 main.go:141] libmachine: (addons-347256) DBG |   
	I0819 17:44:53.705749  380723 main.go:141] libmachine: (addons-347256) DBG | </network>
	I0819 17:44:53.705759  380723 main.go:141] libmachine: (addons-347256) DBG | 
	I0819 17:44:53.710875  380723 main.go:141] libmachine: (addons-347256) DBG | trying to create private KVM network mk-addons-347256 192.168.39.0/24...
	I0819 17:44:53.774194  380723 main.go:141] libmachine: (addons-347256) DBG | private KVM network mk-addons-347256 192.168.39.0/24 created
	I0819 17:44:53.774243  380723 main.go:141] libmachine: (addons-347256) Setting up store path in /home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256 ...
	I0819 17:44:53.774262  380723 main.go:141] libmachine: (addons-347256) DBG | I0819 17:44:53.774137  380745 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 17:44:53.774279  380723 main.go:141] libmachine: (addons-347256) Building disk image from file:///home/jenkins/minikube-integration/19468-372744/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 17:44:53.774313  380723 main.go:141] libmachine: (addons-347256) Downloading /home/jenkins/minikube-integration/19468-372744/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19468-372744/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 17:44:54.046509  380723 main.go:141] libmachine: (addons-347256) DBG | I0819 17:44:54.046347  380745 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/id_rsa...
	I0819 17:44:54.180081  380723 main.go:141] libmachine: (addons-347256) DBG | I0819 17:44:54.179903  380745 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/addons-347256.rawdisk...
	I0819 17:44:54.180124  380723 main.go:141] libmachine: (addons-347256) DBG | Writing magic tar header
	I0819 17:44:54.180142  380723 main.go:141] libmachine: (addons-347256) DBG | Writing SSH key tar header
	I0819 17:44:54.180152  380723 main.go:141] libmachine: (addons-347256) DBG | I0819 17:44:54.180089  380745 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256 ...
	I0819 17:44:54.180245  380723 main.go:141] libmachine: (addons-347256) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256
	I0819 17:44:54.180325  380723 main.go:141] libmachine: (addons-347256) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256 (perms=drwx------)
	I0819 17:44:54.180361  380723 main.go:141] libmachine: (addons-347256) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744/.minikube/machines
	I0819 17:44:54.180372  380723 main.go:141] libmachine: (addons-347256) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744/.minikube/machines (perms=drwxr-xr-x)
	I0819 17:44:54.180389  380723 main.go:141] libmachine: (addons-347256) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744/.minikube (perms=drwxr-xr-x)
	I0819 17:44:54.180415  380723 main.go:141] libmachine: (addons-347256) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744 (perms=drwxrwxr-x)
	I0819 17:44:54.180439  380723 main.go:141] libmachine: (addons-347256) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 17:44:54.180454  380723 main.go:141] libmachine: (addons-347256) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 17:44:54.180468  380723 main.go:141] libmachine: (addons-347256) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 17:44:54.180483  380723 main.go:141] libmachine: (addons-347256) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744
	I0819 17:44:54.180497  380723 main.go:141] libmachine: (addons-347256) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 17:44:54.180512  380723 main.go:141] libmachine: (addons-347256) DBG | Checking permissions on dir: /home/jenkins
	I0819 17:44:54.180525  380723 main.go:141] libmachine: (addons-347256) DBG | Checking permissions on dir: /home
	I0819 17:44:54.180537  380723 main.go:141] libmachine: (addons-347256) DBG | Skipping /home - not owner
	I0819 17:44:54.180548  380723 main.go:141] libmachine: (addons-347256) Creating domain...
	I0819 17:44:54.181498  380723 main.go:141] libmachine: (addons-347256) define libvirt domain using xml: 
	I0819 17:44:54.181524  380723 main.go:141] libmachine: (addons-347256) <domain type='kvm'>
	I0819 17:44:54.181536  380723 main.go:141] libmachine: (addons-347256)   <name>addons-347256</name>
	I0819 17:44:54.181552  380723 main.go:141] libmachine: (addons-347256)   <memory unit='MiB'>4000</memory>
	I0819 17:44:54.181562  380723 main.go:141] libmachine: (addons-347256)   <vcpu>2</vcpu>
	I0819 17:44:54.181577  380723 main.go:141] libmachine: (addons-347256)   <features>
	I0819 17:44:54.181589  380723 main.go:141] libmachine: (addons-347256)     <acpi/>
	I0819 17:44:54.181596  380723 main.go:141] libmachine: (addons-347256)     <apic/>
	I0819 17:44:54.181605  380723 main.go:141] libmachine: (addons-347256)     <pae/>
	I0819 17:44:54.181612  380723 main.go:141] libmachine: (addons-347256)     
	I0819 17:44:54.181618  380723 main.go:141] libmachine: (addons-347256)   </features>
	I0819 17:44:54.181626  380723 main.go:141] libmachine: (addons-347256)   <cpu mode='host-passthrough'>
	I0819 17:44:54.181637  380723 main.go:141] libmachine: (addons-347256)   
	I0819 17:44:54.181649  380723 main.go:141] libmachine: (addons-347256)   </cpu>
	I0819 17:44:54.181680  380723 main.go:141] libmachine: (addons-347256)   <os>
	I0819 17:44:54.181702  380723 main.go:141] libmachine: (addons-347256)     <type>hvm</type>
	I0819 17:44:54.181718  380723 main.go:141] libmachine: (addons-347256)     <boot dev='cdrom'/>
	I0819 17:44:54.181734  380723 main.go:141] libmachine: (addons-347256)     <boot dev='hd'/>
	I0819 17:44:54.181748  380723 main.go:141] libmachine: (addons-347256)     <bootmenu enable='no'/>
	I0819 17:44:54.181757  380723 main.go:141] libmachine: (addons-347256)   </os>
	I0819 17:44:54.181768  380723 main.go:141] libmachine: (addons-347256)   <devices>
	I0819 17:44:54.181780  380723 main.go:141] libmachine: (addons-347256)     <disk type='file' device='cdrom'>
	I0819 17:44:54.181799  380723 main.go:141] libmachine: (addons-347256)       <source file='/home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/boot2docker.iso'/>
	I0819 17:44:54.181811  380723 main.go:141] libmachine: (addons-347256)       <target dev='hdc' bus='scsi'/>
	I0819 17:44:54.181823  380723 main.go:141] libmachine: (addons-347256)       <readonly/>
	I0819 17:44:54.181839  380723 main.go:141] libmachine: (addons-347256)     </disk>
	I0819 17:44:54.181853  380723 main.go:141] libmachine: (addons-347256)     <disk type='file' device='disk'>
	I0819 17:44:54.181867  380723 main.go:141] libmachine: (addons-347256)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 17:44:54.181884  380723 main.go:141] libmachine: (addons-347256)       <source file='/home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/addons-347256.rawdisk'/>
	I0819 17:44:54.181896  380723 main.go:141] libmachine: (addons-347256)       <target dev='hda' bus='virtio'/>
	I0819 17:44:54.181913  380723 main.go:141] libmachine: (addons-347256)     </disk>
	I0819 17:44:54.181924  380723 main.go:141] libmachine: (addons-347256)     <interface type='network'>
	I0819 17:44:54.181937  380723 main.go:141] libmachine: (addons-347256)       <source network='mk-addons-347256'/>
	I0819 17:44:54.181948  380723 main.go:141] libmachine: (addons-347256)       <model type='virtio'/>
	I0819 17:44:54.181957  380723 main.go:141] libmachine: (addons-347256)     </interface>
	I0819 17:44:54.181976  380723 main.go:141] libmachine: (addons-347256)     <interface type='network'>
	I0819 17:44:54.181989  380723 main.go:141] libmachine: (addons-347256)       <source network='default'/>
	I0819 17:44:54.182008  380723 main.go:141] libmachine: (addons-347256)       <model type='virtio'/>
	I0819 17:44:54.182020  380723 main.go:141] libmachine: (addons-347256)     </interface>
	I0819 17:44:54.182030  380723 main.go:141] libmachine: (addons-347256)     <serial type='pty'>
	I0819 17:44:54.182040  380723 main.go:141] libmachine: (addons-347256)       <target port='0'/>
	I0819 17:44:54.182050  380723 main.go:141] libmachine: (addons-347256)     </serial>
	I0819 17:44:54.182062  380723 main.go:141] libmachine: (addons-347256)     <console type='pty'>
	I0819 17:44:54.182078  380723 main.go:141] libmachine: (addons-347256)       <target type='serial' port='0'/>
	I0819 17:44:54.182090  380723 main.go:141] libmachine: (addons-347256)     </console>
	I0819 17:44:54.182101  380723 main.go:141] libmachine: (addons-347256)     <rng model='virtio'>
	I0819 17:44:54.182115  380723 main.go:141] libmachine: (addons-347256)       <backend model='random'>/dev/random</backend>
	I0819 17:44:54.182123  380723 main.go:141] libmachine: (addons-347256)     </rng>
	I0819 17:44:54.182135  380723 main.go:141] libmachine: (addons-347256)     
	I0819 17:44:54.182149  380723 main.go:141] libmachine: (addons-347256)     
	I0819 17:44:54.182161  380723 main.go:141] libmachine: (addons-347256)   </devices>
	I0819 17:44:54.182171  380723 main.go:141] libmachine: (addons-347256) </domain>
	I0819 17:44:54.182184  380723 main.go:141] libmachine: (addons-347256) 
	I0819 17:44:54.187984  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:53:c4:2e in network default
	I0819 17:44:54.188526  380723 main.go:141] libmachine: (addons-347256) Ensuring networks are active...
	I0819 17:44:54.188545  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:44:54.189160  380723 main.go:141] libmachine: (addons-347256) Ensuring network default is active
	I0819 17:44:54.189471  380723 main.go:141] libmachine: (addons-347256) Ensuring network mk-addons-347256 is active
	I0819 17:44:54.189930  380723 main.go:141] libmachine: (addons-347256) Getting domain xml...
	I0819 17:44:54.190558  380723 main.go:141] libmachine: (addons-347256) Creating domain...
	I0819 17:44:55.575338  380723 main.go:141] libmachine: (addons-347256) Waiting to get IP...
	I0819 17:44:55.576124  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:44:55.576562  380723 main.go:141] libmachine: (addons-347256) DBG | unable to find current IP address of domain addons-347256 in network mk-addons-347256
	I0819 17:44:55.576594  380723 main.go:141] libmachine: (addons-347256) DBG | I0819 17:44:55.576511  380745 retry.go:31] will retry after 295.150701ms: waiting for machine to come up
	I0819 17:44:55.872866  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:44:55.873329  380723 main.go:141] libmachine: (addons-347256) DBG | unable to find current IP address of domain addons-347256 in network mk-addons-347256
	I0819 17:44:55.873350  380723 main.go:141] libmachine: (addons-347256) DBG | I0819 17:44:55.873288  380745 retry.go:31] will retry after 287.211341ms: waiting for machine to come up
	I0819 17:44:56.161830  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:44:56.162615  380723 main.go:141] libmachine: (addons-347256) DBG | unable to find current IP address of domain addons-347256 in network mk-addons-347256
	I0819 17:44:56.162643  380723 main.go:141] libmachine: (addons-347256) DBG | I0819 17:44:56.162581  380745 retry.go:31] will retry after 377.259476ms: waiting for machine to come up
	I0819 17:44:56.541888  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:44:56.542314  380723 main.go:141] libmachine: (addons-347256) DBG | unable to find current IP address of domain addons-347256 in network mk-addons-347256
	I0819 17:44:56.542346  380723 main.go:141] libmachine: (addons-347256) DBG | I0819 17:44:56.542273  380745 retry.go:31] will retry after 519.651535ms: waiting for machine to come up
	I0819 17:44:57.065287  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:44:57.065704  380723 main.go:141] libmachine: (addons-347256) DBG | unable to find current IP address of domain addons-347256 in network mk-addons-347256
	I0819 17:44:57.065732  380723 main.go:141] libmachine: (addons-347256) DBG | I0819 17:44:57.065650  380745 retry.go:31] will retry after 553.174431ms: waiting for machine to come up
	I0819 17:44:57.620642  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:44:57.621087  380723 main.go:141] libmachine: (addons-347256) DBG | unable to find current IP address of domain addons-347256 in network mk-addons-347256
	I0819 17:44:57.621108  380723 main.go:141] libmachine: (addons-347256) DBG | I0819 17:44:57.621049  380745 retry.go:31] will retry after 898.791982ms: waiting for machine to come up
	I0819 17:44:58.521912  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:44:58.522296  380723 main.go:141] libmachine: (addons-347256) DBG | unable to find current IP address of domain addons-347256 in network mk-addons-347256
	I0819 17:44:58.522324  380723 main.go:141] libmachine: (addons-347256) DBG | I0819 17:44:58.522255  380745 retry.go:31] will retry after 929.252814ms: waiting for machine to come up
	I0819 17:44:59.453409  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:44:59.453776  380723 main.go:141] libmachine: (addons-347256) DBG | unable to find current IP address of domain addons-347256 in network mk-addons-347256
	I0819 17:44:59.453801  380723 main.go:141] libmachine: (addons-347256) DBG | I0819 17:44:59.453724  380745 retry.go:31] will retry after 1.314906411s: waiting for machine to come up
	I0819 17:45:00.770448  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:00.770972  380723 main.go:141] libmachine: (addons-347256) DBG | unable to find current IP address of domain addons-347256 in network mk-addons-347256
	I0819 17:45:00.771005  380723 main.go:141] libmachine: (addons-347256) DBG | I0819 17:45:00.770916  380745 retry.go:31] will retry after 1.678424852s: waiting for machine to come up
	I0819 17:45:02.450850  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:02.451285  380723 main.go:141] libmachine: (addons-347256) DBG | unable to find current IP address of domain addons-347256 in network mk-addons-347256
	I0819 17:45:02.451306  380723 main.go:141] libmachine: (addons-347256) DBG | I0819 17:45:02.451251  380745 retry.go:31] will retry after 2.169043026s: waiting for machine to come up
	I0819 17:45:04.622786  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:04.623275  380723 main.go:141] libmachine: (addons-347256) DBG | unable to find current IP address of domain addons-347256 in network mk-addons-347256
	I0819 17:45:04.623307  380723 main.go:141] libmachine: (addons-347256) DBG | I0819 17:45:04.623177  380745 retry.go:31] will retry after 2.403674314s: waiting for machine to come up
	I0819 17:45:07.029819  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:07.030317  380723 main.go:141] libmachine: (addons-347256) DBG | unable to find current IP address of domain addons-347256 in network mk-addons-347256
	I0819 17:45:07.030349  380723 main.go:141] libmachine: (addons-347256) DBG | I0819 17:45:07.030267  380745 retry.go:31] will retry after 3.135440118s: waiting for machine to come up
	I0819 17:45:10.168488  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:10.168888  380723 main.go:141] libmachine: (addons-347256) DBG | unable to find current IP address of domain addons-347256 in network mk-addons-347256
	I0819 17:45:10.168969  380723 main.go:141] libmachine: (addons-347256) DBG | I0819 17:45:10.168818  380745 retry.go:31] will retry after 3.383905861s: waiting for machine to come up
	I0819 17:45:13.554423  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:13.554863  380723 main.go:141] libmachine: (addons-347256) DBG | unable to find current IP address of domain addons-347256 in network mk-addons-347256
	I0819 17:45:13.554902  380723 main.go:141] libmachine: (addons-347256) DBG | I0819 17:45:13.554816  380745 retry.go:31] will retry after 3.910322903s: waiting for machine to come up
	I0819 17:45:17.469972  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:17.470466  380723 main.go:141] libmachine: (addons-347256) Found IP for machine: 192.168.39.18
	I0819 17:45:17.470499  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has current primary IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:17.470509  380723 main.go:141] libmachine: (addons-347256) Reserving static IP address...
	I0819 17:45:17.470810  380723 main.go:141] libmachine: (addons-347256) DBG | unable to find host DHCP lease matching {name: "addons-347256", mac: "52:54:00:96:9a:be", ip: "192.168.39.18"} in network mk-addons-347256
	I0819 17:45:17.540970  380723 main.go:141] libmachine: (addons-347256) DBG | Getting to WaitForSSH function...
	I0819 17:45:17.541006  380723 main.go:141] libmachine: (addons-347256) Reserved static IP address: 192.168.39.18
	I0819 17:45:17.541019  380723 main.go:141] libmachine: (addons-347256) Waiting for SSH to be available...
	I0819 17:45:17.543574  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:17.544080  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:minikube Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:17.544114  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:17.544262  380723 main.go:141] libmachine: (addons-347256) DBG | Using SSH client type: external
	I0819 17:45:17.544301  380723 main.go:141] libmachine: (addons-347256) DBG | Using SSH private key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/id_rsa (-rw-------)
	I0819 17:45:17.544334  380723 main.go:141] libmachine: (addons-347256) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.18 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 17:45:17.544349  380723 main.go:141] libmachine: (addons-347256) DBG | About to run SSH command:
	I0819 17:45:17.544365  380723 main.go:141] libmachine: (addons-347256) DBG | exit 0
	I0819 17:45:17.679691  380723 main.go:141] libmachine: (addons-347256) DBG | SSH cmd err, output: <nil>: 
	I0819 17:45:17.679999  380723 main.go:141] libmachine: (addons-347256) KVM machine creation complete!
	I0819 17:45:17.680361  380723 main.go:141] libmachine: (addons-347256) Calling .GetConfigRaw
	I0819 17:45:17.680946  380723 main.go:141] libmachine: (addons-347256) Calling .DriverName
	I0819 17:45:17.681177  380723 main.go:141] libmachine: (addons-347256) Calling .DriverName
	I0819 17:45:17.681380  380723 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 17:45:17.681395  380723 main.go:141] libmachine: (addons-347256) Calling .GetState
	I0819 17:45:17.682671  380723 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 17:45:17.682689  380723 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 17:45:17.682697  380723 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 17:45:17.682706  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:17.684925  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:17.685212  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:17.685241  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:17.685363  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:17.685526  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:17.685686  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:17.685818  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:17.686012  380723 main.go:141] libmachine: Using SSH client type: native
	I0819 17:45:17.686209  380723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0819 17:45:17.686221  380723 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 17:45:17.799076  380723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 17:45:17.799100  380723 main.go:141] libmachine: Detecting the provisioner...
	I0819 17:45:17.799108  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:17.801845  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:17.802151  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:17.802182  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:17.802363  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:17.802601  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:17.802763  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:17.802894  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:17.803028  380723 main.go:141] libmachine: Using SSH client type: native
	I0819 17:45:17.803215  380723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0819 17:45:17.803227  380723 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 17:45:17.916749  380723 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 17:45:17.916839  380723 main.go:141] libmachine: found compatible host: buildroot
	I0819 17:45:17.916853  380723 main.go:141] libmachine: Provisioning with buildroot...
	I0819 17:45:17.916866  380723 main.go:141] libmachine: (addons-347256) Calling .GetMachineName
	I0819 17:45:17.917178  380723 buildroot.go:166] provisioning hostname "addons-347256"
	I0819 17:45:17.917209  380723 main.go:141] libmachine: (addons-347256) Calling .GetMachineName
	I0819 17:45:17.917393  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:17.920321  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:17.920754  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:17.920792  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:17.920987  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:17.921195  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:17.921382  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:17.921591  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:17.921777  380723 main.go:141] libmachine: Using SSH client type: native
	I0819 17:45:17.921982  380723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0819 17:45:17.921999  380723 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-347256 && echo "addons-347256" | sudo tee /etc/hostname
	I0819 17:45:18.050302  380723 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-347256
	
	I0819 17:45:18.050339  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:18.053307  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:18.053686  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:18.053764  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:18.053894  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:18.054109  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:18.054294  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:18.054473  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:18.054668  380723 main.go:141] libmachine: Using SSH client type: native
	I0819 17:45:18.054888  380723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0819 17:45:18.054906  380723 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-347256' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-347256/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-347256' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 17:45:18.177246  380723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 17:45:18.177281  380723 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19468-372744/.minikube CaCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19468-372744/.minikube}
	I0819 17:45:18.177302  380723 buildroot.go:174] setting up certificates
	I0819 17:45:18.177315  380723 provision.go:84] configureAuth start
	I0819 17:45:18.177327  380723 main.go:141] libmachine: (addons-347256) Calling .GetMachineName
	I0819 17:45:18.177658  380723 main.go:141] libmachine: (addons-347256) Calling .GetIP
	I0819 17:45:18.180197  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:18.180554  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:18.180585  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:18.180730  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:18.182860  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:18.183193  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:18.183218  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:18.183388  380723 provision.go:143] copyHostCerts
	I0819 17:45:18.183468  380723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem (1082 bytes)
	I0819 17:45:18.183604  380723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem (1123 bytes)
	I0819 17:45:18.183743  380723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem (1675 bytes)
	I0819 17:45:18.183802  380723 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem org=jenkins.addons-347256 san=[127.0.0.1 192.168.39.18 addons-347256 localhost minikube]
	I0819 17:45:18.533128  380723 provision.go:177] copyRemoteCerts
	I0819 17:45:18.533192  380723 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 17:45:18.533218  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:18.536191  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:18.536567  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:18.536599  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:18.536802  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:18.537032  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:18.537220  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:18.537380  380723 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/id_rsa Username:docker}
	I0819 17:45:18.625766  380723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 17:45:18.650381  380723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 17:45:18.674543  380723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 17:45:18.698674  380723 provision.go:87] duration metric: took 521.340221ms to configureAuth
	I0819 17:45:18.698707  380723 buildroot.go:189] setting minikube options for container-runtime
	I0819 17:45:18.698915  380723 config.go:182] Loaded profile config "addons-347256": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:45:18.699022  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:18.701748  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:18.702114  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:18.702146  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:18.702339  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:18.702571  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:18.702725  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:18.702911  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:18.703067  380723 main.go:141] libmachine: Using SSH client type: native
	I0819 17:45:18.703245  380723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0819 17:45:18.703259  380723 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 17:45:18.970758  380723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 17:45:18.970793  380723 main.go:141] libmachine: Checking connection to Docker...
	I0819 17:45:18.970811  380723 main.go:141] libmachine: (addons-347256) Calling .GetURL
	I0819 17:45:18.972103  380723 main.go:141] libmachine: (addons-347256) DBG | Using libvirt version 6000000
	I0819 17:45:18.974612  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:18.974955  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:18.974983  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:18.975163  380723 main.go:141] libmachine: Docker is up and running!
	I0819 17:45:18.975176  380723 main.go:141] libmachine: Reticulating splines...
	I0819 17:45:18.975184  380723 client.go:171] duration metric: took 25.834843542s to LocalClient.Create
	I0819 17:45:18.975214  380723 start.go:167] duration metric: took 25.834912671s to libmachine.API.Create "addons-347256"
	I0819 17:45:18.975228  380723 start.go:293] postStartSetup for "addons-347256" (driver="kvm2")
	I0819 17:45:18.975243  380723 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 17:45:18.975261  380723 main.go:141] libmachine: (addons-347256) Calling .DriverName
	I0819 17:45:18.975517  380723 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 17:45:18.975552  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:18.977677  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:18.977956  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:18.977982  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:18.978127  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:18.978378  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:18.978539  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:18.978714  380723 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/id_rsa Username:docker}
	I0819 17:45:19.066463  380723 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 17:45:19.071232  380723 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 17:45:19.071265  380723 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/addons for local assets ...
	I0819 17:45:19.071342  380723 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/files for local assets ...
	I0819 17:45:19.071366  380723 start.go:296] duration metric: took 96.131784ms for postStartSetup
	I0819 17:45:19.071406  380723 main.go:141] libmachine: (addons-347256) Calling .GetConfigRaw
	I0819 17:45:19.072003  380723 main.go:141] libmachine: (addons-347256) Calling .GetIP
	I0819 17:45:19.074691  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:19.075061  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:19.075089  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:19.075338  380723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/config.json ...
	I0819 17:45:19.075548  380723 start.go:128] duration metric: took 25.953671356s to createHost
	I0819 17:45:19.075577  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:19.077812  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:19.078129  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:19.078152  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:19.078347  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:19.078529  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:19.078689  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:19.078801  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:19.078958  380723 main.go:141] libmachine: Using SSH client type: native
	I0819 17:45:19.079123  380723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0819 17:45:19.079133  380723 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 17:45:19.192391  380723 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724089519.168750525
	
	I0819 17:45:19.192417  380723 fix.go:216] guest clock: 1724089519.168750525
	I0819 17:45:19.192426  380723 fix.go:229] Guest: 2024-08-19 17:45:19.168750525 +0000 UTC Remote: 2024-08-19 17:45:19.075561803 +0000 UTC m=+26.056759756 (delta=93.188722ms)
	I0819 17:45:19.192479  380723 fix.go:200] guest clock delta is within tolerance: 93.188722ms
	I0819 17:45:19.192485  380723 start.go:83] releasing machines lock for "addons-347256", held for 26.070685533s
	I0819 17:45:19.192510  380723 main.go:141] libmachine: (addons-347256) Calling .DriverName
	I0819 17:45:19.192808  380723 main.go:141] libmachine: (addons-347256) Calling .GetIP
	I0819 17:45:19.195227  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:19.195544  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:19.195576  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:19.195713  380723 main.go:141] libmachine: (addons-347256) Calling .DriverName
	I0819 17:45:19.196239  380723 main.go:141] libmachine: (addons-347256) Calling .DriverName
	I0819 17:45:19.196453  380723 main.go:141] libmachine: (addons-347256) Calling .DriverName
	I0819 17:45:19.196570  380723 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 17:45:19.196645  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:19.196686  380723 ssh_runner.go:195] Run: cat /version.json
	I0819 17:45:19.196712  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:19.199202  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:19.199534  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:19.199655  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:19.199699  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:19.199864  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:19.199985  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:19.200018  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:19.200044  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:19.200223  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:19.200229  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:19.200398  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:19.200393  380723 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/id_rsa Username:docker}
	I0819 17:45:19.200559  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:19.200705  380723 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/id_rsa Username:docker}
	I0819 17:45:19.303317  380723 ssh_runner.go:195] Run: systemctl --version
	I0819 17:45:19.309394  380723 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 17:45:19.469769  380723 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 17:45:19.475733  380723 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 17:45:19.475804  380723 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 17:45:19.492217  380723 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 17:45:19.492246  380723 start.go:495] detecting cgroup driver to use...
	I0819 17:45:19.492312  380723 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 17:45:19.512633  380723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 17:45:19.526666  380723 docker.go:217] disabling cri-docker service (if available) ...
	I0819 17:45:19.526723  380723 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 17:45:19.540412  380723 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 17:45:19.554050  380723 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 17:45:19.681052  380723 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 17:45:19.826760  380723 docker.go:233] disabling docker service ...
	I0819 17:45:19.826844  380723 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 17:45:19.841303  380723 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 17:45:19.854153  380723 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 17:45:19.980148  380723 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 17:45:20.114056  380723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 17:45:20.128089  380723 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 17:45:20.146365  380723 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 17:45:20.146431  380723 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:45:20.157135  380723 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 17:45:20.157211  380723 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:45:20.167642  380723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:45:20.178347  380723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:45:20.189041  380723 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 17:45:20.200449  380723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:45:20.211135  380723 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:45:20.228424  380723 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:45:20.239113  380723 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 17:45:20.248596  380723 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 17:45:20.248657  380723 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 17:45:20.261895  380723 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 17:45:20.271193  380723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 17:45:20.391778  380723 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 17:45:20.528119  380723 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 17:45:20.528214  380723 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 17:45:20.533144  380723 start.go:563] Will wait 60s for crictl version
	I0819 17:45:20.533227  380723 ssh_runner.go:195] Run: which crictl
	I0819 17:45:20.536823  380723 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 17:45:20.575052  380723 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 17:45:20.575136  380723 ssh_runner.go:195] Run: crio --version
	I0819 17:45:20.601890  380723 ssh_runner.go:195] Run: crio --version
	I0819 17:45:20.630807  380723 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 17:45:20.632144  380723 main.go:141] libmachine: (addons-347256) Calling .GetIP
	I0819 17:45:20.634767  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:20.635142  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:20.635184  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:20.635375  380723 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 17:45:20.639550  380723 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 17:45:20.651906  380723 kubeadm.go:883] updating cluster {Name:addons-347256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-347256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 17:45:20.652018  380723 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 17:45:20.652059  380723 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 17:45:20.685872  380723 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 17:45:20.685942  380723 ssh_runner.go:195] Run: which lz4
	I0819 17:45:20.690104  380723 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 17:45:20.694324  380723 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 17:45:20.694354  380723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 17:45:21.956220  380723 crio.go:462] duration metric: took 1.266150323s to copy over tarball
	I0819 17:45:21.956324  380723 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 17:45:24.072963  380723 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.11660057s)
	I0819 17:45:24.072995  380723 crio.go:469] duration metric: took 2.116739s to extract the tarball
	I0819 17:45:24.073004  380723 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 17:45:24.109933  380723 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 17:45:24.160419  380723 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 17:45:24.160454  380723 cache_images.go:84] Images are preloaded, skipping loading
	I0819 17:45:24.160466  380723 kubeadm.go:934] updating node { 192.168.39.18 8443 v1.31.0 crio true true} ...
	I0819 17:45:24.160628  380723 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-347256 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.18
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-347256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 17:45:24.160755  380723 ssh_runner.go:195] Run: crio config
	I0819 17:45:24.216129  380723 cni.go:84] Creating CNI manager for ""
	I0819 17:45:24.216154  380723 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 17:45:24.216168  380723 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 17:45:24.216196  380723 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.18 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-347256 NodeName:addons-347256 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.18"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.18 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 17:45:24.216360  380723 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.18
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-347256"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.18
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.18"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 17:45:24.216427  380723 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 17:45:24.228695  380723 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 17:45:24.228770  380723 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 17:45:24.239098  380723 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0819 17:45:24.256669  380723 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 17:45:24.273434  380723 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0819 17:45:24.290431  380723 ssh_runner.go:195] Run: grep 192.168.39.18	control-plane.minikube.internal$ /etc/hosts
	I0819 17:45:24.294455  380723 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.18	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 17:45:24.307092  380723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 17:45:24.437166  380723 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 17:45:24.454975  380723 certs.go:68] Setting up /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256 for IP: 192.168.39.18
	I0819 17:45:24.455003  380723 certs.go:194] generating shared ca certs ...
	I0819 17:45:24.455021  380723 certs.go:226] acquiring lock for ca certs: {Name:mk639e03f593e0bccac045f6e9f5ba3b96cc81e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:45:24.455160  380723 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key
	I0819 17:45:24.607373  380723 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt ...
	I0819 17:45:24.607406  380723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt: {Name:mk720863d1644f0a4aa6f75fb34905a83c015168 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:45:24.607614  380723 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key ...
	I0819 17:45:24.607629  380723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key: {Name:mkd3386fa062f8a0dfb5858759605de084d42867 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:45:24.607757  380723 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key
	I0819 17:45:24.692703  380723 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt ...
	I0819 17:45:24.692732  380723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt: {Name:mk1dc711d257e531e3c71c7d0984b6df867cfe02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:45:24.692930  380723 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key ...
	I0819 17:45:24.692951  380723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key: {Name:mk8e16aff6516c290adb78b092691391102b99e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:45:24.693049  380723 certs.go:256] generating profile certs ...
	I0819 17:45:24.693113  380723 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.key
	I0819 17:45:24.693139  380723 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.crt with IP's: []
	I0819 17:45:24.857181  380723 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.crt ...
	I0819 17:45:24.857214  380723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.crt: {Name:mk6a1a046e55814f12df6a0e42b22fdeb6c0d339 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:45:24.857408  380723 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.key ...
	I0819 17:45:24.857424  380723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.key: {Name:mk3097dd049f7745d2605bf1f16a97f955f21ed3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:45:24.857524  380723 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/apiserver.key.bc8d03ea
	I0819 17:45:24.857545  380723 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/apiserver.crt.bc8d03ea with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.18]
	I0819 17:45:25.217861  380723 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/apiserver.crt.bc8d03ea ...
	I0819 17:45:25.217894  380723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/apiserver.crt.bc8d03ea: {Name:mk39d188cf7bf6d5dd4f56ad5ff39f9b6bbaaf56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:45:25.218082  380723 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/apiserver.key.bc8d03ea ...
	I0819 17:45:25.218100  380723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/apiserver.key.bc8d03ea: {Name:mke2f1fe200569be9110b53c2b6e9c6316ac6de9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:45:25.218202  380723 certs.go:381] copying /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/apiserver.crt.bc8d03ea -> /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/apiserver.crt
	I0819 17:45:25.218284  380723 certs.go:385] copying /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/apiserver.key.bc8d03ea -> /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/apiserver.key
	I0819 17:45:25.218331  380723 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/proxy-client.key
	I0819 17:45:25.218349  380723 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/proxy-client.crt with IP's: []
	I0819 17:45:25.507812  380723 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/proxy-client.crt ...
	I0819 17:45:25.507852  380723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/proxy-client.crt: {Name:mkc9cb74c9901604fb7d3a8203fa6096a334239d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:45:25.508025  380723 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/proxy-client.key ...
	I0819 17:45:25.508038  380723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/proxy-client.key: {Name:mk6bd0a8aed7d4a5c3e994dc78890b950bdd72a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:45:25.508215  380723 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 17:45:25.508254  380723 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem (1082 bytes)
	I0819 17:45:25.508279  380723 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem (1123 bytes)
	I0819 17:45:25.508303  380723 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem (1675 bytes)
	I0819 17:45:25.508916  380723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 17:45:25.540333  380723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 17:45:25.564833  380723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 17:45:25.589155  380723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 17:45:25.613367  380723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0819 17:45:25.637037  380723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 17:45:25.661485  380723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 17:45:25.685131  380723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 17:45:25.709378  380723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 17:45:25.733248  380723 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 17:45:25.749801  380723 ssh_runner.go:195] Run: openssl version
	I0819 17:45:25.755506  380723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 17:45:25.766270  380723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:45:25.770783  380723 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:45:25.770848  380723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:45:25.776580  380723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 17:45:25.787227  380723 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 17:45:25.791427  380723 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 17:45:25.791480  380723 kubeadm.go:392] StartCluster: {Name:addons-347256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-347256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 17:45:25.791641  380723 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 17:45:25.791747  380723 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 17:45:25.830591  380723 cri.go:89] found id: ""
	I0819 17:45:25.830683  380723 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 17:45:25.840513  380723 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 17:45:25.849805  380723 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 17:45:25.859085  380723 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 17:45:25.859110  380723 kubeadm.go:157] found existing configuration files:
	
	I0819 17:45:25.859155  380723 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 17:45:25.867614  380723 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 17:45:25.867707  380723 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 17:45:25.876869  380723 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 17:45:25.885771  380723 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 17:45:25.885837  380723 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 17:45:25.895004  380723 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 17:45:25.903555  380723 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 17:45:25.903610  380723 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 17:45:25.912939  380723 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 17:45:25.921561  380723 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 17:45:25.921622  380723 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 17:45:25.930854  380723 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 17:45:25.979274  380723 kubeadm.go:310] W0819 17:45:25.962183     839 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 17:45:25.979964  380723 kubeadm.go:310] W0819 17:45:25.962997     839 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 17:45:26.084588  380723 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 17:45:35.554082  380723 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 17:45:35.554153  380723 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 17:45:35.554220  380723 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 17:45:35.554378  380723 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 17:45:35.554535  380723 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 17:45:35.554613  380723 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 17:45:35.556110  380723 out.go:235]   - Generating certificates and keys ...
	I0819 17:45:35.556179  380723 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 17:45:35.556239  380723 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 17:45:35.556302  380723 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 17:45:35.556390  380723 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 17:45:35.556443  380723 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 17:45:35.556485  380723 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 17:45:35.556544  380723 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 17:45:35.556678  380723 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-347256 localhost] and IPs [192.168.39.18 127.0.0.1 ::1]
	I0819 17:45:35.556749  380723 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 17:45:35.556901  380723 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-347256 localhost] and IPs [192.168.39.18 127.0.0.1 ::1]
	I0819 17:45:35.556981  380723 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 17:45:35.557052  380723 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 17:45:35.557098  380723 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 17:45:35.557150  380723 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 17:45:35.557214  380723 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 17:45:35.557305  380723 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 17:45:35.557380  380723 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 17:45:35.557465  380723 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 17:45:35.557539  380723 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 17:45:35.557636  380723 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 17:45:35.557723  380723 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 17:45:35.559211  380723 out.go:235]   - Booting up control plane ...
	I0819 17:45:35.559286  380723 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 17:45:35.559345  380723 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 17:45:35.559400  380723 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 17:45:35.559479  380723 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 17:45:35.559591  380723 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 17:45:35.559654  380723 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 17:45:35.559820  380723 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 17:45:35.559942  380723 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 17:45:35.560037  380723 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.131627ms
	I0819 17:45:35.560127  380723 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 17:45:35.560201  380723 kubeadm.go:310] [api-check] The API server is healthy after 5.002168832s
	I0819 17:45:35.560313  380723 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 17:45:35.560426  380723 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 17:45:35.560490  380723 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 17:45:35.560694  380723 kubeadm.go:310] [mark-control-plane] Marking the node addons-347256 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 17:45:35.560743  380723 kubeadm.go:310] [bootstrap-token] Using token: 02k7t2.hl4r0htmlbvvfk0d
	I0819 17:45:35.562138  380723 out.go:235]   - Configuring RBAC rules ...
	I0819 17:45:35.562238  380723 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 17:45:35.562306  380723 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 17:45:35.562440  380723 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 17:45:35.562550  380723 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 17:45:35.562658  380723 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 17:45:35.562733  380723 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 17:45:35.562829  380723 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 17:45:35.562869  380723 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 17:45:35.562908  380723 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 17:45:35.562917  380723 kubeadm.go:310] 
	I0819 17:45:35.562969  380723 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 17:45:35.562975  380723 kubeadm.go:310] 
	I0819 17:45:35.563047  380723 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 17:45:35.563055  380723 kubeadm.go:310] 
	I0819 17:45:35.563078  380723 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 17:45:35.563150  380723 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 17:45:35.563203  380723 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 17:45:35.563210  380723 kubeadm.go:310] 
	I0819 17:45:35.563262  380723 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 17:45:35.563268  380723 kubeadm.go:310] 
	I0819 17:45:35.563327  380723 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 17:45:35.563337  380723 kubeadm.go:310] 
	I0819 17:45:35.563390  380723 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 17:45:35.563457  380723 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 17:45:35.563524  380723 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 17:45:35.563538  380723 kubeadm.go:310] 
	I0819 17:45:35.563639  380723 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 17:45:35.563744  380723 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 17:45:35.563753  380723 kubeadm.go:310] 
	I0819 17:45:35.563828  380723 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 02k7t2.hl4r0htmlbvvfk0d \
	I0819 17:45:35.563967  380723 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3fcbd90565c5acbc36a47b2db682cb22dce9b172c9bf3af21e506ebb67608039 \
	I0819 17:45:35.563998  380723 kubeadm.go:310] 	--control-plane 
	I0819 17:45:35.564011  380723 kubeadm.go:310] 
	I0819 17:45:35.564117  380723 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 17:45:35.564129  380723 kubeadm.go:310] 
	I0819 17:45:35.564239  380723 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 02k7t2.hl4r0htmlbvvfk0d \
	I0819 17:45:35.564383  380723 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3fcbd90565c5acbc36a47b2db682cb22dce9b172c9bf3af21e506ebb67608039 
	I0819 17:45:35.564398  380723 cni.go:84] Creating CNI manager for ""
	I0819 17:45:35.564405  380723 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 17:45:35.565906  380723 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 17:45:35.567045  380723 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 17:45:35.581957  380723 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 17:45:35.600228  380723 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 17:45:35.600321  380723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:45:35.600370  380723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-347256 minikube.k8s.io/updated_at=2024_08_19T17_45_35_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411 minikube.k8s.io/name=addons-347256 minikube.k8s.io/primary=true
	I0819 17:45:35.757365  380723 ops.go:34] apiserver oom_adj: -16
	I0819 17:45:35.757451  380723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:45:36.258226  380723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:45:36.757575  380723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:45:37.257560  380723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:45:37.758488  380723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:45:38.257909  380723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:45:38.758330  380723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:45:39.258278  380723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:45:39.758389  380723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:45:39.849761  380723 kubeadm.go:1113] duration metric: took 4.249515717s to wait for elevateKubeSystemPrivileges
	I0819 17:45:39.849812  380723 kubeadm.go:394] duration metric: took 14.058337596s to StartCluster
	I0819 17:45:39.849843  380723 settings.go:142] acquiring lock: {Name:mk396fcf49a1d0e69583cf37ff3c819e37118163 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:45:39.850019  380723 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 17:45:39.850726  380723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/kubeconfig: {Name:mk8e7b4e1bb7da665111d2acd83eb48882c66853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:45:39.850943  380723 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 17:45:39.850995  380723 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 17:45:39.851061  380723 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0819 17:45:39.851182  380723 addons.go:69] Setting yakd=true in profile "addons-347256"
	I0819 17:45:39.851235  380723 addons.go:234] Setting addon yakd=true in "addons-347256"
	I0819 17:45:39.851259  380723 config.go:182] Loaded profile config "addons-347256": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:45:39.851268  380723 addons.go:69] Setting inspektor-gadget=true in profile "addons-347256"
	I0819 17:45:39.851287  380723 addons.go:69] Setting metrics-server=true in profile "addons-347256"
	I0819 17:45:39.851286  380723 addons.go:69] Setting gcp-auth=true in profile "addons-347256"
	I0819 17:45:39.851314  380723 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-347256"
	I0819 17:45:39.851321  380723 addons.go:69] Setting ingress=true in profile "addons-347256"
	I0819 17:45:39.851323  380723 addons.go:69] Setting volcano=true in profile "addons-347256"
	I0819 17:45:39.851338  380723 addons.go:234] Setting addon ingress=true in "addons-347256"
	I0819 17:45:39.851341  380723 addons.go:234] Setting addon volcano=true in "addons-347256"
	I0819 17:45:39.851330  380723 addons.go:69] Setting storage-provisioner=true in profile "addons-347256"
	I0819 17:45:39.851363  380723 host.go:66] Checking if "addons-347256" exists ...
	I0819 17:45:39.851363  380723 addons.go:69] Setting cloud-spanner=true in profile "addons-347256"
	I0819 17:45:39.851373  380723 addons.go:234] Setting addon storage-provisioner=true in "addons-347256"
	I0819 17:45:39.851377  380723 addons.go:69] Setting volumesnapshots=true in profile "addons-347256"
	I0819 17:45:39.851385  380723 host.go:66] Checking if "addons-347256" exists ...
	I0819 17:45:39.851391  380723 addons.go:234] Setting addon cloud-spanner=true in "addons-347256"
	I0819 17:45:39.851396  380723 addons.go:234] Setting addon volumesnapshots=true in "addons-347256"
	I0819 17:45:39.851406  380723 host.go:66] Checking if "addons-347256" exists ...
	I0819 17:45:39.851418  380723 host.go:66] Checking if "addons-347256" exists ...
	I0819 17:45:39.851428  380723 host.go:66] Checking if "addons-347256" exists ...
	I0819 17:45:39.851443  380723 addons.go:69] Setting ingress-dns=true in profile "addons-347256"
	I0819 17:45:39.851459  380723 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-347256"
	I0819 17:45:39.851476  380723 addons.go:234] Setting addon ingress-dns=true in "addons-347256"
	I0819 17:45:39.851494  380723 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-347256"
	I0819 17:45:39.851496  380723 addons.go:69] Setting registry=true in profile "addons-347256"
	I0819 17:45:39.851509  380723 host.go:66] Checking if "addons-347256" exists ...
	I0819 17:45:39.851516  380723 addons.go:234] Setting addon registry=true in "addons-347256"
	I0819 17:45:39.851520  380723 host.go:66] Checking if "addons-347256" exists ...
	I0819 17:45:39.851538  380723 host.go:66] Checking if "addons-347256" exists ...
	I0819 17:45:39.851364  380723 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-347256"
	I0819 17:45:39.851889  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.851891  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.851899  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.851907  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.851907  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.851341  380723 mustload.go:65] Loading cluster: addons-347256
	I0819 17:45:39.851918  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.851305  380723 addons.go:234] Setting addon metrics-server=true in "addons-347256"
	I0819 17:45:39.851896  380723 host.go:66] Checking if "addons-347256" exists ...
	I0819 17:45:39.851937  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.851949  380723 host.go:66] Checking if "addons-347256" exists ...
	I0819 17:45:39.851349  380723 addons.go:69] Setting default-storageclass=true in profile "addons-347256"
	I0819 17:45:39.851962  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.851983  380723 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-347256"
	I0819 17:45:39.852019  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.852029  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.852052  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.852057  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.852070  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.852072  380723 config.go:182] Loaded profile config "addons-347256": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:45:39.851888  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.852177  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.851314  380723 addons.go:69] Setting helm-tiller=true in profile "addons-347256"
	I0819 17:45:39.851278  380723 host.go:66] Checking if "addons-347256" exists ...
	I0819 17:45:39.852237  380723 addons.go:234] Setting addon helm-tiller=true in "addons-347256"
	I0819 17:45:39.851307  380723 addons.go:234] Setting addon inspektor-gadget=true in "addons-347256"
	I0819 17:45:39.851314  380723 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-347256"
	I0819 17:45:39.852356  380723 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-347256"
	I0819 17:45:39.852360  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.852381  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.852387  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.852400  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.852407  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.852421  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.852484  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.852495  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.852542  380723 host.go:66] Checking if "addons-347256" exists ...
	I0819 17:45:39.852547  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.851913  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.852566  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.852782  380723 out.go:177] * Verifying Kubernetes components...
	I0819 17:45:39.853019  380723 host.go:66] Checking if "addons-347256" exists ...
	I0819 17:45:39.853387  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.853429  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.868465  380723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 17:45:39.872955  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39907
	I0819 17:45:39.873131  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43693
	I0819 17:45:39.873222  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33873
	I0819 17:45:39.873419  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44057
	I0819 17:45:39.873709  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.873817  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.873869  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.873910  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.874431  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.874455  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.874561  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.874564  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.874576  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.874583  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.874582  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.874569  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.874943  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.874985  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.874997  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.875503  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.875542  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.875757  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.876434  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.876471  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.880110  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.880123  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.880139  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.880156  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.884494  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.884524  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.885117  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.885143  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.889026  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40431
	I0819 17:45:39.889217  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41289
	I0819 17:45:39.889316  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45711
	I0819 17:45:39.889712  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.889819  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.890047  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.890370  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.890389  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.890953  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.890970  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.891024  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.891736  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.891780  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.892135  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.892752  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.892781  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.893313  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.893333  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.893697  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.894275  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.894312  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.902176  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35607
	I0819 17:45:39.905054  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.905726  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.905749  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.906324  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.906537  380723 main.go:141] libmachine: (addons-347256) Calling .GetState
	I0819 17:45:39.907569  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43753
	I0819 17:45:39.909630  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.910233  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.910252  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.910710  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.911329  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.911379  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.911629  380723 main.go:141] libmachine: (addons-347256) Calling .DriverName
	I0819 17:45:39.912137  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46673
	I0819 17:45:39.912812  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.913142  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41051
	I0819 17:45:39.913403  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.913419  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.913534  380723 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0819 17:45:39.913764  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.913877  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.914070  380723 main.go:141] libmachine: (addons-347256) Calling .GetState
	I0819 17:45:39.914737  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.914756  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.914857  380723 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0819 17:45:39.914891  380723 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0819 17:45:39.914912  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:39.915733  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.916166  380723 main.go:141] libmachine: (addons-347256) Calling .GetState
	I0819 17:45:39.919248  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:39.919709  380723 main.go:141] libmachine: (addons-347256) Calling .DriverName
	I0819 17:45:39.919830  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:39.919866  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:39.920129  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:39.920325  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:39.920397  380723 addons.go:234] Setting addon default-storageclass=true in "addons-347256"
	I0819 17:45:39.920444  380723 host.go:66] Checking if "addons-347256" exists ...
	I0819 17:45:39.920452  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:39.920548  380723 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/id_rsa Username:docker}
	I0819 17:45:39.920838  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.920859  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.921251  380723 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0819 17:45:39.922614  380723 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0819 17:45:39.922638  380723 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0819 17:45:39.922657  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:39.923257  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32985
	I0819 17:45:39.923931  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32947
	I0819 17:45:39.924521  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.925068  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.925091  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.925440  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.925629  380723 main.go:141] libmachine: (addons-347256) Calling .GetState
	I0819 17:45:39.926093  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:39.926882  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:39.926905  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:39.927416  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:39.927607  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:39.927781  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:39.927922  380723 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/id_rsa Username:docker}
	I0819 17:45:39.928203  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.928335  380723 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-347256"
	I0819 17:45:39.928374  380723 host.go:66] Checking if "addons-347256" exists ...
	I0819 17:45:39.928677  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.928692  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.928736  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.928781  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.929091  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.929613  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.929657  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.933497  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41927
	I0819 17:45:39.934174  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.934747  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.934774  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.935212  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.935409  380723 main.go:141] libmachine: (addons-347256) Calling .GetState
	I0819 17:45:39.936450  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43597
	I0819 17:45:39.936796  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45409
	I0819 17:45:39.937443  380723 main.go:141] libmachine: (addons-347256) Calling .DriverName
	I0819 17:45:39.937446  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.937944  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.937961  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.938372  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.938589  380723 main.go:141] libmachine: (addons-347256) Calling .GetState
	I0819 17:45:39.939305  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.939775  380723 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0819 17:45:39.939986  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.940002  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.940528  380723 main.go:141] libmachine: (addons-347256) Calling .DriverName
	I0819 17:45:39.940706  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.941103  380723 main.go:141] libmachine: (addons-347256) Calling .GetState
	I0819 17:45:39.941822  380723 out.go:177]   - Using image docker.io/registry:2.8.3
	I0819 17:45:39.941822  380723 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 17:45:39.942884  380723 host.go:66] Checking if "addons-347256" exists ...
	I0819 17:45:39.943281  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.943318  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.943521  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32801
	I0819 17:45:39.944035  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.944692  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.944719  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.945059  380723 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0819 17:45:39.945085  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.945110  380723 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 17:45:39.945595  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.945645  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.946547  380723 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0819 17:45:39.946566  380723 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0819 17:45:39.946586  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:39.946648  380723 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0819 17:45:39.946659  380723 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0819 17:45:39.946672  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:39.950458  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:39.952481  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:39.953001  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:39.953110  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:39.953305  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:39.953504  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:39.953573  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:39.953586  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:39.953611  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:39.953708  380723 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/id_rsa Username:docker}
	I0819 17:45:39.954056  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:39.954244  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:39.954369  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:39.954489  380723 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/id_rsa Username:docker}
	I0819 17:45:39.959581  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36901
	I0819 17:45:39.960341  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37721
	I0819 17:45:39.960837  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.961427  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.961448  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.961886  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.962469  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.962498  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.962691  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45431
	I0819 17:45:39.962864  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.963240  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.963818  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.963833  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.963899  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45435
	I0819 17:45:39.964161  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40119
	I0819 17:45:39.964594  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.964754  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.964767  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.964828  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36989
	I0819 17:45:39.964961  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.965163  380723 main.go:141] libmachine: (addons-347256) Calling .GetState
	I0819 17:45:39.965235  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.965327  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44383
	I0819 17:45:39.965549  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.965577  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.965603  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.965670  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.965721  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.965757  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.966117  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.966135  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.966212  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.966235  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.966581  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.966690  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.966710  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.966770  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.966812  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.966773  380723 main.go:141] libmachine: (addons-347256) Calling .GetState
	I0819 17:45:39.967090  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.967149  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.967860  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.967902  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.968126  380723 main.go:141] libmachine: (addons-347256) Calling .DriverName
	I0819 17:45:39.968147  380723 main.go:141] libmachine: (addons-347256) Calling .GetState
	I0819 17:45:39.968276  380723 main.go:141] libmachine: (addons-347256) Calling .DriverName
	I0819 17:45:39.969729  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42117
	I0819 17:45:39.969757  380723 main.go:141] libmachine: (addons-347256) Calling .DriverName
	I0819 17:45:39.970087  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.970578  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.970604  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.970945  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.971078  380723 main.go:141] libmachine: (addons-347256) Calling .DriverName
	I0819 17:45:39.971479  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.971978  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.972189  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45299
	I0819 17:45:39.972241  380723 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0819 17:45:39.972594  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.973134  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.973158  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.973244  380723 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0819 17:45:39.973274  380723 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0819 17:45:39.973503  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.974030  380723 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 17:45:39.974058  380723 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 17:45:39.974082  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:39.974086  380723 main.go:141] libmachine: (addons-347256) Calling .GetState
	I0819 17:45:39.974893  380723 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0819 17:45:39.974912  380723 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0819 17:45:39.974930  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:39.976282  380723 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0819 17:45:39.976391  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46183
	I0819 17:45:39.976851  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.977462  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.977479  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.977551  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42651
	I0819 17:45:39.977886  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.978551  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:39.978592  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:39.978660  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:39.978753  380723 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0819 17:45:39.978864  380723 main.go:141] libmachine: (addons-347256) Calling .DriverName
	I0819 17:45:39.978936  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:39.978952  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:39.978995  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.979088  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:39.979317  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:39.979443  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.979610  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.979764  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:39.980016  380723 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/id_rsa Username:docker}
	I0819 17:45:39.980120  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.980747  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:39.980833  380723 main.go:141] libmachine: (addons-347256) Calling .GetState
	I0819 17:45:39.980849  380723 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0819 17:45:39.981294  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:39.981315  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:39.981493  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:39.981673  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:39.981715  380723 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0819 17:45:39.981849  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:39.982219  380723 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/id_rsa Username:docker}
	I0819 17:45:39.982355  380723 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0819 17:45:39.982371  380723 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0819 17:45:39.982389  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:39.984121  380723 main.go:141] libmachine: (addons-347256) Calling .DriverName
	I0819 17:45:39.984392  380723 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0819 17:45:39.985630  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:39.985739  380723 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0819 17:45:39.986003  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:39.986035  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:39.986211  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:39.986384  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:39.986532  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:39.986652  380723 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/id_rsa Username:docker}
	I0819 17:45:39.986774  380723 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0819 17:45:39.986779  380723 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0819 17:45:39.986793  380723 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0819 17:45:39.986811  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:39.987978  380723 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0819 17:45:39.989093  380723 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0819 17:45:39.989545  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:39.989944  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:39.989964  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:39.990125  380723 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0819 17:45:39.990136  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:39.990142  380723 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0819 17:45:39.990157  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:39.990792  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:39.990985  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:39.991156  380723 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/id_rsa Username:docker}
	I0819 17:45:39.992033  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46249
	I0819 17:45:39.992365  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:39.992804  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:39.992820  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:39.993418  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:39.993667  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:39.993732  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:39.993746  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:39.993913  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:39.994105  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:39.994161  380723 main.go:141] libmachine: (addons-347256) Calling .GetState
	I0819 17:45:39.994202  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:39.994293  380723 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/id_rsa Username:docker}
	I0819 17:45:39.995574  380723 main.go:141] libmachine: (addons-347256) Calling .DriverName
	I0819 17:45:39.999725  380723 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0819 17:45:39.999759  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35859
	I0819 17:45:40.000273  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:40.000749  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:40.000769  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:40.001185  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:40.001318  380723 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0819 17:45:40.001332  380723 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0819 17:45:40.001348  380723 main.go:141] libmachine: (addons-347256) Calling .GetState
	I0819 17:45:40.001352  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:40.003498  380723 main.go:141] libmachine: (addons-347256) Calling .DriverName
	I0819 17:45:40.004460  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38705
	I0819 17:45:40.005131  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35121
	I0819 17:45:40.005217  380723 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 17:45:40.005299  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:40.005637  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:40.005700  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:40.005717  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:40.005842  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:40.006041  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:40.006175  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:40.006191  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:40.006200  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:40.006543  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:40.006560  380723 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 17:45:40.006575  380723 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 17:45:40.006592  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:40.006593  380723 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/id_rsa Username:docker}
	I0819 17:45:40.006547  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:40.006879  380723 main.go:141] libmachine: (addons-347256) Calling .GetState
	I0819 17:45:40.007081  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:40.007266  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:40.008163  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:40.008353  380723 main.go:141] libmachine: (addons-347256) Calling .GetState
	I0819 17:45:40.009168  380723 main.go:141] libmachine: (addons-347256) Calling .DriverName
	I0819 17:45:40.009981  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:40.010608  380723 main.go:141] libmachine: (addons-347256) Calling .DriverName
	I0819 17:45:40.010686  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:40.010703  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:40.010847  380723 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0819 17:45:40.010912  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:40.010925  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:40.010965  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:40.011912  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43141
	I0819 17:45:40.011917  380723 main.go:141] libmachine: (addons-347256) DBG | Closing plugin on server side
	I0819 17:45:40.011947  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:40.011953  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:40.011962  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:40.011971  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:40.012014  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:40.012110  380723 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0819 17:45:40.012125  380723 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0819 17:45:40.012144  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:40.012187  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:40.012198  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	W0819 17:45:40.012305  380723 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0819 17:45:40.013040  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:40.013068  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:40.013293  380723 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/id_rsa Username:docker}
	I0819 17:45:40.013658  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:40.013676  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:40.014000  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:40.014197  380723 main.go:141] libmachine: (addons-347256) Calling .GetState
	I0819 17:45:40.015527  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:40.015997  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:40.016035  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:40.016119  380723 main.go:141] libmachine: (addons-347256) Calling .DriverName
	I0819 17:45:40.016500  380723 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 17:45:40.016509  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:40.016520  380723 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 17:45:40.016540  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:40.016688  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:40.016839  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:40.016970  380723 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/id_rsa Username:docker}
	I0819 17:45:40.019485  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:40.019989  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:40.020016  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:40.020188  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:40.020377  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:40.020490  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:40.020592  380723 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/id_rsa Username:docker}
	I0819 17:45:40.027726  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34013
	I0819 17:45:40.028177  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:40.028568  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:40.028581  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:40.028933  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:40.029072  380723 main.go:141] libmachine: (addons-347256) Calling .GetState
	I0819 17:45:40.030568  380723 main.go:141] libmachine: (addons-347256) Calling .DriverName
	I0819 17:45:40.032286  380723 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0819 17:45:40.033646  380723 out.go:177]   - Using image docker.io/busybox:stable
	I0819 17:45:40.034899  380723 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0819 17:45:40.034916  380723 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0819 17:45:40.034933  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:40.037472  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:40.037796  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:40.037822  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:40.037987  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:40.038159  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:40.038289  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:40.038428  380723 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/id_rsa Username:docker}
	I0819 17:45:40.371649  380723 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 17:45:40.371721  380723 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0819 17:45:40.396603  380723 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0819 17:45:40.396632  380723 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0819 17:45:40.397254  380723 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0819 17:45:40.397274  380723 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0819 17:45:40.466741  380723 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0819 17:45:40.466773  380723 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0819 17:45:40.488284  380723 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0819 17:45:40.488320  380723 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0819 17:45:40.500237  380723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0819 17:45:40.502036  380723 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0819 17:45:40.502068  380723 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0819 17:45:40.531483  380723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0819 17:45:40.558724  380723 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 17:45:40.558747  380723 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0819 17:45:40.560447  380723 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0819 17:45:40.560465  380723 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0819 17:45:40.563990  380723 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0819 17:45:40.564007  380723 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0819 17:45:40.565471  380723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0819 17:45:40.569573  380723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0819 17:45:40.601794  380723 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0819 17:45:40.601825  380723 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0819 17:45:40.603693  380723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 17:45:40.619428  380723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0819 17:45:40.653004  380723 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0819 17:45:40.653035  380723 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0819 17:45:40.654545  380723 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0819 17:45:40.654562  380723 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0819 17:45:40.668442  380723 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0819 17:45:40.668473  380723 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0819 17:45:40.682845  380723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 17:45:40.729278  380723 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0819 17:45:40.729310  380723 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0819 17:45:40.737396  380723 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 17:45:40.737422  380723 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 17:45:40.738692  380723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0819 17:45:40.798959  380723 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0819 17:45:40.798991  380723 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0819 17:45:40.812153  380723 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0819 17:45:40.812185  380723 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0819 17:45:40.840237  380723 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0819 17:45:40.840264  380723 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0819 17:45:40.938995  380723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0819 17:45:40.969479  380723 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0819 17:45:40.969502  380723 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0819 17:45:40.989420  380723 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 17:45:40.989454  380723 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 17:45:41.026265  380723 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0819 17:45:41.026301  380723 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0819 17:45:41.084753  380723 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0819 17:45:41.084780  380723 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0819 17:45:41.088560  380723 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0819 17:45:41.088580  380723 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0819 17:45:41.092243  380723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0819 17:45:41.118957  380723 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0819 17:45:41.118990  380723 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0819 17:45:41.159935  380723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 17:45:41.253316  380723 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0819 17:45:41.253347  380723 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0819 17:45:41.262531  380723 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0819 17:45:41.262559  380723 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0819 17:45:41.283884  380723 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 17:45:41.283906  380723 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0819 17:45:41.503618  380723 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0819 17:45:41.503651  380723 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0819 17:45:41.594582  380723 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0819 17:45:41.594631  380723 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0819 17:45:41.602212  380723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 17:45:41.760294  380723 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0819 17:45:41.760317  380723 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0819 17:45:41.833334  380723 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0819 17:45:41.833368  380723 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0819 17:45:42.042869  380723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0819 17:45:42.120663  380723 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0819 17:45:42.120709  380723 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0819 17:45:42.465964  380723 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0819 17:45:42.465992  380723 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0819 17:45:42.794515  380723 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.422757477s)
	I0819 17:45:42.794561  380723 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0819 17:45:42.794534  380723 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.422853317s)
	I0819 17:45:42.795353  380723 node_ready.go:35] waiting up to 6m0s for node "addons-347256" to be "Ready" ...
	I0819 17:45:42.803483  380723 node_ready.go:49] node "addons-347256" has status "Ready":"True"
	I0819 17:45:42.803514  380723 node_ready.go:38] duration metric: took 8.11951ms for node "addons-347256" to be "Ready" ...
	I0819 17:45:42.803529  380723 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 17:45:42.833996  380723 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-77256" in "kube-system" namespace to be "Ready" ...
	I0819 17:45:42.853452  380723 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0819 17:45:42.853482  380723 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0819 17:45:43.311446  380723 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-347256" context rescaled to 1 replicas
	I0819 17:45:43.320040  380723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0819 17:45:44.880986  380723 pod_ready.go:103] pod "coredns-6f6b679f8f-77256" in "kube-system" namespace has status "Ready":"False"
	I0819 17:45:46.985038  380723 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0819 17:45:46.985086  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:46.988084  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:46.988625  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:46.988658  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:46.988864  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:46.989113  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:46.989319  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:46.989515  380723 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/id_rsa Username:docker}
	I0819 17:45:47.301963  380723 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0819 17:45:47.394293  380723 addons.go:234] Setting addon gcp-auth=true in "addons-347256"
	I0819 17:45:47.394362  380723 host.go:66] Checking if "addons-347256" exists ...
	I0819 17:45:47.394793  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:47.394830  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:47.411543  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41651
	I0819 17:45:47.412021  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:47.412617  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:47.412646  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:47.412998  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:47.413486  380723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:45:47.413512  380723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:45:47.427830  380723 pod_ready.go:103] pod "coredns-6f6b679f8f-77256" in "kube-system" namespace has status "Ready":"False"
	I0819 17:45:47.429244  380723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45003
	I0819 17:45:47.429805  380723 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:45:47.430451  380723 main.go:141] libmachine: Using API Version  1
	I0819 17:45:47.430480  380723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:45:47.430836  380723 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:45:47.431043  380723 main.go:141] libmachine: (addons-347256) Calling .GetState
	I0819 17:45:47.432699  380723 main.go:141] libmachine: (addons-347256) Calling .DriverName
	I0819 17:45:47.432977  380723 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0819 17:45:47.433001  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHHostname
	I0819 17:45:47.436157  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:47.436568  380723 main.go:141] libmachine: (addons-347256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9a:be", ip: ""} in network mk-addons-347256: {Iface:virbr1 ExpiryTime:2024-08-19 18:45:08 +0000 UTC Type:0 Mac:52:54:00:96:9a:be Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-347256 Clientid:01:52:54:00:96:9a:be}
	I0819 17:45:47.436601  380723 main.go:141] libmachine: (addons-347256) DBG | domain addons-347256 has defined IP address 192.168.39.18 and MAC address 52:54:00:96:9a:be in network mk-addons-347256
	I0819 17:45:47.436821  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHPort
	I0819 17:45:47.437039  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHKeyPath
	I0819 17:45:47.437285  380723 main.go:141] libmachine: (addons-347256) Calling .GetSSHUsername
	I0819 17:45:47.437461  380723 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/addons-347256/id_rsa Username:docker}
	I0819 17:45:47.656742  380723 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.156468649s)
	I0819 17:45:47.656796  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:47.656813  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:47.656861  380723 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.125342383s)
	I0819 17:45:47.656917  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:47.656930  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:47.656941  380723 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.091435532s)
	I0819 17:45:47.656965  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:47.656976  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:47.656979  380723 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.087378027s)
	I0819 17:45:47.657037  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:47.657049  380723 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.053335955s)
	I0819 17:45:47.657074  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:47.657085  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:47.657124  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:47.657141  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:47.657154  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:47.657164  380723 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.974295861s)
	I0819 17:45:47.657140  380723 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.03768664s)
	I0819 17:45:47.657193  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:47.657199  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:47.657205  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:47.657211  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:47.657242  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:47.657258  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:47.657268  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:47.657278  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:47.657054  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:47.657285  380723 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (6.918569289s)
	I0819 17:45:47.657306  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:47.657316  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:47.657169  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:47.657401  380723 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.718378622s)
	I0819 17:45:47.657417  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:47.657424  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:47.657640  380723 main.go:141] libmachine: (addons-347256) DBG | Closing plugin on server side
	I0819 17:45:47.657677  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:47.657684  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:47.657691  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:47.657697  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:47.657881  380723 main.go:141] libmachine: (addons-347256) DBG | Closing plugin on server side
	I0819 17:45:47.657918  380723 main.go:141] libmachine: (addons-347256) DBG | Closing plugin on server side
	I0819 17:45:47.657933  380723 main.go:141] libmachine: (addons-347256) DBG | Closing plugin on server side
	I0819 17:45:47.657952  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:47.657958  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:47.657966  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:47.657983  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:47.658033  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:47.658041  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:47.658300  380723 main.go:141] libmachine: (addons-347256) DBG | Closing plugin on server side
	I0819 17:45:47.658333  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:47.658341  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:47.658539  380723 main.go:141] libmachine: (addons-347256) DBG | Closing plugin on server side
	I0819 17:45:47.658560  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:47.658566  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:47.658574  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:47.658581  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:47.659359  380723 main.go:141] libmachine: (addons-347256) DBG | Closing plugin on server side
	I0819 17:45:47.659411  380723 main.go:141] libmachine: (addons-347256) DBG | Closing plugin on server side
	I0819 17:45:47.659428  380723 main.go:141] libmachine: (addons-347256) DBG | Closing plugin on server side
	I0819 17:45:47.659450  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:47.659458  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:47.659656  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:47.659667  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:47.659691  380723 addons.go:475] Verifying addon ingress=true in "addons-347256"
	I0819 17:45:47.659839  380723 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.567571493s)
	I0819 17:45:47.659867  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:47.659879  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:47.659981  380723 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.500015623s)
	I0819 17:45:47.659995  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:47.660004  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:47.660217  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:47.660264  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:47.660363  380723 main.go:141] libmachine: (addons-347256) DBG | Closing plugin on server side
	I0819 17:45:47.660400  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:47.660414  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:47.660422  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:47.660429  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:47.660483  380723 main.go:141] libmachine: (addons-347256) DBG | Closing plugin on server side
	I0819 17:45:47.660504  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:47.660511  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:47.660522  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:47.660530  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:47.660631  380723 main.go:141] libmachine: (addons-347256) DBG | Closing plugin on server side
	I0819 17:45:47.660676  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:47.660687  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:47.660696  380723 addons.go:475] Verifying addon metrics-server=true in "addons-347256"
	I0819 17:45:47.659764  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:47.660878  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:47.660890  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:47.660899  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:47.661224  380723 main.go:141] libmachine: (addons-347256) DBG | Closing plugin on server side
	I0819 17:45:47.661248  380723 main.go:141] libmachine: (addons-347256) DBG | Closing plugin on server side
	I0819 17:45:47.661277  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:47.661284  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:47.661292  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:47.661312  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:47.661382  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:47.661391  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:47.661398  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:47.661405  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:47.661450  380723 main.go:141] libmachine: (addons-347256) DBG | Closing plugin on server side
	I0819 17:45:47.661473  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:47.661480  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:47.661487  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:47.661493  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:47.661739  380723 main.go:141] libmachine: (addons-347256) DBG | Closing plugin on server side
	I0819 17:45:47.661772  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:47.661779  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:47.662068  380723 main.go:141] libmachine: (addons-347256) DBG | Closing plugin on server side
	I0819 17:45:47.662103  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:47.662114  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:47.662122  380723 addons.go:475] Verifying addon registry=true in "addons-347256"
	I0819 17:45:47.662821  380723 main.go:141] libmachine: (addons-347256) DBG | Closing plugin on server side
	I0819 17:45:47.662877  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:47.662899  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:47.662942  380723 main.go:141] libmachine: (addons-347256) DBG | Closing plugin on server side
	I0819 17:45:47.662986  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:47.662993  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:47.663262  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:47.663293  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:47.664263  380723 out.go:177] * Verifying ingress addon...
	I0819 17:45:47.664639  380723 out.go:177] * Verifying registry addon...
	I0819 17:45:47.664710  380723 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-347256 service yakd-dashboard -n yakd-dashboard
	
	I0819 17:45:47.666530  380723 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0819 17:45:47.667096  380723 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0819 17:45:47.682207  380723 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0819 17:45:47.682235  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:45:47.682342  380723 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0819 17:45:47.682359  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:45:47.707450  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:47.707475  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:47.707769  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:47.707792  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	W0819 17:45:47.707895  380723 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
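The "object has been modified" warning above is the API server's optimistic-concurrency check rejecting an update built from a stale resourceVersion of the local-path StorageClass. A common way to sidestep that race is to send a merge patch rather than a full update, since a patch carries no resourceVersion. The snippet below is only an illustrative sketch (Python driving kubectl; the context name addons-347256 and the local-path class name are taken from this log), not the addon's actual retry logic:

    import json
    import subprocess

    def set_default_storage_class(name: str, context: str) -> None:
        # A JSON merge patch omits resourceVersion, so it cannot hit the
        # "object has been modified" conflict that a read-modify-write update can.
        patch = {"metadata": {"annotations": {
            "storageclass.kubernetes.io/is-default-class": "true"}}}
        subprocess.run(
            ["kubectl", "--context", context, "patch", "storageclass", name,
             "--type=merge", "-p", json.dumps(patch)],
            check=True,
        )

    set_default_storage_class("local-path", "addons-347256")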
	I0819 17:45:47.719955  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:47.719986  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:47.720314  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:47.720338  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:48.199440  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:45:48.199617  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:45:48.508355  380723 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.906084869s)
	W0819 17:45:48.508413  380723 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0819 17:45:48.508418  380723 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.465499848s)
	I0819 17:45:48.508465  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:48.508481  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:48.508477  380723 retry.go:31] will retry after 209.756832ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0819 17:45:48.508776  380723 main.go:141] libmachine: (addons-347256) DBG | Closing plugin on server side
	I0819 17:45:48.508832  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:48.508842  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:48.508858  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:48.508870  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:48.509113  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:48.509131  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:48.699192  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:45:48.700051  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:45:48.719273  380723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
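The apply failure quoted above ("no matches for kind \"VolumeSnapshotClass\" ... ensure CRDs are installed first") is the usual CRD ordering race: the custom resource was submitted in the same apply as the snapshot.storage.k8s.io CRDs, before those CRDs were established, and the log shows minikube retrying with --force about 200ms later. A hedged sketch of the same idea done explicitly, assuming kubectl can reach this cluster and the manifest paths from the log are accessible where kubectl runs:

    import subprocess

    CONTEXT = "addons-347256"  # context name taken from this test run

    def kubectl(*args: str) -> None:
        subprocess.run(["kubectl", "--context", CONTEXT, *args], check=True)

    # 1. Apply only the CRDs first.
    for manifest in [
        "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
        "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml",
        "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml",
    ]:
        kubectl("apply", "-f", manifest)

    # 2. Block until the API server reports each CRD as Established.
    kubectl("wait", "--for=condition=established", "--timeout=60s",
            "crd/volumesnapshotclasses.snapshot.storage.k8s.io",
            "crd/volumesnapshotcontents.snapshot.storage.k8s.io",
            "crd/volumesnapshots.snapshot.storage.k8s.io")

    # 3. Only now apply objects of the new kinds (e.g. the VolumeSnapshotClass),
    #    which avoids the "resource mapping not found" error seen above.
    kubectl("apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml")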
	I0819 17:45:49.171415  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:45:49.171614  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:45:49.671515  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:45:49.671797  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:45:49.861006  380723 pod_ready.go:103] pod "coredns-6f6b679f8f-77256" in "kube-system" namespace has status "Ready":"False"
	I0819 17:45:50.141545  380723 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.821438478s)
	I0819 17:45:50.141559  380723 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.708555525s)
	I0819 17:45:50.141605  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:50.141621  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:50.141915  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:50.141973  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:50.141988  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:50.142001  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:50.142004  380723 main.go:141] libmachine: (addons-347256) DBG | Closing plugin on server side
	I0819 17:45:50.142353  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:50.142374  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:50.142386  380723 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-347256"
	I0819 17:45:50.142391  380723 main.go:141] libmachine: (addons-347256) DBG | Closing plugin on server side
	I0819 17:45:50.143335  380723 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 17:45:50.144127  380723 out.go:177] * Verifying csi-hostpath-driver addon...
	I0819 17:45:50.145589  380723 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0819 17:45:50.146391  380723 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0819 17:45:50.146960  380723 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0819 17:45:50.146976  380723 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0819 17:45:50.160892  380723 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0819 17:45:50.160916  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:45:50.190635  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:45:50.191469  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:45:50.260757  380723 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0819 17:45:50.260790  380723 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0819 17:45:50.308446  380723 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0819 17:45:50.308477  380723 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0819 17:45:50.342349  380723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0819 17:45:50.652779  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:45:50.671144  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:45:50.671420  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:45:51.019310  380723 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.299985163s)
	I0819 17:45:51.019376  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:51.019393  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:51.019725  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:51.019743  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:51.019753  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:51.019761  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:51.020011  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:51.020071  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:51.020044  380723 main.go:141] libmachine: (addons-347256) DBG | Closing plugin on server side
	I0819 17:45:51.151527  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:45:51.170353  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:45:51.171932  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:45:51.666628  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:45:51.715004  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:45:51.716612  380723 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.374218153s)
	I0819 17:45:51.716676  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:51.716690  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:51.717029  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:51.717067  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:51.717077  380723 main.go:141] libmachine: Making call to close driver server
	I0819 17:45:51.717085  380723 main.go:141] libmachine: (addons-347256) Calling .Close
	I0819 17:45:51.717088  380723 main.go:141] libmachine: (addons-347256) DBG | Closing plugin on server side
	I0819 17:45:51.717349  380723 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:45:51.717366  380723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:45:51.719498  380723 addons.go:475] Verifying addon gcp-auth=true in "addons-347256"
	I0819 17:45:51.721361  380723 out.go:177] * Verifying gcp-auth addon...
	I0819 17:45:51.723644  380723 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0819 17:45:51.740101  380723 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0819 17:45:51.740140  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:45:51.740242  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:45:52.154370  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:45:52.172650  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:45:52.173017  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:45:52.228241  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:45:52.341085  380723 pod_ready.go:103] pod "coredns-6f6b679f8f-77256" in "kube-system" namespace has status "Ready":"False"
	I0819 17:45:52.652512  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:45:52.754360  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:45:52.754367  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:45:52.754453  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:45:53.152493  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:45:53.252309  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:45:53.252343  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:45:53.252454  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:45:53.652072  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:45:53.672284  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:45:53.672579  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:45:53.751534  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:45:54.152001  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:45:54.170809  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:45:54.171084  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:45:54.227395  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:45:54.650657  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:45:54.670176  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:45:54.672237  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:45:54.728246  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:45:54.840058  380723 pod_ready.go:103] pod "coredns-6f6b679f8f-77256" in "kube-system" namespace has status "Ready":"False"
	I0819 17:45:55.478775  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:45:55.579781  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:45:55.580421  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:45:55.580563  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:45:55.651874  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:45:55.670109  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:45:55.671684  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:45:55.727482  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:45:56.151248  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:45:56.171644  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:45:56.172624  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:45:56.227229  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:45:56.650498  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:45:56.670050  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:45:56.672432  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:45:56.727304  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:45:56.841046  380723 pod_ready.go:103] pod "coredns-6f6b679f8f-77256" in "kube-system" namespace has status "Ready":"False"
	I0819 17:45:57.151254  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:45:57.171006  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:45:57.171916  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:45:57.227769  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:45:57.340949  380723 pod_ready.go:98] pod "coredns-6f6b679f8f-77256" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-19 17:45:57 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-19 17:45:40 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-19 17:45:40 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-19 17:45:40 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-19 17:45:40 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.18 HostIPs:[{IP:192.168.39.
18}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-08-19 17:45:40 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-19 17:45:45 +0000 UTC,FinishedAt:2024-08-19 17:45:55 +0000 UTC,ContainerID:cri-o://de82d178296b49a61386a18d626e8a0b47d3af5002f63b18fb061ff4fdcb95b7,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://de82d178296b49a61386a18d626e8a0b47d3af5002f63b18fb061ff4fdcb95b7 Started:0xc002938f20 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0021394f0} {Name:kube-api-access-l97x8 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc002139500}] User:nil
AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0819 17:45:57.340988  380723 pod_ready.go:82] duration metric: took 14.506958093s for pod "coredns-6f6b679f8f-77256" in "kube-system" namespace to be "Ready" ...
	E0819 17:45:57.341009  380723 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-6f6b679f8f-77256" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-19 17:45:57 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-19 17:45:40 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-19 17:45:40 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-19 17:45:40 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-19 17:45:40 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.3
9.18 HostIPs:[{IP:192.168.39.18}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-08-19 17:45:40 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-19 17:45:45 +0000 UTC,FinishedAt:2024-08-19 17:45:55 +0000 UTC,ContainerID:cri-o://de82d178296b49a61386a18d626e8a0b47d3af5002f63b18fb061ff4fdcb95b7,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://de82d178296b49a61386a18d626e8a0b47d3af5002f63b18fb061ff4fdcb95b7 Started:0xc002938f20 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0021394f0} {Name:kube-api-access-l97x8 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveRead
Only:0xc002139500}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0819 17:45:57.341023  380723 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-tljrk" in "kube-system" namespace to be "Ready" ...
	I0819 17:45:57.652833  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:45:57.672404  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:45:57.678834  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:45:57.727705  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:45:58.376382  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:45:58.381054  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:45:58.381283  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:45:58.381831  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:45:58.651051  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:45:58.670751  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:45:58.670963  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:45:58.727819  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:45:59.151295  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:45:59.171726  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:45:59.171891  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:45:59.227466  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:45:59.347011  380723 pod_ready.go:103] pod "coredns-6f6b679f8f-tljrk" in "kube-system" namespace has status "Ready":"False"
	I0819 17:45:59.652459  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:45:59.671098  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:45:59.671353  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:45:59.728006  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:00.152041  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:00.170975  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:00.171289  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:00.227975  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:00.650734  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:00.670879  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:00.671408  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:00.726968  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:01.151702  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:01.172147  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:01.172158  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:01.228076  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:01.347891  380723 pod_ready.go:103] pod "coredns-6f6b679f8f-tljrk" in "kube-system" namespace has status "Ready":"False"
	I0819 17:46:01.651067  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:01.672043  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:01.672352  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:01.727112  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:02.151784  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:02.251520  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:02.251589  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:02.251868  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:02.650887  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:02.670498  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:02.671190  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:02.727894  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:03.152373  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:03.173355  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:03.173608  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:03.252079  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:03.348701  380723 pod_ready.go:103] pod "coredns-6f6b679f8f-tljrk" in "kube-system" namespace has status "Ready":"False"
	I0819 17:46:03.650560  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:03.670584  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:03.671619  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:03.727971  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:04.152279  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:04.171901  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:04.171926  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:04.227790  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:04.652016  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:04.670906  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:04.671259  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:04.727600  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:05.151415  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:05.171056  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:05.171854  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:05.226843  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:05.650500  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:05.671853  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:05.671876  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:05.727481  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:05.847042  380723 pod_ready.go:103] pod "coredns-6f6b679f8f-tljrk" in "kube-system" namespace has status "Ready":"False"
	I0819 17:46:06.154376  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:06.171330  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:06.171791  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:06.228140  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:06.651358  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:06.671968  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:06.672611  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:06.728119  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:07.151345  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:07.171482  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:07.172033  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:07.227995  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:07.651724  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:07.671238  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:07.672833  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:07.727648  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:07.847271  380723 pod_ready.go:103] pod "coredns-6f6b679f8f-tljrk" in "kube-system" namespace has status "Ready":"False"
	I0819 17:46:08.152418  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:08.171219  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:08.171858  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:08.227301  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:08.650958  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:08.671623  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:08.672167  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:08.732311  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:09.152242  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:09.171787  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:09.174068  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:09.227398  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:09.651149  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:09.679634  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:09.679800  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:09.727099  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:09.847978  380723 pod_ready.go:103] pod "coredns-6f6b679f8f-tljrk" in "kube-system" namespace has status "Ready":"False"
	I0819 17:46:10.152007  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:10.171537  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:10.172252  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:10.227414  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:10.651497  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:10.671074  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:10.671824  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:10.727372  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:11.151927  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:11.171319  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:11.171642  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:11.227402  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:11.650959  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:11.671712  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:11.671895  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:11.727046  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:12.151166  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:12.171267  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:12.171870  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:12.227798  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:12.347832  380723 pod_ready.go:103] pod "coredns-6f6b679f8f-tljrk" in "kube-system" namespace has status "Ready":"False"
	I0819 17:46:12.651180  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:12.672301  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:12.672716  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:12.727790  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:13.150794  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:13.172858  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:13.173312  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:13.228256  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:13.651353  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:13.671206  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:13.671329  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:13.727206  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:14.151619  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:14.170410  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:14.170777  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:14.227605  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:14.650707  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:14.670630  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:14.671536  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:14.727350  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:14.847694  380723 pod_ready.go:103] pod "coredns-6f6b679f8f-tljrk" in "kube-system" namespace has status "Ready":"False"
	I0819 17:46:15.152079  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:15.171719  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:15.172125  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:15.227190  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:15.651919  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:15.672079  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:15.672193  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:15.751805  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:16.150888  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:16.171221  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:16.171358  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:16.227481  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:16.652037  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:16.670480  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:16.672101  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:16.727645  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:17.151453  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:17.170375  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:17.171355  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:17.227957  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:17.356864  380723 pod_ready.go:103] pod "coredns-6f6b679f8f-tljrk" in "kube-system" namespace has status "Ready":"False"
	I0819 17:46:17.652314  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:17.671961  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:17.672105  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:17.728109  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:18.151472  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:18.171635  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:18.172010  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:18.227342  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:18.650748  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:18.670967  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:18.672046  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:18.727710  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:19.152581  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:19.171075  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:19.171371  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:19.226952  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:19.651790  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:19.670669  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:19.672411  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:19.753947  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:19.849046  380723 pod_ready.go:103] pod "coredns-6f6b679f8f-tljrk" in "kube-system" namespace has status "Ready":"False"
	I0819 17:46:20.151555  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:20.170964  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:20.172003  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:20.228416  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:20.804995  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:20.805383  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:20.807245  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:20.807464  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:21.151765  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:21.170601  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:21.172594  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:21.228518  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:21.652639  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:21.670790  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:21.671737  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:21.751457  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:22.151170  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:22.170469  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:22.172432  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:22.226945  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:22.348721  380723 pod_ready.go:103] pod "coredns-6f6b679f8f-tljrk" in "kube-system" namespace has status "Ready":"False"
	I0819 17:46:22.651978  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:22.671791  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:22.672170  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:22.728108  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:23.153420  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:23.172246  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:23.172323  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:23.507494  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:23.507503  380723 pod_ready.go:93] pod "coredns-6f6b679f8f-tljrk" in "kube-system" namespace has status "Ready":"True"
	I0819 17:46:23.507555  380723 pod_ready.go:82] duration metric: took 26.166518536s for pod "coredns-6f6b679f8f-tljrk" in "kube-system" namespace to be "Ready" ...
	I0819 17:46:23.507570  380723 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-347256" in "kube-system" namespace to be "Ready" ...
	I0819 17:46:23.543198  380723 pod_ready.go:93] pod "etcd-addons-347256" in "kube-system" namespace has status "Ready":"True"
	I0819 17:46:23.543227  380723 pod_ready.go:82] duration metric: took 35.64844ms for pod "etcd-addons-347256" in "kube-system" namespace to be "Ready" ...
	I0819 17:46:23.543242  380723 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-347256" in "kube-system" namespace to be "Ready" ...
	I0819 17:46:23.550189  380723 pod_ready.go:93] pod "kube-apiserver-addons-347256" in "kube-system" namespace has status "Ready":"True"
	I0819 17:46:23.550219  380723 pod_ready.go:82] duration metric: took 6.968452ms for pod "kube-apiserver-addons-347256" in "kube-system" namespace to be "Ready" ...
	I0819 17:46:23.550233  380723 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-347256" in "kube-system" namespace to be "Ready" ...
	I0819 17:46:23.564849  380723 pod_ready.go:93] pod "kube-controller-manager-addons-347256" in "kube-system" namespace has status "Ready":"True"
	I0819 17:46:23.564880  380723 pod_ready.go:82] duration metric: took 14.637248ms for pod "kube-controller-manager-addons-347256" in "kube-system" namespace to be "Ready" ...
	I0819 17:46:23.564900  380723 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-72dbf" in "kube-system" namespace to be "Ready" ...
	I0819 17:46:23.575024  380723 pod_ready.go:93] pod "kube-proxy-72dbf" in "kube-system" namespace has status "Ready":"True"
	I0819 17:46:23.575057  380723 pod_ready.go:82] duration metric: took 10.14737ms for pod "kube-proxy-72dbf" in "kube-system" namespace to be "Ready" ...
	I0819 17:46:23.575070  380723 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-347256" in "kube-system" namespace to be "Ready" ...
	I0819 17:46:23.651861  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:23.675201  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:23.675635  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:23.750701  380723 pod_ready.go:93] pod "kube-scheduler-addons-347256" in "kube-system" namespace has status "Ready":"True"
	I0819 17:46:23.750736  380723 pod_ready.go:82] duration metric: took 175.65538ms for pod "kube-scheduler-addons-347256" in "kube-system" namespace to be "Ready" ...
	I0819 17:46:23.750751  380723 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-x924x" in "kube-system" namespace to be "Ready" ...
	I0819 17:46:23.752728  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:24.146209  380723 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-x924x" in "kube-system" namespace has status "Ready":"True"
	I0819 17:46:24.146236  380723 pod_ready.go:82] duration metric: took 395.476606ms for pod "nvidia-device-plugin-daemonset-x924x" in "kube-system" namespace to be "Ready" ...
	I0819 17:46:24.146247  380723 pod_ready.go:39] duration metric: took 41.342703446s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 17:46:24.146269  380723 api_server.go:52] waiting for apiserver process to appear ...
	I0819 17:46:24.146364  380723 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 17:46:24.160374  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:24.166910  380723 api_server.go:72] duration metric: took 44.315873738s to wait for apiserver process to appear ...
	I0819 17:46:24.166938  380723 api_server.go:88] waiting for apiserver healthz status ...
	I0819 17:46:24.166961  380723 api_server.go:253] Checking apiserver healthz at https://192.168.39.18:8443/healthz ...
	I0819 17:46:24.172887  380723 api_server.go:279] https://192.168.39.18:8443/healthz returned 200:
	ok
	I0819 17:46:24.173144  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:24.173923  380723 api_server.go:141] control plane version: v1.31.0
	I0819 17:46:24.173944  380723 api_server.go:131] duration metric: took 6.998235ms to wait for apiserver health ...
	I0819 17:46:24.173952  380723 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 17:46:24.174500  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:24.227338  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:24.349491  380723 system_pods.go:59] 18 kube-system pods found
	I0819 17:46:24.349525  380723 system_pods.go:61] "coredns-6f6b679f8f-tljrk" [6c9217a4-6879-4b7e-a6b5-78dfa1b85ee4] Running
	I0819 17:46:24.349533  380723 system_pods.go:61] "csi-hostpath-attacher-0" [e128fb19-e720-44a6-a1e9-c5f242968b55] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0819 17:46:24.349540  380723 system_pods.go:61] "csi-hostpath-resizer-0" [734bcf24-7c89-469e-9020-fdb24d47cb83] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0819 17:46:24.349550  380723 system_pods.go:61] "csi-hostpathplugin-hkr5d" [16796ce0-7f87-46f8-a9a7-0afa96f3f575] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0819 17:46:24.349555  380723 system_pods.go:61] "etcd-addons-347256" [e9c774cf-14f4-433f-8c4b-96d30f1b8f0f] Running
	I0819 17:46:24.349559  380723 system_pods.go:61] "kube-apiserver-addons-347256" [e35199f6-4a80-4d84-9a30-6e285696f02e] Running
	I0819 17:46:24.349562  380723 system_pods.go:61] "kube-controller-manager-addons-347256" [b9b2d2d8-7f8f-4373-a0a7-cb3dc9d46969] Running
	I0819 17:46:24.349566  380723 system_pods.go:61] "kube-ingress-dns-minikube" [44cd9847-645d-4375-b58a-d153a852f2c7] Running
	I0819 17:46:24.349572  380723 system_pods.go:61] "kube-proxy-72dbf" [a50d76ee-c7cb-4141-9bc3-2b530cb531e3] Running
	I0819 17:46:24.349578  380723 system_pods.go:61] "kube-scheduler-addons-347256" [0367e97e-fee8-48cf-bebc-b3d55381da8f] Running
	I0819 17:46:24.349586  380723 system_pods.go:61] "metrics-server-8988944d9-xkj9p" [2cb192e0-5048-46b0-b74e-86ad5e4d39ea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 17:46:24.349597  380723 system_pods.go:61] "nvidia-device-plugin-daemonset-x924x" [b28534d9-e3b6-474a-90ca-04048cd59d85] Running
	I0819 17:46:24.349603  380723 system_pods.go:61] "registry-6fb4cdfc84-szv4z" [9388e4e2-9cbc-4408-8be6-ec9be4b5737f] Running
	I0819 17:46:24.349613  380723 system_pods.go:61] "registry-proxy-9q2l4" [73b6c461-1963-4b13-bb12-e75024c4c5d7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0819 17:46:24.349623  380723 system_pods.go:61] "snapshot-controller-56fcc65765-4jtx2" [bcc4eb99-92c0-4fe4-815c-ef9576839c9c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0819 17:46:24.349633  380723 system_pods.go:61] "snapshot-controller-56fcc65765-d7mhz" [2d8e7bbb-d917-42da-9c13-63cfd7e933ce] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0819 17:46:24.349637  380723 system_pods.go:61] "storage-provisioner" [8349a726-cf5d-472f-aec7-5dc582e1d9db] Running
	I0819 17:46:24.349643  380723 system_pods.go:61] "tiller-deploy-b48cc5f79-bqbr9" [801ad1ee-bac9-4f5e-9d38-655f7fbf1779] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0819 17:46:24.349651  380723 system_pods.go:74] duration metric: took 175.691658ms to wait for pod list to return data ...
	I0819 17:46:24.349661  380723 default_sa.go:34] waiting for default service account to be created ...
	I0819 17:46:24.546382  380723 default_sa.go:45] found service account: "default"
	I0819 17:46:24.546414  380723 default_sa.go:55] duration metric: took 196.745659ms for default service account to be created ...
	I0819 17:46:24.546423  380723 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 17:46:24.652817  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:24.672680  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:24.672755  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:24.729568  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:24.752623  380723 system_pods.go:86] 18 kube-system pods found
	I0819 17:46:24.752651  380723 system_pods.go:89] "coredns-6f6b679f8f-tljrk" [6c9217a4-6879-4b7e-a6b5-78dfa1b85ee4] Running
	I0819 17:46:24.752662  380723 system_pods.go:89] "csi-hostpath-attacher-0" [e128fb19-e720-44a6-a1e9-c5f242968b55] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0819 17:46:24.752668  380723 system_pods.go:89] "csi-hostpath-resizer-0" [734bcf24-7c89-469e-9020-fdb24d47cb83] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0819 17:46:24.752676  380723 system_pods.go:89] "csi-hostpathplugin-hkr5d" [16796ce0-7f87-46f8-a9a7-0afa96f3f575] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0819 17:46:24.752681  380723 system_pods.go:89] "etcd-addons-347256" [e9c774cf-14f4-433f-8c4b-96d30f1b8f0f] Running
	I0819 17:46:24.752685  380723 system_pods.go:89] "kube-apiserver-addons-347256" [e35199f6-4a80-4d84-9a30-6e285696f02e] Running
	I0819 17:46:24.752688  380723 system_pods.go:89] "kube-controller-manager-addons-347256" [b9b2d2d8-7f8f-4373-a0a7-cb3dc9d46969] Running
	I0819 17:46:24.752694  380723 system_pods.go:89] "kube-ingress-dns-minikube" [44cd9847-645d-4375-b58a-d153a852f2c7] Running
	I0819 17:46:24.752697  380723 system_pods.go:89] "kube-proxy-72dbf" [a50d76ee-c7cb-4141-9bc3-2b530cb531e3] Running
	I0819 17:46:24.752701  380723 system_pods.go:89] "kube-scheduler-addons-347256" [0367e97e-fee8-48cf-bebc-b3d55381da8f] Running
	I0819 17:46:24.752705  380723 system_pods.go:89] "metrics-server-8988944d9-xkj9p" [2cb192e0-5048-46b0-b74e-86ad5e4d39ea] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 17:46:24.752709  380723 system_pods.go:89] "nvidia-device-plugin-daemonset-x924x" [b28534d9-e3b6-474a-90ca-04048cd59d85] Running
	I0819 17:46:24.752714  380723 system_pods.go:89] "registry-6fb4cdfc84-szv4z" [9388e4e2-9cbc-4408-8be6-ec9be4b5737f] Running
	I0819 17:46:24.752719  380723 system_pods.go:89] "registry-proxy-9q2l4" [73b6c461-1963-4b13-bb12-e75024c4c5d7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0819 17:46:24.752729  380723 system_pods.go:89] "snapshot-controller-56fcc65765-4jtx2" [bcc4eb99-92c0-4fe4-815c-ef9576839c9c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0819 17:46:24.752736  380723 system_pods.go:89] "snapshot-controller-56fcc65765-d7mhz" [2d8e7bbb-d917-42da-9c13-63cfd7e933ce] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0819 17:46:24.752740  380723 system_pods.go:89] "storage-provisioner" [8349a726-cf5d-472f-aec7-5dc582e1d9db] Running
	I0819 17:46:24.752745  380723 system_pods.go:89] "tiller-deploy-b48cc5f79-bqbr9" [801ad1ee-bac9-4f5e-9d38-655f7fbf1779] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0819 17:46:24.752752  380723 system_pods.go:126] duration metric: took 206.324075ms to wait for k8s-apps to be running ...
	I0819 17:46:24.752759  380723 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 17:46:24.752807  380723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 17:46:24.768631  380723 system_svc.go:56] duration metric: took 15.858708ms WaitForService to wait for kubelet
	I0819 17:46:24.768665  380723 kubeadm.go:582] duration metric: took 44.917633684s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 17:46:24.768695  380723 node_conditions.go:102] verifying NodePressure condition ...
	I0819 17:46:24.950905  380723 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 17:46:24.950956  380723 node_conditions.go:123] node cpu capacity is 2
	I0819 17:46:24.950975  380723 node_conditions.go:105] duration metric: took 182.272659ms to run NodePressure ...
	I0819 17:46:24.950993  380723 start.go:241] waiting for startup goroutines ...
	I0819 17:46:25.152346  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:25.171137  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:25.171736  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:25.227848  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:25.651876  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:25.671621  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:25.671859  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:25.727593  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:26.151523  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:26.171339  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:26.172176  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:26.227416  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:26.650772  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:26.673202  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:26.678854  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:26.727222  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:27.151790  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:27.171755  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:27.172033  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:27.228058  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:27.651657  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:27.671825  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:27.672050  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:27.727727  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:28.151579  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:28.171703  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:28.172403  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:28.227528  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:28.651619  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:28.671562  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:28.672259  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:28.727897  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:29.462650  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:29.462697  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:29.463082  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:29.463205  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:29.651360  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:29.672494  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:29.673186  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:29.726693  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:30.151664  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:30.171907  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:46:30.172130  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:30.227706  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:30.652539  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:30.671926  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:30.672391  380723 kapi.go:107] duration metric: took 43.005296913s to wait for kubernetes.io/minikube-addons=registry ...
	I0819 17:46:30.727591  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:31.150621  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:31.170289  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:31.227938  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:31.654479  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:31.671392  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:31.727020  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:32.151987  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:32.171114  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:32.227372  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:32.650785  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:32.670669  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:32.726990  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:33.151420  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:33.172387  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:33.226685  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:33.651026  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:33.678844  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:33.778453  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:34.152053  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:34.171162  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:34.227559  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:34.650950  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:34.670597  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:34.726609  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:35.151207  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:35.171222  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:35.227752  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:35.651071  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:35.672496  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:35.727005  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:36.151590  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:36.170849  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:36.227904  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:36.710733  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:36.711248  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:36.807655  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:37.150513  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:37.170696  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:37.226955  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:37.651784  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:37.670589  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:37.726918  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:38.153916  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:38.171307  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:38.227822  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:38.651401  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:38.671400  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:38.727081  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:39.152252  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:39.171083  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:39.227576  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:39.651504  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:39.670614  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:39.727660  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:40.152872  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:40.251575  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:40.252365  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:40.651382  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:40.671484  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:40.726959  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:41.151615  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:41.170458  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:41.229137  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:41.651234  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:41.671132  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:41.727803  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:42.151183  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:42.171510  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:42.227845  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:42.652299  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:42.672296  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:42.727758  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:43.155214  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:43.174258  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:43.227090  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:43.653881  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:43.671931  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:43.727249  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:44.153406  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:44.176632  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:44.252028  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:44.652214  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:44.672680  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:44.727362  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:45.154451  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:45.258011  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:45.258234  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:45.652045  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:45.671573  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:45.727496  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:46.155034  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:46.170732  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:46.227544  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:46.650568  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:46.675037  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:47.055482  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:47.155921  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:47.257747  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:47.260662  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:47.651018  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:47.672151  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:47.727157  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:48.151887  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:48.170997  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:48.227878  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:48.651832  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:48.751776  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:48.752506  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:49.151553  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:49.171317  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:49.227745  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:49.651278  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:49.670993  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:49.727982  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:50.152335  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:50.253793  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:50.254255  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:50.653415  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:50.671449  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:50.726947  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:51.156947  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:51.181024  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:51.228759  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:51.652066  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:51.673383  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:51.727472  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:52.153170  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:52.172807  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:52.264190  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:52.654284  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:52.671264  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:52.727796  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:53.151624  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:53.172024  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:53.228206  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:53.653092  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:53.670767  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:53.727401  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:54.354971  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:54.363069  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:54.363342  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:54.651263  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:54.671100  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:54.727388  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:55.151830  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:55.170325  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:55.228037  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:55.651345  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:55.671479  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:55.726830  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:56.151792  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:56.170475  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:56.226773  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:56.651262  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:56.672679  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:56.752678  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:57.592115  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:57.592901  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:57.593319  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:57.651733  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:57.671151  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:57.728530  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:58.151662  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:58.171346  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:58.226660  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:58.651302  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:58.671375  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:58.726804  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:59.154735  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:59.171044  380723 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:46:59.254147  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:46:59.651638  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:46:59.671555  380723 kapi.go:107] duration metric: took 1m12.00502431s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0819 17:46:59.727840  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:47:00.155878  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:47:00.228822  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:47:00.734679  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:47:00.734978  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:47:01.151888  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:47:01.227923  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:47:01.652806  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:47:01.727415  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:47:02.151078  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:47:02.227416  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:47:02.651090  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:47:02.727617  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:47:03.151881  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:47:03.227555  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:47:03.650892  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:47:03.728049  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:47:04.151906  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:47:04.227326  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:47:04.652026  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:47:04.756014  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:47:05.151650  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:47:05.251760  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:47:05.651390  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:47:05.728199  380723 kapi.go:107] duration metric: took 1m14.004546899s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0819 17:47:05.730105  380723 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-347256 cluster.
	I0819 17:47:05.731619  380723 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0819 17:47:05.732836  380723 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0819 17:47:06.152029  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:47:06.656148  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:47:07.152018  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:47:07.651959  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:47:08.152802  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:47:08.651467  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:47:09.153047  380723 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:47:09.652133  380723 kapi.go:107] duration metric: took 1m19.505736593s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0819 17:47:09.654184  380723 out.go:177] * Enabled addons: ingress-dns, helm-tiller, metrics-server, cloud-spanner, nvidia-device-plugin, storage-provisioner, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0819 17:47:09.655607  380723 addons.go:510] duration metric: took 1m29.804551273s for enable addons: enabled=[ingress-dns helm-tiller metrics-server cloud-spanner nvidia-device-plugin storage-provisioner yakd default-storageclass inspektor-gadget volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0819 17:47:09.655666  380723 start.go:246] waiting for cluster config update ...
	I0819 17:47:09.655707  380723 start.go:255] writing updated cluster config ...
	I0819 17:47:09.656070  380723 ssh_runner.go:195] Run: rm -f paused
	I0819 17:47:09.712496  380723 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 17:47:09.714377  380723 out.go:177] * Done! kubectl is now configured to use "addons-347256" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 19 17:53:08 addons-347256 crio[688]: time="2024-08-19 17:53:08.983979331Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089988983952857,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d52c27b8-bf13-4fd8-a51a-2e873cc266d9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:53:08 addons-347256 crio[688]: time="2024-08-19 17:53:08.984744556Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a889961b-bec7-4dd9-b1f8-6f95021d0d76 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:53:08 addons-347256 crio[688]: time="2024-08-19 17:53:08.984799443Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a889961b-bec7-4dd9-b1f8-6f95021d0d76 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:53:08 addons-347256 crio[688]: time="2024-08-19 17:53:08.985109182Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8476ea9b84c5677585c184c2a19989966218130ebd87014f16b5d47b610a7bf8,PodSandboxId:3779df5f1e98a011b2ea48e7cbe08ffddc7b5d281e7a8764c73a15dc3a6f7517,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724089821391952812,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-8qm2m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5af6036b-6c99-4583-8178-c1691586b4ac,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e7d5836b553936a23f393970ad8deefd6feed3820d8bde258550dfc206c2757,PodSandboxId:2d699655663d41aaf47ea8ea106b572637b60af2d425235bb90695e5764e6820,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724089681587967061,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9632e6a7-a0a4-4456-ab6f-c0eab065596d,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0d586f0f8bb9c527dd638f71ad853fe948c4a48d287293667d737a19e672a35,PodSandboxId:08bede11788c378bfb268308132a6da06e96de1be20084257a399f82227c9f67,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724089633212039725,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 718e1c4f-05f0-49bc-b
fc2-7dda02db3d8e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8d158637619c277ef9f9387b183af1877f4383af2f22aaddac3e7716ede8a08,PodSandboxId:3eaf4c04776d5d54255f718eb3e644edd9c7a10fe8ba78ea9e2654d6d547990b,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1724089592959813692,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-dtqhx,io.kubernetes.pod.name
space: local-path-storage,io.kubernetes.pod.uid: eb249463-f4d8-4b25-812f-c1e2f481cffd,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0ca4eb985ce7b35dd9be14e75a8600b776dd7af34e703f47903f17d58fe8638,PodSandboxId:6d360799061341a4737e1d4f98f5669097f47bf25108d2d14f200cb506dec1cc,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724089583620988219,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metr
ics-server-8988944d9-xkj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cb192e0-5048-46b0-b74e-86ad5e4d39ea,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07d88fd518a67130b2f237c0f5c0e12a105bb8b22bb8cf868165a3ab5c86352d,PodSandboxId:cd735c6b862bd1c732fb68674d6cd41ed8978073a26332dc9a9d99ed0d624e6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724089579484877734,Labels
:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8349a726-cf5d-472f-aec7-5dc582e1d9db,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f50bdc241404bd0f8b1a2869be786334bc01a4037c0b7eb743716d47d703a708,PodSandboxId:cd735c6b862bd1c732fb68674d6cd41ed8978073a26332dc9a9d99ed0d624e6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724089547450120897,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8349a726-cf5d-472f-aec7-5dc582e1d9db,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:817e8e7f4f6f81e0aa32cdefd4ad54c86e041eae7b0332ae4b220e0e9677f3d1,PodSandboxId:b9ef181e4bc2fb5d786c49ce9f20820cd5cd872892e88631e663df6d9859b1e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724089545288887438,Labels:map[string]string{io.kubernetes.cont
ainer.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-tljrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c9217a4-6879-4b7e-a6b5-78dfa1b85ee4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dd9c82747258473cc2ae88bf2e75164e2fbd3d2a2a5328ce9d086eb5cb4b4f2,PodSandboxId:ce7c12bfd63c75fe8a79e1405a6266f4a0c6d99e7c466ad7b948ae213dd82f9c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifi
edImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724089543193544008,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-72dbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a50d76ee-c7cb-4141-9bc3-2b530cb531e3,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b291bb577fbcf3d0301dc0987ddfdd1f8dfb5a7f0993ecb1d0ef1f47343437cd,PodSandboxId:27479f4e0074355c624d9156652214795ce8aca3f7a41bfd5ec5bbc60b915d11,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724089529692719593,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-347256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e13dfe28931d8226b47ef9afcba9b2fd,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14ae317fb30359ba2162a6f9a30ed7046710d5c6d66c44e99c36726de0db7be6,PodSandboxId:3668e32c15f5ec68dd6cda9b7bd2395a2ebc5b147a52395661b1b3eca5df2e5b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897
f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724089529683588593,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-347256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7b56cf9e890da5718f47889ffe5ca70,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dd9c53de4477b97df49b744ad39714d3fcc1e7ae85e213bdde3870d7bcc820d,PodSandboxId:a096a9ccc820f95f0a8edcb6822851979b2988c8a35ffa8a7ed8f86d94e49f26,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b158
06c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724089529733666189,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-347256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0045d8f687a843989cf184298e3dad56,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a11a191406026de18aba4209c1728adf2b7209255477f04b146aedbeda0efda6,PodSandboxId:ab5cb0c3935f4f7e1b3abd77178d77768afe8585fb1730e95646bb8797e29e76,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2
ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724089529643074083,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-347256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0aab542bc468b2cc945d8e0cb0ebc09,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a889961b-bec7-4dd9-b1f8-6f95021d0d76 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:53:09 addons-347256 crio[688]: time="2024-08-19 17:53:09.024603558Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4fc76c26-c7df-4d43-b893-7b00cd088c78 name=/runtime.v1.RuntimeService/Version
	Aug 19 17:53:09 addons-347256 crio[688]: time="2024-08-19 17:53:09.024697876Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4fc76c26-c7df-4d43-b893-7b00cd088c78 name=/runtime.v1.RuntimeService/Version
	Aug 19 17:53:09 addons-347256 crio[688]: time="2024-08-19 17:53:09.026006080Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=685a24e5-d4a0-4707-8680-909fbef37880 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:53:09 addons-347256 crio[688]: time="2024-08-19 17:53:09.027263562Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089989027233779,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=685a24e5-d4a0-4707-8680-909fbef37880 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:53:09 addons-347256 crio[688]: time="2024-08-19 17:53:09.027910050Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4046d130-fe8b-48f0-94ce-eaf9200d3fee name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:53:09 addons-347256 crio[688]: time="2024-08-19 17:53:09.027985247Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4046d130-fe8b-48f0-94ce-eaf9200d3fee name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:53:09 addons-347256 crio[688]: time="2024-08-19 17:53:09.028282706Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8476ea9b84c5677585c184c2a19989966218130ebd87014f16b5d47b610a7bf8,PodSandboxId:3779df5f1e98a011b2ea48e7cbe08ffddc7b5d281e7a8764c73a15dc3a6f7517,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724089821391952812,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-8qm2m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5af6036b-6c99-4583-8178-c1691586b4ac,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e7d5836b553936a23f393970ad8deefd6feed3820d8bde258550dfc206c2757,PodSandboxId:2d699655663d41aaf47ea8ea106b572637b60af2d425235bb90695e5764e6820,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724089681587967061,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9632e6a7-a0a4-4456-ab6f-c0eab065596d,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0d586f0f8bb9c527dd638f71ad853fe948c4a48d287293667d737a19e672a35,PodSandboxId:08bede11788c378bfb268308132a6da06e96de1be20084257a399f82227c9f67,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724089633212039725,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 718e1c4f-05f0-49bc-b
fc2-7dda02db3d8e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8d158637619c277ef9f9387b183af1877f4383af2f22aaddac3e7716ede8a08,PodSandboxId:3eaf4c04776d5d54255f718eb3e644edd9c7a10fe8ba78ea9e2654d6d547990b,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1724089592959813692,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-dtqhx,io.kubernetes.pod.name
space: local-path-storage,io.kubernetes.pod.uid: eb249463-f4d8-4b25-812f-c1e2f481cffd,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0ca4eb985ce7b35dd9be14e75a8600b776dd7af34e703f47903f17d58fe8638,PodSandboxId:6d360799061341a4737e1d4f98f5669097f47bf25108d2d14f200cb506dec1cc,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724089583620988219,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metr
ics-server-8988944d9-xkj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cb192e0-5048-46b0-b74e-86ad5e4d39ea,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07d88fd518a67130b2f237c0f5c0e12a105bb8b22bb8cf868165a3ab5c86352d,PodSandboxId:cd735c6b862bd1c732fb68674d6cd41ed8978073a26332dc9a9d99ed0d624e6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724089579484877734,Labels
:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8349a726-cf5d-472f-aec7-5dc582e1d9db,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f50bdc241404bd0f8b1a2869be786334bc01a4037c0b7eb743716d47d703a708,PodSandboxId:cd735c6b862bd1c732fb68674d6cd41ed8978073a26332dc9a9d99ed0d624e6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724089547450120897,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8349a726-cf5d-472f-aec7-5dc582e1d9db,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:817e8e7f4f6f81e0aa32cdefd4ad54c86e041eae7b0332ae4b220e0e9677f3d1,PodSandboxId:b9ef181e4bc2fb5d786c49ce9f20820cd5cd872892e88631e663df6d9859b1e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724089545288887438,Labels:map[string]string{io.kubernetes.cont
ainer.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-tljrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c9217a4-6879-4b7e-a6b5-78dfa1b85ee4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dd9c82747258473cc2ae88bf2e75164e2fbd3d2a2a5328ce9d086eb5cb4b4f2,PodSandboxId:ce7c12bfd63c75fe8a79e1405a6266f4a0c6d99e7c466ad7b948ae213dd82f9c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifi
edImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724089543193544008,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-72dbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a50d76ee-c7cb-4141-9bc3-2b530cb531e3,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b291bb577fbcf3d0301dc0987ddfdd1f8dfb5a7f0993ecb1d0ef1f47343437cd,PodSandboxId:27479f4e0074355c624d9156652214795ce8aca3f7a41bfd5ec5bbc60b915d11,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724089529692719593,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-347256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e13dfe28931d8226b47ef9afcba9b2fd,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14ae317fb30359ba2162a6f9a30ed7046710d5c6d66c44e99c36726de0db7be6,PodSandboxId:3668e32c15f5ec68dd6cda9b7bd2395a2ebc5b147a52395661b1b3eca5df2e5b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897
f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724089529683588593,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-347256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7b56cf9e890da5718f47889ffe5ca70,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dd9c53de4477b97df49b744ad39714d3fcc1e7ae85e213bdde3870d7bcc820d,PodSandboxId:a096a9ccc820f95f0a8edcb6822851979b2988c8a35ffa8a7ed8f86d94e49f26,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b158
06c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724089529733666189,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-347256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0045d8f687a843989cf184298e3dad56,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a11a191406026de18aba4209c1728adf2b7209255477f04b146aedbeda0efda6,PodSandboxId:ab5cb0c3935f4f7e1b3abd77178d77768afe8585fb1730e95646bb8797e29e76,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2
ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724089529643074083,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-347256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0aab542bc468b2cc945d8e0cb0ebc09,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4046d130-fe8b-48f0-94ce-eaf9200d3fee name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:53:09 addons-347256 crio[688]: time="2024-08-19 17:53:09.074940244Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e3b8b094-5dd3-4875-a138-ef8c8e1cb84b name=/runtime.v1.RuntimeService/Version
	Aug 19 17:53:09 addons-347256 crio[688]: time="2024-08-19 17:53:09.075155895Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e3b8b094-5dd3-4875-a138-ef8c8e1cb84b name=/runtime.v1.RuntimeService/Version
	Aug 19 17:53:09 addons-347256 crio[688]: time="2024-08-19 17:53:09.079290751Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e011e3b2-abfd-4405-bbf3-a16530717b4e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:53:09 addons-347256 crio[688]: time="2024-08-19 17:53:09.080633890Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089989080604935,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e011e3b2-abfd-4405-bbf3-a16530717b4e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:53:09 addons-347256 crio[688]: time="2024-08-19 17:53:09.081454446Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2b93b996-e9cd-4b80-9dcd-c044a873fe86 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:53:09 addons-347256 crio[688]: time="2024-08-19 17:53:09.081533076Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2b93b996-e9cd-4b80-9dcd-c044a873fe86 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:53:09 addons-347256 crio[688]: time="2024-08-19 17:53:09.081819091Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8476ea9b84c5677585c184c2a19989966218130ebd87014f16b5d47b610a7bf8,PodSandboxId:3779df5f1e98a011b2ea48e7cbe08ffddc7b5d281e7a8764c73a15dc3a6f7517,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724089821391952812,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-8qm2m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5af6036b-6c99-4583-8178-c1691586b4ac,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e7d5836b553936a23f393970ad8deefd6feed3820d8bde258550dfc206c2757,PodSandboxId:2d699655663d41aaf47ea8ea106b572637b60af2d425235bb90695e5764e6820,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724089681587967061,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9632e6a7-a0a4-4456-ab6f-c0eab065596d,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0d586f0f8bb9c527dd638f71ad853fe948c4a48d287293667d737a19e672a35,PodSandboxId:08bede11788c378bfb268308132a6da06e96de1be20084257a399f82227c9f67,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724089633212039725,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 718e1c4f-05f0-49bc-b
fc2-7dda02db3d8e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8d158637619c277ef9f9387b183af1877f4383af2f22aaddac3e7716ede8a08,PodSandboxId:3eaf4c04776d5d54255f718eb3e644edd9c7a10fe8ba78ea9e2654d6d547990b,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1724089592959813692,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-dtqhx,io.kubernetes.pod.name
space: local-path-storage,io.kubernetes.pod.uid: eb249463-f4d8-4b25-812f-c1e2f481cffd,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0ca4eb985ce7b35dd9be14e75a8600b776dd7af34e703f47903f17d58fe8638,PodSandboxId:6d360799061341a4737e1d4f98f5669097f47bf25108d2d14f200cb506dec1cc,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724089583620988219,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metr
ics-server-8988944d9-xkj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cb192e0-5048-46b0-b74e-86ad5e4d39ea,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07d88fd518a67130b2f237c0f5c0e12a105bb8b22bb8cf868165a3ab5c86352d,PodSandboxId:cd735c6b862bd1c732fb68674d6cd41ed8978073a26332dc9a9d99ed0d624e6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724089579484877734,Labels
:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8349a726-cf5d-472f-aec7-5dc582e1d9db,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f50bdc241404bd0f8b1a2869be786334bc01a4037c0b7eb743716d47d703a708,PodSandboxId:cd735c6b862bd1c732fb68674d6cd41ed8978073a26332dc9a9d99ed0d624e6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724089547450120897,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8349a726-cf5d-472f-aec7-5dc582e1d9db,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:817e8e7f4f6f81e0aa32cdefd4ad54c86e041eae7b0332ae4b220e0e9677f3d1,PodSandboxId:b9ef181e4bc2fb5d786c49ce9f20820cd5cd872892e88631e663df6d9859b1e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724089545288887438,Labels:map[string]string{io.kubernetes.cont
ainer.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-tljrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c9217a4-6879-4b7e-a6b5-78dfa1b85ee4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dd9c82747258473cc2ae88bf2e75164e2fbd3d2a2a5328ce9d086eb5cb4b4f2,PodSandboxId:ce7c12bfd63c75fe8a79e1405a6266f4a0c6d99e7c466ad7b948ae213dd82f9c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifi
edImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724089543193544008,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-72dbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a50d76ee-c7cb-4141-9bc3-2b530cb531e3,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b291bb577fbcf3d0301dc0987ddfdd1f8dfb5a7f0993ecb1d0ef1f47343437cd,PodSandboxId:27479f4e0074355c624d9156652214795ce8aca3f7a41bfd5ec5bbc60b915d11,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724089529692719593,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-347256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e13dfe28931d8226b47ef9afcba9b2fd,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14ae317fb30359ba2162a6f9a30ed7046710d5c6d66c44e99c36726de0db7be6,PodSandboxId:3668e32c15f5ec68dd6cda9b7bd2395a2ebc5b147a52395661b1b3eca5df2e5b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897
f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724089529683588593,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-347256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7b56cf9e890da5718f47889ffe5ca70,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dd9c53de4477b97df49b744ad39714d3fcc1e7ae85e213bdde3870d7bcc820d,PodSandboxId:a096a9ccc820f95f0a8edcb6822851979b2988c8a35ffa8a7ed8f86d94e49f26,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b158
06c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724089529733666189,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-347256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0045d8f687a843989cf184298e3dad56,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a11a191406026de18aba4209c1728adf2b7209255477f04b146aedbeda0efda6,PodSandboxId:ab5cb0c3935f4f7e1b3abd77178d77768afe8585fb1730e95646bb8797e29e76,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2
ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724089529643074083,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-347256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0aab542bc468b2cc945d8e0cb0ebc09,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2b93b996-e9cd-4b80-9dcd-c044a873fe86 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:53:09 addons-347256 crio[688]: time="2024-08-19 17:53:09.117114884Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3c22394b-8502-4222-b90b-a737bbf6d76f name=/runtime.v1.RuntimeService/Version
	Aug 19 17:53:09 addons-347256 crio[688]: time="2024-08-19 17:53:09.117211722Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3c22394b-8502-4222-b90b-a737bbf6d76f name=/runtime.v1.RuntimeService/Version
	Aug 19 17:53:09 addons-347256 crio[688]: time="2024-08-19 17:53:09.118524055Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=010f7542-06f1-49d1-a490-6be18f91e8db name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:53:09 addons-347256 crio[688]: time="2024-08-19 17:53:09.119999798Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089989119973970,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=010f7542-06f1-49d1-a490-6be18f91e8db name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:53:09 addons-347256 crio[688]: time="2024-08-19 17:53:09.120659490Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=821f10a9-faf3-4089-b133-e2510a41e053 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:53:09 addons-347256 crio[688]: time="2024-08-19 17:53:09.121037120Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=821f10a9-faf3-4089-b133-e2510a41e053 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:53:09 addons-347256 crio[688]: time="2024-08-19 17:53:09.121785474Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8476ea9b84c5677585c184c2a19989966218130ebd87014f16b5d47b610a7bf8,PodSandboxId:3779df5f1e98a011b2ea48e7cbe08ffddc7b5d281e7a8764c73a15dc3a6f7517,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724089821391952812,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-8qm2m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5af6036b-6c99-4583-8178-c1691586b4ac,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e7d5836b553936a23f393970ad8deefd6feed3820d8bde258550dfc206c2757,PodSandboxId:2d699655663d41aaf47ea8ea106b572637b60af2d425235bb90695e5764e6820,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724089681587967061,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9632e6a7-a0a4-4456-ab6f-c0eab065596d,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0d586f0f8bb9c527dd638f71ad853fe948c4a48d287293667d737a19e672a35,PodSandboxId:08bede11788c378bfb268308132a6da06e96de1be20084257a399f82227c9f67,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724089633212039725,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 718e1c4f-05f0-49bc-b
fc2-7dda02db3d8e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8d158637619c277ef9f9387b183af1877f4383af2f22aaddac3e7716ede8a08,PodSandboxId:3eaf4c04776d5d54255f718eb3e644edd9c7a10fe8ba78ea9e2654d6d547990b,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1724089592959813692,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-dtqhx,io.kubernetes.pod.name
space: local-path-storage,io.kubernetes.pod.uid: eb249463-f4d8-4b25-812f-c1e2f481cffd,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0ca4eb985ce7b35dd9be14e75a8600b776dd7af34e703f47903f17d58fe8638,PodSandboxId:6d360799061341a4737e1d4f98f5669097f47bf25108d2d14f200cb506dec1cc,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724089583620988219,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metr
ics-server-8988944d9-xkj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cb192e0-5048-46b0-b74e-86ad5e4d39ea,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07d88fd518a67130b2f237c0f5c0e12a105bb8b22bb8cf868165a3ab5c86352d,PodSandboxId:cd735c6b862bd1c732fb68674d6cd41ed8978073a26332dc9a9d99ed0d624e6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724089579484877734,Labels
:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8349a726-cf5d-472f-aec7-5dc582e1d9db,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f50bdc241404bd0f8b1a2869be786334bc01a4037c0b7eb743716d47d703a708,PodSandboxId:cd735c6b862bd1c732fb68674d6cd41ed8978073a26332dc9a9d99ed0d624e6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724089547450120897,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8349a726-cf5d-472f-aec7-5dc582e1d9db,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:817e8e7f4f6f81e0aa32cdefd4ad54c86e041eae7b0332ae4b220e0e9677f3d1,PodSandboxId:b9ef181e4bc2fb5d786c49ce9f20820cd5cd872892e88631e663df6d9859b1e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724089545288887438,Labels:map[string]string{io.kubernetes.cont
ainer.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-tljrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c9217a4-6879-4b7e-a6b5-78dfa1b85ee4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dd9c82747258473cc2ae88bf2e75164e2fbd3d2a2a5328ce9d086eb5cb4b4f2,PodSandboxId:ce7c12bfd63c75fe8a79e1405a6266f4a0c6d99e7c466ad7b948ae213dd82f9c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifi
edImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724089543193544008,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-72dbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a50d76ee-c7cb-4141-9bc3-2b530cb531e3,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b291bb577fbcf3d0301dc0987ddfdd1f8dfb5a7f0993ecb1d0ef1f47343437cd,PodSandboxId:27479f4e0074355c624d9156652214795ce8aca3f7a41bfd5ec5bbc60b915d11,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724089529692719593,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-347256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e13dfe28931d8226b47ef9afcba9b2fd,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14ae317fb30359ba2162a6f9a30ed7046710d5c6d66c44e99c36726de0db7be6,PodSandboxId:3668e32c15f5ec68dd6cda9b7bd2395a2ebc5b147a52395661b1b3eca5df2e5b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897
f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724089529683588593,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-347256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7b56cf9e890da5718f47889ffe5ca70,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dd9c53de4477b97df49b744ad39714d3fcc1e7ae85e213bdde3870d7bcc820d,PodSandboxId:a096a9ccc820f95f0a8edcb6822851979b2988c8a35ffa8a7ed8f86d94e49f26,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b158
06c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724089529733666189,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-347256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0045d8f687a843989cf184298e3dad56,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a11a191406026de18aba4209c1728adf2b7209255477f04b146aedbeda0efda6,PodSandboxId:ab5cb0c3935f4f7e1b3abd77178d77768afe8585fb1730e95646bb8797e29e76,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2
ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724089529643074083,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-347256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0aab542bc468b2cc945d8e0cb0ebc09,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=821f10a9-faf3-4089-b133-e2510a41e053 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8476ea9b84c56       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   3779df5f1e98a       hello-world-app-55bf9c44b4-8qm2m
	1e7d5836b5539       docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0                         5 minutes ago       Running             nginx                     0                   2d699655663d4       nginx
	f0d586f0f8bb9       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     5 minutes ago       Running             busybox                   0                   08bede11788c3       busybox
	b8d158637619c       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef        6 minutes ago       Running             local-path-provisioner    0                   3eaf4c04776d5       local-path-provisioner-86d989889c-dtqhx
	e0ca4eb985ce7       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   6 minutes ago       Running             metrics-server            0                   6d36079906134       metrics-server-8988944d9-xkj9p
	07d88fd518a67       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        6 minutes ago       Running             storage-provisioner       1                   cd735c6b862bd       storage-provisioner
	f50bdc241404b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Exited              storage-provisioner       0                   cd735c6b862bd       storage-provisioner
	817e8e7f4f6f8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        7 minutes ago       Running             coredns                   0                   b9ef181e4bc2f       coredns-6f6b679f8f-tljrk
	9dd9c82747258       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                        7 minutes ago       Running             kube-proxy                0                   ce7c12bfd63c7       kube-proxy-72dbf
	5dd9c53de4477       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                        7 minutes ago       Running             kube-controller-manager   0                   a096a9ccc820f       kube-controller-manager-addons-347256
	b291bb577fbcf       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                        7 minutes ago       Running             kube-apiserver            0                   27479f4e00743       kube-apiserver-addons-347256
	14ae317fb3035       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                        7 minutes ago       Running             kube-scheduler            0                   3668e32c15f5e       kube-scheduler-addons-347256
	a11a191406026       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        7 minutes ago       Running             etcd                      0                   ab5cb0c3935f4       etcd-addons-347256
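The runtime-side listing above can usually be regenerated straight from the node. A minimal sketch, assuming the profile name addons-347256 seen in the pod names, CRI-O as the runtime, and a minikube binary on the PATH (all three are assumptions about the local setup, not something this report states):

    # Ask CRI-O on the node for every container, including exited ones.
    minikube -p addons-347256 ssh "sudo crictl ps -a"
    # Inspect a single container via the truncated ID shown in the first column.
    minikube -p addons-347256 ssh "sudo crictl inspect 8476ea9b84c56"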
	
	
	==> coredns [817e8e7f4f6f81e0aa32cdefd4ad54c86e041eae7b0332ae4b220e0e9677f3d1] <==
	[INFO] 10.244.0.7:43484 - 18645 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000301851s
	[INFO] 10.244.0.7:48952 - 36607 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000093621s
	[INFO] 10.244.0.7:48952 - 39906 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000083434s
	[INFO] 10.244.0.7:40146 - 55082 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000072444s
	[INFO] 10.244.0.7:40146 - 29992 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000073758s
	[INFO] 10.244.0.7:36128 - 19863 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000110019s
	[INFO] 10.244.0.7:36128 - 17809 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000125615s
	[INFO] 10.244.0.7:43453 - 51962 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000083457s
	[INFO] 10.244.0.7:43453 - 28152 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000062594s
	[INFO] 10.244.0.7:50170 - 32131 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000091098s
	[INFO] 10.244.0.7:50170 - 46214 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000107393s
	[INFO] 10.244.0.7:47291 - 64507 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000034975s
	[INFO] 10.244.0.7:47291 - 2554 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000130142s
	[INFO] 10.244.0.7:37877 - 27398 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000071341s
	[INFO] 10.244.0.7:37877 - 27904 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000136128s
	[INFO] 10.244.0.22:50041 - 46218 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000339654s
	[INFO] 10.244.0.22:56671 - 22756 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00027197s
	[INFO] 10.244.0.22:51830 - 41490 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000346503s
	[INFO] 10.244.0.22:37631 - 51261 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000138901s
	[INFO] 10.244.0.22:39029 - 38277 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000200513s
	[INFO] 10.244.0.22:58825 - 34536 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000117903s
	[INFO] 10.244.0.22:35318 - 16680 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.001008787s
	[INFO] 10.244.0.22:37761 - 15849 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000848032s
	[INFO] 10.244.0.26:41999 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000777692s
	[INFO] 10.244.0.26:44907 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000201588s
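The NXDOMAIN lines above are ordinary search-path expansion: CoreDNS answers NXDOMAIN for each appended search domain before the final NOERROR record, so they are not failures by themselves. A minimal sketch for pulling the same log through the API server, assuming the CoreDNS pods still carry the default kubeadm label k8s-app=kube-dns:

    # Tail the CoreDNS logs in kube-system for the addons-347256 cluster.
    kubectl --context addons-347256 -n kube-system logs -l k8s-app=kube-dns --tail=30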
	
	
	==> describe nodes <==
	Name:               addons-347256
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-347256
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411
	                    minikube.k8s.io/name=addons-347256
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T17_45_35_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-347256
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 17:45:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-347256
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 17:53:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 17:50:41 +0000   Mon, 19 Aug 2024 17:45:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 17:50:41 +0000   Mon, 19 Aug 2024 17:45:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 17:50:41 +0000   Mon, 19 Aug 2024 17:45:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 17:50:41 +0000   Mon, 19 Aug 2024 17:45:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.18
	  Hostname:    addons-347256
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 35472ba804c549e2b72c7b2d4f9a9d4d
	  System UUID:                35472ba8-04c5-49e2-b72c-7b2d4f9a9d4d
	  Boot ID:                    6579c4ae-0068-42a1-8c4f-735c1b3576dd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	  default                     hello-world-app-55bf9c44b4-8qm2m           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m51s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 coredns-6f6b679f8f-tljrk                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m29s
	  kube-system                 etcd-addons-347256                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m35s
	  kube-system                 kube-apiserver-addons-347256               250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m35s
	  kube-system                 kube-controller-manager-addons-347256      200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m35s
	  kube-system                 kube-proxy-72dbf                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m29s
	  kube-system                 kube-scheduler-addons-347256               100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m35s
	  kube-system                 metrics-server-8988944d9-xkj9p             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         7m24s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m24s
	  local-path-storage          local-path-provisioner-86d989889c-dtqhx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m20s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  7m40s (x8 over 7m40s)  kubelet          Node addons-347256 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m40s (x8 over 7m40s)  kubelet          Node addons-347256 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m40s (x7 over 7m40s)  kubelet          Node addons-347256 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m35s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m34s                  kubelet          Node addons-347256 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m34s                  kubelet          Node addons-347256 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m34s                  kubelet          Node addons-347256 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m33s                  kubelet          Node addons-347256 status is now: NodeReady
	  Normal  RegisteredNode           7m30s                  node-controller  Node addons-347256 event: Registered Node addons-347256 in Controller
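The node summary above is a standard node dump. A minimal sketch for regenerating it and cross-checking the pod table, assuming kubectl has a context named addons-347256 for this cluster:

    # Full node description: labels, conditions, capacity, allocated resources, events.
    kubectl --context addons-347256 describe node addons-347256
    # Wide pod listing across all namespaces, for comparison with the Non-terminated Pods table.
    kubectl --context addons-347256 get pods -A -o wide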
	
	
	==> dmesg <==
	[  +6.434381] kauditd_printk_skb: 54 callbacks suppressed
	[Aug19 17:46] kauditd_printk_skb: 5 callbacks suppressed
	[ +19.176114] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.782463] kauditd_printk_skb: 6 callbacks suppressed
	[ +11.926528] kauditd_printk_skb: 44 callbacks suppressed
	[  +5.205048] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.026315] kauditd_printk_skb: 60 callbacks suppressed
	[  +5.395662] kauditd_printk_skb: 17 callbacks suppressed
	[Aug19 17:47] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.151355] kauditd_printk_skb: 44 callbacks suppressed
	[ +13.843201] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.923942] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.192477] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.005470] kauditd_printk_skb: 55 callbacks suppressed
	[  +5.126435] kauditd_printk_skb: 44 callbacks suppressed
	[  +6.278976] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.594347] kauditd_printk_skb: 15 callbacks suppressed
	[Aug19 17:48] kauditd_printk_skb: 23 callbacks suppressed
	[  +8.389893] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.296726] kauditd_printk_skb: 11 callbacks suppressed
	[  +8.750720] kauditd_printk_skb: 7 callbacks suppressed
	[ +14.570377] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.304267] kauditd_printk_skb: 33 callbacks suppressed
	[Aug19 17:50] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.228384] kauditd_printk_skb: 21 callbacks suppressed
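The repeated kauditd_printk_skb entries above are kernel audit-log rate limiting, not errors in their own right. A minimal sketch for reading the same ring buffer directly, assuming SSH access to the node through the minikube binary:

    # Print the tail of the kernel ring buffer from inside the VM.
    minikube -p addons-347256 ssh "sudo dmesg | tail -n 40"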
	
	
	==> etcd [a11a191406026de18aba4209c1728adf2b7209255477f04b146aedbeda0efda6] <==
	{"level":"warn","ts":"2024-08-19T17:47:00.708904Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T17:47:00.377462Z","time spent":"331.432132ms","remote":"127.0.0.1:53006","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1136,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2024-08-19T17:47:00.714434Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T17:47:00.240889Z","time spent":"466.531973ms","remote":"127.0.0.1:53094","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":482,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1120 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:419 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"info","ts":"2024-08-19T17:47:39.134963Z","caller":"traceutil/trace.go:171","msg":"trace[1961973197] linearizableReadLoop","detail":"{readStateIndex:1437; appliedIndex:1436; }","duration":"162.911752ms","start":"2024-08-19T17:47:38.972027Z","end":"2024-08-19T17:47:39.134938Z","steps":["trace[1961973197] 'read index received'  (duration: 162.798404ms)","trace[1961973197] 'applied index is now lower than readState.Index'  (duration: 112.865µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T17:47:39.135173Z","caller":"traceutil/trace.go:171","msg":"trace[1694973947] transaction","detail":"{read_only:false; response_revision:1393; number_of_response:1; }","duration":"195.398548ms","start":"2024-08-19T17:47:38.939763Z","end":"2024-08-19T17:47:39.135162Z","steps":["trace[1694973947] 'process raft request'  (duration: 195.099287ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:47:39.135355Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.642251ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T17:47:39.135456Z","caller":"traceutil/trace.go:171","msg":"trace[1782506604] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1393; }","duration":"126.813933ms","start":"2024-08-19T17:47:39.008632Z","end":"2024-08-19T17:47:39.135446Z","steps":["trace[1782506604] 'agreement among raft nodes before linearized reading'  (duration: 126.597125ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:47:39.135630Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"163.654065ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-08-19T17:47:39.135678Z","caller":"traceutil/trace.go:171","msg":"trace[1312877723] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1393; }","duration":"163.717625ms","start":"2024-08-19T17:47:38.971951Z","end":"2024-08-19T17:47:39.135668Z","steps":["trace[1312877723] 'agreement among raft nodes before linearized reading'  (duration: 163.612739ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:47:39.136068Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.798609ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses/\" range_end:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-19T17:47:39.137935Z","caller":"traceutil/trace.go:171","msg":"trace[1064821462] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshotclasses/; range_end:/registry/snapshot.storage.k8s.io/volumesnapshotclasses0; response_count:0; response_revision:1393; }","duration":"117.666562ms","start":"2024-08-19T17:47:39.020253Z","end":"2024-08-19T17:47:39.137919Z","steps":["trace[1064821462] 'agreement among raft nodes before linearized reading'  (duration: 115.780855ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:47:39.138076Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.323354ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc\" ","response":"range_response_count:1 size:822"}
	{"level":"info","ts":"2024-08-19T17:47:39.138134Z","caller":"traceutil/trace.go:171","msg":"trace[1843115512] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc; range_end:; response_count:1; response_revision:1393; }","duration":"124.377661ms","start":"2024-08-19T17:47:39.013740Z","end":"2024-08-19T17:47:39.138118Z","steps":["trace[1843115512] 'agreement among raft nodes before linearized reading'  (duration: 124.260806ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:48:01.461905Z","caller":"traceutil/trace.go:171","msg":"trace[11828694] transaction","detail":"{read_only:false; response_revision:1566; number_of_response:1; }","duration":"144.808741ms","start":"2024-08-19T17:48:01.317070Z","end":"2024-08-19T17:48:01.461879Z","steps":["trace[11828694] 'process raft request'  (duration: 144.718123ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:48:09.629024Z","caller":"traceutil/trace.go:171","msg":"trace[1950804823] linearizableReadLoop","detail":"{readStateIndex:1689; appliedIndex:1688; }","duration":"131.807394ms","start":"2024-08-19T17:48:09.497201Z","end":"2024-08-19T17:48:09.629009Z","steps":["trace[1950804823] 'read index received'  (duration: 70.793225ms)","trace[1950804823] 'applied index is now lower than readState.Index'  (duration: 61.012927ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T17:48:09.629696Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.303298ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-08-19T17:48:09.629940Z","caller":"traceutil/trace.go:171","msg":"trace[1035633527] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1632; }","duration":"132.726889ms","start":"2024-08-19T17:48:09.497197Z","end":"2024-08-19T17:48:09.629924Z","steps":["trace[1035633527] 'agreement among raft nodes before linearized reading'  (duration: 132.022215ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:48:17.222506Z","caller":"traceutil/trace.go:171","msg":"trace[146959868] transaction","detail":"{read_only:false; response_revision:1655; number_of_response:1; }","duration":"430.624283ms","start":"2024-08-19T17:48:16.791861Z","end":"2024-08-19T17:48:17.222485Z","steps":["trace[146959868] 'process raft request'  (duration: 430.431459ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:48:17.222785Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T17:48:16.791841Z","time spent":"430.794325ms","remote":"127.0.0.1:53094","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":539,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1648 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:452 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"info","ts":"2024-08-19T17:48:17.223157Z","caller":"traceutil/trace.go:171","msg":"trace[784703065] linearizableReadLoop","detail":"{readStateIndex:1713; appliedIndex:1712; }","duration":"214.749179ms","start":"2024-08-19T17:48:17.008400Z","end":"2024-08-19T17:48:17.223149Z","steps":["trace[784703065] 'read index received'  (duration: 213.830506ms)","trace[784703065] 'applied index is now lower than readState.Index'  (duration: 917.789µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T17:48:17.244061Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"235.641873ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T17:48:17.244127Z","caller":"traceutil/trace.go:171","msg":"trace[1618827778] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1655; }","duration":"235.721741ms","start":"2024-08-19T17:48:17.008392Z","end":"2024-08-19T17:48:17.244114Z","steps":["trace[1618827778] 'agreement among raft nodes before linearized reading'  (duration: 215.718599ms)","trace[1618827778] 'range keys from in-memory index tree'  (duration: 19.90863ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T17:48:17.244453Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"197.181071ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T17:48:17.244478Z","caller":"traceutil/trace.go:171","msg":"trace[1829104736] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1655; }","duration":"197.211814ms","start":"2024-08-19T17:48:17.047258Z","end":"2024-08-19T17:48:17.244470Z","steps":["trace[1829104736] 'agreement among raft nodes before linearized reading'  (duration: 176.882024ms)","trace[1829104736] 'range keys from in-memory index tree'  (duration: 20.285588ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T17:48:17.245573Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.113069ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T17:48:17.245655Z","caller":"traceutil/trace.go:171","msg":"trace[1580977739] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1655; }","duration":"118.201402ms","start":"2024-08-19T17:48:17.127445Z","end":"2024-08-19T17:48:17.245646Z","steps":["trace[1580977739] 'agreement among raft nodes before linearized reading'  (duration: 96.703234ms)","trace[1580977739] 'range keys from in-memory index tree'  (duration: 21.391611ms)"],"step_count":2}
	
	
	==> kernel <==
	 17:53:09 up 8 min,  0 users,  load average: 0.26, 0.77, 0.55
	Linux addons-347256 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b291bb577fbcf3d0301dc0987ddfdd1f8dfb5a7f0993ecb1d0ef1f47343437cd] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0819 17:47:25.898229       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.167.193:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.167.193:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.167.193:443: connect: connection refused" logger="UnhandledError"
	E0819 17:47:25.905201       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.167.193:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.167.193:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.167.193:443: connect: connection refused" logger="UnhandledError"
	I0819 17:47:25.971666       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0819 17:47:52.009142       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0819 17:47:53.042030       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0819 17:47:57.028448       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0819 17:47:57.236173       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.225.232"}
	I0819 17:48:04.952667       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.100.254.220"}
	I0819 17:48:25.195432       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0819 17:48:48.987259       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 17:48:48.991505       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 17:48:49.028010       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 17:48:49.028083       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 17:48:49.081406       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 17:48:49.081566       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 17:48:49.128525       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 17:48:49.128672       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 17:48:49.156622       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 17:48:49.156672       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0819 17:48:50.128687       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0819 17:48:50.157431       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0819 17:48:50.243303       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0819 17:50:18.500584       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.72.73"}
	
	
	==> kube-controller-manager [5dd9c53de4477b97df49b744ad39714d3fcc1e7ae85e213bdde3870d7bcc820d] <==
	W0819 17:50:52.070136       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 17:50:52.070171       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 17:50:59.763524       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 17:50:59.763588       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 17:51:13.843538       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 17:51:13.843705       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 17:51:23.734245       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 17:51:23.734295       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 17:51:36.070128       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 17:51:36.070408       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 17:51:48.107948       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 17:51:48.108183       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 17:52:04.578640       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 17:52:04.578755       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 17:52:21.190180       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 17:52:21.190240       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 17:52:29.391683       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 17:52:29.391837       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 17:52:32.395613       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 17:52:32.395725       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 17:52:55.150610       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 17:52:55.150656       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 17:52:57.046383       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 17:52:57.046512       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0819 17:53:08.102797       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-8988944d9" duration="12.494µs"
	
	
	==> kube-proxy [9dd9c82747258473cc2ae88bf2e75164e2fbd3d2a2a5328ce9d086eb5cb4b4f2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 17:45:48.713427       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 17:45:48.772286       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.18"]
	E0819 17:45:48.773016       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 17:45:48.931067       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 17:45:48.931190       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 17:45:48.931217       1 server_linux.go:169] "Using iptables Proxier"
	I0819 17:45:48.935507       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 17:45:48.935737       1 server.go:483] "Version info" version="v1.31.0"
	I0819 17:45:48.935766       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 17:45:49.011363       1 config.go:197] "Starting service config controller"
	I0819 17:45:49.013605       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 17:45:49.013796       1 config.go:104] "Starting endpoint slice config controller"
	I0819 17:45:49.013804       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 17:45:49.022834       1 config.go:326] "Starting node config controller"
	I0819 17:45:49.022867       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 17:45:49.113787       1 shared_informer.go:320] Caches are synced for service config
	I0819 17:45:49.113873       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 17:45:49.124284       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [14ae317fb30359ba2162a6f9a30ed7046710d5c6d66c44e99c36726de0db7be6] <==
	W0819 17:45:32.371521       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 17:45:32.371539       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:45:32.371594       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 17:45:32.371631       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 17:45:32.371753       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 17:45:32.371783       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:45:32.371880       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0819 17:45:32.371908       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:45:32.371965       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 17:45:32.371993       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 17:45:32.372462       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 17:45:32.372551       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 17:45:33.208177       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 17:45:33.208249       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 17:45:33.283557       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 17:45:33.283814       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 17:45:33.377512       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0819 17:45:33.377562       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 17:45:33.389127       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 17:45:33.389259       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 17:45:33.550958       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0819 17:45:33.551012       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:45:33.815203       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 17:45:33.815368       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0819 17:45:36.659223       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
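Each of the component sections above is the tail of one container's log, keyed by the container ID in its header. A minimal sketch for re-reading any of them on the node, assuming CRI-O and using the kube-scheduler ID from the container listing as the example:

    # Tail a specific container's log; crictl accepts a truncated container ID.
    minikube -p addons-347256 ssh "sudo crictl logs --tail=50 14ae317fb3035"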
	
	
	==> kubelet <==
	Aug 19 17:52:25 addons-347256 kubelet[1228]: E0819 17:52:25.602450    1228 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089945601955961,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:52:34 addons-347256 kubelet[1228]: E0819 17:52:34.881958    1228 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 17:52:34 addons-347256 kubelet[1228]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 17:52:34 addons-347256 kubelet[1228]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 17:52:34 addons-347256 kubelet[1228]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 17:52:34 addons-347256 kubelet[1228]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 17:52:35 addons-347256 kubelet[1228]: E0819 17:52:35.604857    1228 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089955604473031,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:52:35 addons-347256 kubelet[1228]: E0819 17:52:35.604898    1228 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089955604473031,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:52:45 addons-347256 kubelet[1228]: E0819 17:52:45.607597    1228 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089965607155533,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:52:45 addons-347256 kubelet[1228]: E0819 17:52:45.607647    1228 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089965607155533,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:52:55 addons-347256 kubelet[1228]: E0819 17:52:55.610548    1228 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089975610072931,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:52:55 addons-347256 kubelet[1228]: E0819 17:52:55.610881    1228 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089975610072931,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:53:05 addons-347256 kubelet[1228]: E0819 17:53:05.613941    1228 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089985613503480,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:53:05 addons-347256 kubelet[1228]: E0819 17:53:05.613981    1228 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089985613503480,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:53:08 addons-347256 kubelet[1228]: I0819 17:53:08.132203    1228 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-8qm2m" podStartSLOduration=167.63755747 podStartE2EDuration="2m50.132157901s" podCreationTimestamp="2024-08-19 17:50:18 +0000 UTC" firstStartedPulling="2024-08-19 17:50:18.882914744 +0000 UTC m=+284.146121996" lastFinishedPulling="2024-08-19 17:50:21.377515174 +0000 UTC m=+286.640722427" observedRunningTime="2024-08-19 17:50:22.028958047 +0000 UTC m=+287.292165318" watchObservedRunningTime="2024-08-19 17:53:08.132157901 +0000 UTC m=+453.395365166"
	Aug 19 17:53:09 addons-347256 kubelet[1228]: I0819 17:53:09.538415    1228 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/2cb192e0-5048-46b0-b74e-86ad5e4d39ea-tmp-dir\") pod \"2cb192e0-5048-46b0-b74e-86ad5e4d39ea\" (UID: \"2cb192e0-5048-46b0-b74e-86ad5e4d39ea\") "
	Aug 19 17:53:09 addons-347256 kubelet[1228]: I0819 17:53:09.539273    1228 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cb192e0-5048-46b0-b74e-86ad5e4d39ea-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "2cb192e0-5048-46b0-b74e-86ad5e4d39ea" (UID: "2cb192e0-5048-46b0-b74e-86ad5e4d39ea"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Aug 19 17:53:09 addons-347256 kubelet[1228]: I0819 17:53:09.539298    1228 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnbjh\" (UniqueName: \"kubernetes.io/projected/2cb192e0-5048-46b0-b74e-86ad5e4d39ea-kube-api-access-dnbjh\") pod \"2cb192e0-5048-46b0-b74e-86ad5e4d39ea\" (UID: \"2cb192e0-5048-46b0-b74e-86ad5e4d39ea\") "
	Aug 19 17:53:09 addons-347256 kubelet[1228]: I0819 17:53:09.539476    1228 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/2cb192e0-5048-46b0-b74e-86ad5e4d39ea-tmp-dir\") on node \"addons-347256\" DevicePath \"\""
	Aug 19 17:53:09 addons-347256 kubelet[1228]: I0819 17:53:09.551542    1228 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cb192e0-5048-46b0-b74e-86ad5e4d39ea-kube-api-access-dnbjh" (OuterVolumeSpecName: "kube-api-access-dnbjh") pod "2cb192e0-5048-46b0-b74e-86ad5e4d39ea" (UID: "2cb192e0-5048-46b0-b74e-86ad5e4d39ea"). InnerVolumeSpecName "kube-api-access-dnbjh". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 19 17:53:09 addons-347256 kubelet[1228]: I0819 17:53:09.640122    1228 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-dnbjh\" (UniqueName: \"kubernetes.io/projected/2cb192e0-5048-46b0-b74e-86ad5e4d39ea-kube-api-access-dnbjh\") on node \"addons-347256\" DevicePath \"\""
	Aug 19 17:53:09 addons-347256 kubelet[1228]: I0819 17:53:09.684144    1228 scope.go:117] "RemoveContainer" containerID="e0ca4eb985ce7b35dd9be14e75a8600b776dd7af34e703f47903f17d58fe8638"
	Aug 19 17:53:09 addons-347256 kubelet[1228]: I0819 17:53:09.731200    1228 scope.go:117] "RemoveContainer" containerID="e0ca4eb985ce7b35dd9be14e75a8600b776dd7af34e703f47903f17d58fe8638"
	Aug 19 17:53:09 addons-347256 kubelet[1228]: E0819 17:53:09.731840    1228 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0ca4eb985ce7b35dd9be14e75a8600b776dd7af34e703f47903f17d58fe8638\": container with ID starting with e0ca4eb985ce7b35dd9be14e75a8600b776dd7af34e703f47903f17d58fe8638 not found: ID does not exist" containerID="e0ca4eb985ce7b35dd9be14e75a8600b776dd7af34e703f47903f17d58fe8638"
	Aug 19 17:53:09 addons-347256 kubelet[1228]: I0819 17:53:09.731884    1228 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0ca4eb985ce7b35dd9be14e75a8600b776dd7af34e703f47903f17d58fe8638"} err="failed to get container status \"e0ca4eb985ce7b35dd9be14e75a8600b776dd7af34e703f47903f17d58fe8638\": rpc error: code = NotFound desc = could not find container \"e0ca4eb985ce7b35dd9be14e75a8600b776dd7af34e703f47903f17d58fe8638\": container with ID starting with e0ca4eb985ce7b35dd9be14e75a8600b776dd7af34e703f47903f17d58fe8638 not found: ID does not exist"
	
	
	==> storage-provisioner [07d88fd518a67130b2f237c0f5c0e12a105bb8b22bb8cf868165a3ab5c86352d] <==
	I0819 17:46:19.640054       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 17:46:19.653780       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 17:46:19.653845       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 17:46:19.668042       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 17:46:19.668211       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-347256_f07fd665-8ebe-45c8-941a-b2b23a7a38b0!
	I0819 17:46:19.673594       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b5e7af78-e96a-4168-93cd-a759afdeb66d", APIVersion:"v1", ResourceVersion:"940", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-347256_f07fd665-8ebe-45c8-941a-b2b23a7a38b0 became leader
	I0819 17:46:19.768444       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-347256_f07fd665-8ebe-45c8-941a-b2b23a7a38b0!
	
	
	==> storage-provisioner [f50bdc241404bd0f8b1a2869be786334bc01a4037c0b7eb743716d47d703a708] <==
	I0819 17:45:49.262885       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0819 17:46:19.266603       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-347256 -n addons-347256
helpers_test.go:261: (dbg) Run:  kubectl --context addons-347256 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (329.30s)
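The kubelet log above keeps reporting "failed to get HasDedicatedImageFs: missing image stats" because the ImageFsInfoResponse coming back from CRI-O lists an image filesystem but an empty ContainerFilesystems slice, and the second storage-provisioner block shows its predecessor timing out against the in-cluster API server (10.96.0.1:443) before the lease was re-acquired. A minimal sketch for checking both conditions by hand from the host, assuming the addons-347256 profile is still up and curl is available inside the guest (standard crictl/curl usage, not part of the test suite):

	# Dump the same CRI image filesystem stats the eviction manager asks for
	out/minikube-linux-amd64 -p addons-347256 ssh "sudo crictl imagefsinfo"
	# Probe the kubernetes service VIP that the first storage-provisioner could not reach
	out/minikube-linux-amd64 -p addons-347256 ssh "curl -sk --max-time 10 https://10.96.0.1:443/version"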

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.4s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-347256
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-347256: exit status 82 (2m0.526064652s)

                                                
                                                
-- stdout --
	* Stopping node "addons-347256"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-347256" : exit status 82
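Exit status 82 above means minikube gave up after the guest stayed in the "Running" state for the entire stop timeout. When that happens with the kvm2 driver, the domain can be inspected and powered off out-of-band; a rough sketch, assuming the libvirt domain is named after the profile as this driver normally arranges (virsh is standard libvirt tooling and is not invoked by the test):

	# See what libvirt thinks the domain is doing
	sudo virsh domstate addons-347256
	# Hard power-off if a graceful shutdown never completes (unsynced guest data may be lost)
	sudo virsh destroy addons-347256
	# Re-run the stop so minikube's own state files catch up
	out/minikube-linux-amd64 stop -p addons-347256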
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-347256
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-347256: exit status 11 (21.591088116s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.18:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-347256" : exit status 11
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-347256
addons_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-347256: exit status 11 (6.141745163s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.18:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:184: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-347256" : exit status 11
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-347256
addons_test.go:187: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-347256: exit status 11 (6.14509278s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.18:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:189: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-347256" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.40s)
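Each addon command after the failed stop exits with status 11 for the same underlying reason: the paused-runtime check tries to SSH to 192.168.39.18:22 and gets "no route to host", so the guest that the stop left reported as "Running" is in fact unreachable. A quick connectivity sketch, assuming nc is available on the host (plain minikube/netcat usage, not part of the test):

	# Ask minikube what it believes the profile's state is
	out/minikube-linux-amd64 status -p addons-347256
	# Probe the guest's SSH port directly, bypassing minikube's client
	nc -vz -w 5 192.168.39.18 22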

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (2.09s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-499773 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:833: etcd is not Ready: {Phase:Running Conditions:[{Type:PodReadyToStartContainers Status:True} {Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:True} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.39.36 PodIP:192.168.39.36 StartTime:2024-08-19 17:59:05 +0000 UTC ContainerStatuses:[{Name:etcd State:{Waiting:<nil> Running:0xc000a7d458 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:0xc0006d40e0} Ready:true RestartCount:3 Image:registry.k8s.io/etcd:3.5.15-0 ImageID:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4 ContainerID:cri-o://cebbd9bcd1a46bb674b25e582ffcff78db0498b863ea2dd9ab4f1211a7f60c7b}]}
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:833: kube-apiserver is not Ready: {Phase:Running Conditions:[{Type:PodReadyToStartContainers Status:True} {Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:False} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.39.36 PodIP:192.168.39.36 StartTime:2024-08-19 18:00:13 +0000 UTC ContainerStatuses:[{Name:kube-apiserver State:{Waiting:<nil> Running:0xc000a7d4b8 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:registry.k8s.io/kube-apiserver:v1.31.0 ImageID:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3 ContainerID:cri-o://f68033720a9433d5115cdf21b78946b310d73cf279b9a21b28e2b83835ad8760}]}
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:833: kube-controller-manager is not Ready: {Phase:Running Conditions:[{Type:PodReadyToStartContainers Status:True} {Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:True} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.39.36 PodIP:192.168.39.36 StartTime:2024-08-19 17:59:05 +0000 UTC ContainerStatuses:[{Name:kube-controller-manager State:{Waiting:<nil> Running:0xc000a7d518 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:0xc0006d41c0} Ready:true RestartCount:3 Image:registry.k8s.io/kube-controller-manager:v1.31.0 ImageID:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1 ContainerID:cri-o://b63e3e21994af74fa7e9a484ca3fcabe5b950afc830348a99f19c0c40f4fd60f}]}
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:833: kube-scheduler is not Ready: {Phase:Running Conditions:[{Type:PodReadyToStartContainers Status:True} {Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:True} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.39.36 PodIP:192.168.39.36 StartTime:2024-08-19 17:59:05 +0000 UTC ContainerStatuses:[{Name:kube-scheduler State:{Waiting:<nil> Running:0xc000a7d578 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:0xc0006d4230} Ready:true RestartCount:3 Image:registry.k8s.io/kube-scheduler:v1.31.0 ImageID:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94 ContainerID:cri-o://e9644334b65041e7a4cc7d15012ec3a463bcad8696d138c2daeecadf6fd2ac80}]}
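The check above only passes when every control-plane pod's Ready condition is True; here all four pods are Phase=Running but Ready=False shortly after the 18:00 restart, which is enough to fail the assertion even though most of the containers report ready. A hedged way to reproduce the same view with plain kubectl (the jsonpath expression is illustrative, not the query the test itself runs):

	kubectl --context functional-499773 -n kube-system get po -l tier=control-plane \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'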
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-499773 -n functional-499773
helpers_test.go:244: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-499773 logs -n 25: (1.409674353s)
helpers_test.go:252: TestFunctional/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| unpause | nospam-744800 --log_dir                                                  | nospam-744800     | jenkins | v1.33.1 | 19 Aug 24 17:56 UTC | 19 Aug 24 17:56 UTC |
	|         | /tmp/nospam-744800 unpause                                               |                   |         |         |                     |                     |
	| unpause | nospam-744800 --log_dir                                                  | nospam-744800     | jenkins | v1.33.1 | 19 Aug 24 17:56 UTC | 19 Aug 24 17:56 UTC |
	|         | /tmp/nospam-744800 unpause                                               |                   |         |         |                     |                     |
	| unpause | nospam-744800 --log_dir                                                  | nospam-744800     | jenkins | v1.33.1 | 19 Aug 24 17:56 UTC | 19 Aug 24 17:56 UTC |
	|         | /tmp/nospam-744800 unpause                                               |                   |         |         |                     |                     |
	| stop    | nospam-744800 --log_dir                                                  | nospam-744800     | jenkins | v1.33.1 | 19 Aug 24 17:56 UTC | 19 Aug 24 17:56 UTC |
	|         | /tmp/nospam-744800 stop                                                  |                   |         |         |                     |                     |
	| stop    | nospam-744800 --log_dir                                                  | nospam-744800     | jenkins | v1.33.1 | 19 Aug 24 17:56 UTC | 19 Aug 24 17:56 UTC |
	|         | /tmp/nospam-744800 stop                                                  |                   |         |         |                     |                     |
	| stop    | nospam-744800 --log_dir                                                  | nospam-744800     | jenkins | v1.33.1 | 19 Aug 24 17:56 UTC | 19 Aug 24 17:56 UTC |
	|         | /tmp/nospam-744800 stop                                                  |                   |         |         |                     |                     |
	| delete  | -p nospam-744800                                                         | nospam-744800     | jenkins | v1.33.1 | 19 Aug 24 17:56 UTC | 19 Aug 24 17:56 UTC |
	| start   | -p functional-499773                                                     | functional-499773 | jenkins | v1.33.1 | 19 Aug 24 17:56 UTC | 19 Aug 24 17:58 UTC |
	|         | --memory=4000                                                            |                   |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                   |         |         |                     |                     |
	|         | --wait=all --driver=kvm2                                                 |                   |         |         |                     |                     |
	|         | --container-runtime=crio                                                 |                   |         |         |                     |                     |
	| start   | -p functional-499773                                                     | functional-499773 | jenkins | v1.33.1 | 19 Aug 24 17:58 UTC | 19 Aug 24 17:58 UTC |
	|         | --alsologtostderr -v=8                                                   |                   |         |         |                     |                     |
	| cache   | functional-499773 cache add                                              | functional-499773 | jenkins | v1.33.1 | 19 Aug 24 17:58 UTC | 19 Aug 24 17:58 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | functional-499773 cache add                                              | functional-499773 | jenkins | v1.33.1 | 19 Aug 24 17:58 UTC | 19 Aug 24 17:58 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | functional-499773 cache add                                              | functional-499773 | jenkins | v1.33.1 | 19 Aug 24 17:58 UTC | 19 Aug 24 17:58 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-499773 cache add                                              | functional-499773 | jenkins | v1.33.1 | 19 Aug 24 17:58 UTC | 19 Aug 24 17:58 UTC |
	|         | minikube-local-cache-test:functional-499773                              |                   |         |         |                     |                     |
	| cache   | functional-499773 cache delete                                           | functional-499773 | jenkins | v1.33.1 | 19 Aug 24 17:58 UTC | 19 Aug 24 17:58 UTC |
	|         | minikube-local-cache-test:functional-499773                              |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.33.1 | 19 Aug 24 17:58 UTC | 19 Aug 24 17:58 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | list                                                                     | minikube          | jenkins | v1.33.1 | 19 Aug 24 17:58 UTC | 19 Aug 24 17:58 UTC |
	| ssh     | functional-499773 ssh sudo                                               | functional-499773 | jenkins | v1.33.1 | 19 Aug 24 17:58 UTC | 19 Aug 24 17:58 UTC |
	|         | crictl images                                                            |                   |         |         |                     |                     |
	| ssh     | functional-499773                                                        | functional-499773 | jenkins | v1.33.1 | 19 Aug 24 17:58 UTC | 19 Aug 24 17:58 UTC |
	|         | ssh sudo crictl rmi                                                      |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| ssh     | functional-499773 ssh                                                    | functional-499773 | jenkins | v1.33.1 | 19 Aug 24 17:58 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-499773 cache reload                                           | functional-499773 | jenkins | v1.33.1 | 19 Aug 24 17:58 UTC | 19 Aug 24 17:58 UTC |
	| ssh     | functional-499773 ssh                                                    | functional-499773 | jenkins | v1.33.1 | 19 Aug 24 17:58 UTC | 19 Aug 24 17:58 UTC |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.33.1 | 19 Aug 24 17:58 UTC | 19 Aug 24 17:58 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.33.1 | 19 Aug 24 17:58 UTC | 19 Aug 24 17:58 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| kubectl | functional-499773 kubectl --                                             | functional-499773 | jenkins | v1.33.1 | 19 Aug 24 17:58 UTC | 19 Aug 24 17:58 UTC |
	|         | --context functional-499773                                              |                   |         |         |                     |                     |
	|         | get pods                                                                 |                   |         |         |                     |                     |
	| start   | -p functional-499773                                                     | functional-499773 | jenkins | v1.33.1 | 19 Aug 24 17:58 UTC | 19 Aug 24 18:00 UTC |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |         |         |                     |                     |
	|         | --wait=all                                                               |                   |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 17:58:48
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 17:58:48.939110  387292 out.go:345] Setting OutFile to fd 1 ...
	I0819 17:58:48.939213  387292 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:58:48.939217  387292 out.go:358] Setting ErrFile to fd 2...
	I0819 17:58:48.939220  387292 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:58:48.939402  387292 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-372744/.minikube/bin
	I0819 17:58:48.939958  387292 out.go:352] Setting JSON to false
	I0819 17:58:48.940965  387292 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":6072,"bootTime":1724084257,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 17:58:48.941017  387292 start.go:139] virtualization: kvm guest
	I0819 17:58:48.942977  387292 out.go:177] * [functional-499773] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 17:58:48.944069  387292 notify.go:220] Checking for updates...
	I0819 17:58:48.944091  387292 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 17:58:48.945197  387292 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 17:58:48.946321  387292 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 17:58:48.947563  387292 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 17:58:48.948861  387292 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 17:58:48.950016  387292 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 17:58:48.951496  387292 config.go:182] Loaded profile config "functional-499773": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:58:48.951570  387292 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 17:58:48.952037  387292 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:58:48.952074  387292 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:58:48.967393  387292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46585
	I0819 17:58:48.967836  387292 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:58:48.968422  387292 main.go:141] libmachine: Using API Version  1
	I0819 17:58:48.968444  387292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:58:48.968769  387292 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:58:48.968921  387292 main.go:141] libmachine: (functional-499773) Calling .DriverName
	I0819 17:58:49.001645  387292 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 17:58:49.002809  387292 start.go:297] selected driver: kvm2
	I0819 17:58:49.002824  387292 start.go:901] validating driver "kvm2" against &{Name:functional-499773 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:functional-499773 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.36 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 17:58:49.002929  387292 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 17:58:49.003290  387292 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 17:58:49.003357  387292 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19468-372744/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 17:58:49.018553  387292 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 17:58:49.019570  387292 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 17:58:49.019650  387292 cni.go:84] Creating CNI manager for ""
	I0819 17:58:49.019659  387292 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 17:58:49.019753  387292 start.go:340] cluster config:
	{Name:functional-499773 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-499773 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.36 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 17:58:49.019874  387292 iso.go:125] acquiring lock: {Name:mk4c0ac1c3202b1a296739df622960e7a0bd8566 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 17:58:49.022702  387292 out.go:177] * Starting "functional-499773" primary control-plane node in "functional-499773" cluster
	I0819 17:58:49.023994  387292 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 17:58:49.024026  387292 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 17:58:49.024033  387292 cache.go:56] Caching tarball of preloaded images
	I0819 17:58:49.024122  387292 preload.go:172] Found /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 17:58:49.024128  387292 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 17:58:49.024220  387292 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/functional-499773/config.json ...
	I0819 17:58:49.024393  387292 start.go:360] acquireMachinesLock for functional-499773: {Name:mk24ba67a747357e9ce40f1e460d2bb0bc59cc75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 17:58:49.024431  387292 start.go:364] duration metric: took 24.119µs to acquireMachinesLock for "functional-499773"
	I0819 17:58:49.024442  387292 start.go:96] Skipping create...Using existing machine configuration
	I0819 17:58:49.024446  387292 fix.go:54] fixHost starting: 
	I0819 17:58:49.024708  387292 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:58:49.024737  387292 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:58:49.039835  387292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38105
	I0819 17:58:49.040246  387292 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:58:49.040717  387292 main.go:141] libmachine: Using API Version  1
	I0819 17:58:49.040733  387292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:58:49.041148  387292 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:58:49.041321  387292 main.go:141] libmachine: (functional-499773) Calling .DriverName
	I0819 17:58:49.041476  387292 main.go:141] libmachine: (functional-499773) Calling .GetState
	I0819 17:58:49.042884  387292 fix.go:112] recreateIfNeeded on functional-499773: state=Running err=<nil>
	W0819 17:58:49.042900  387292 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 17:58:49.045150  387292 out.go:177] * Updating the running kvm2 "functional-499773" VM ...
	I0819 17:58:49.046466  387292 machine.go:93] provisionDockerMachine start ...
	I0819 17:58:49.046483  387292 main.go:141] libmachine: (functional-499773) Calling .DriverName
	I0819 17:58:49.046741  387292 main.go:141] libmachine: (functional-499773) Calling .GetSSHHostname
	I0819 17:58:49.049394  387292 main.go:141] libmachine: (functional-499773) DBG | domain functional-499773 has defined MAC address 52:54:00:16:1d:03 in network mk-functional-499773
	I0819 17:58:49.049818  387292 main.go:141] libmachine: (functional-499773) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:1d:03", ip: ""} in network mk-functional-499773: {Iface:virbr1 ExpiryTime:2024-08-19 18:56:56 +0000 UTC Type:0 Mac:52:54:00:16:1d:03 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:functional-499773 Clientid:01:52:54:00:16:1d:03}
	I0819 17:58:49.049841  387292 main.go:141] libmachine: (functional-499773) DBG | domain functional-499773 has defined IP address 192.168.39.36 and MAC address 52:54:00:16:1d:03 in network mk-functional-499773
	I0819 17:58:49.050021  387292 main.go:141] libmachine: (functional-499773) Calling .GetSSHPort
	I0819 17:58:49.050217  387292 main.go:141] libmachine: (functional-499773) Calling .GetSSHKeyPath
	I0819 17:58:49.050363  387292 main.go:141] libmachine: (functional-499773) Calling .GetSSHKeyPath
	I0819 17:58:49.050484  387292 main.go:141] libmachine: (functional-499773) Calling .GetSSHUsername
	I0819 17:58:49.050634  387292 main.go:141] libmachine: Using SSH client type: native
	I0819 17:58:49.050827  387292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0819 17:58:49.050831  387292 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 17:58:49.152422  387292 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-499773
	
	I0819 17:58:49.152440  387292 main.go:141] libmachine: (functional-499773) Calling .GetMachineName
	I0819 17:58:49.152708  387292 buildroot.go:166] provisioning hostname "functional-499773"
	I0819 17:58:49.152730  387292 main.go:141] libmachine: (functional-499773) Calling .GetMachineName
	I0819 17:58:49.152935  387292 main.go:141] libmachine: (functional-499773) Calling .GetSSHHostname
	I0819 17:58:49.155937  387292 main.go:141] libmachine: (functional-499773) DBG | domain functional-499773 has defined MAC address 52:54:00:16:1d:03 in network mk-functional-499773
	I0819 17:58:49.156346  387292 main.go:141] libmachine: (functional-499773) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:1d:03", ip: ""} in network mk-functional-499773: {Iface:virbr1 ExpiryTime:2024-08-19 18:56:56 +0000 UTC Type:0 Mac:52:54:00:16:1d:03 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:functional-499773 Clientid:01:52:54:00:16:1d:03}
	I0819 17:58:49.156372  387292 main.go:141] libmachine: (functional-499773) DBG | domain functional-499773 has defined IP address 192.168.39.36 and MAC address 52:54:00:16:1d:03 in network mk-functional-499773
	I0819 17:58:49.156509  387292 main.go:141] libmachine: (functional-499773) Calling .GetSSHPort
	I0819 17:58:49.156744  387292 main.go:141] libmachine: (functional-499773) Calling .GetSSHKeyPath
	I0819 17:58:49.156897  387292 main.go:141] libmachine: (functional-499773) Calling .GetSSHKeyPath
	I0819 17:58:49.157015  387292 main.go:141] libmachine: (functional-499773) Calling .GetSSHUsername
	I0819 17:58:49.157200  387292 main.go:141] libmachine: Using SSH client type: native
	I0819 17:58:49.157369  387292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0819 17:58:49.157375  387292 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-499773 && echo "functional-499773" | sudo tee /etc/hostname
	I0819 17:58:49.274366  387292 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-499773
	
	I0819 17:58:49.274384  387292 main.go:141] libmachine: (functional-499773) Calling .GetSSHHostname
	I0819 17:58:49.277425  387292 main.go:141] libmachine: (functional-499773) DBG | domain functional-499773 has defined MAC address 52:54:00:16:1d:03 in network mk-functional-499773
	I0819 17:58:49.277683  387292 main.go:141] libmachine: (functional-499773) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:1d:03", ip: ""} in network mk-functional-499773: {Iface:virbr1 ExpiryTime:2024-08-19 18:56:56 +0000 UTC Type:0 Mac:52:54:00:16:1d:03 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:functional-499773 Clientid:01:52:54:00:16:1d:03}
	I0819 17:58:49.277707  387292 main.go:141] libmachine: (functional-499773) DBG | domain functional-499773 has defined IP address 192.168.39.36 and MAC address 52:54:00:16:1d:03 in network mk-functional-499773
	I0819 17:58:49.277900  387292 main.go:141] libmachine: (functional-499773) Calling .GetSSHPort
	I0819 17:58:49.278075  387292 main.go:141] libmachine: (functional-499773) Calling .GetSSHKeyPath
	I0819 17:58:49.278194  387292 main.go:141] libmachine: (functional-499773) Calling .GetSSHKeyPath
	I0819 17:58:49.278279  387292 main.go:141] libmachine: (functional-499773) Calling .GetSSHUsername
	I0819 17:58:49.278387  387292 main.go:141] libmachine: Using SSH client type: native
	I0819 17:58:49.278548  387292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0819 17:58:49.278561  387292 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-499773' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-499773/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-499773' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 17:58:49.380631  387292 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 17:58:49.380652  387292 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19468-372744/.minikube CaCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19468-372744/.minikube}
	I0819 17:58:49.380701  387292 buildroot.go:174] setting up certificates
	I0819 17:58:49.380711  387292 provision.go:84] configureAuth start
	I0819 17:58:49.380720  387292 main.go:141] libmachine: (functional-499773) Calling .GetMachineName
	I0819 17:58:49.381075  387292 main.go:141] libmachine: (functional-499773) Calling .GetIP
	I0819 17:58:49.383767  387292 main.go:141] libmachine: (functional-499773) DBG | domain functional-499773 has defined MAC address 52:54:00:16:1d:03 in network mk-functional-499773
	I0819 17:58:49.384123  387292 main.go:141] libmachine: (functional-499773) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:1d:03", ip: ""} in network mk-functional-499773: {Iface:virbr1 ExpiryTime:2024-08-19 18:56:56 +0000 UTC Type:0 Mac:52:54:00:16:1d:03 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:functional-499773 Clientid:01:52:54:00:16:1d:03}
	I0819 17:58:49.384148  387292 main.go:141] libmachine: (functional-499773) DBG | domain functional-499773 has defined IP address 192.168.39.36 and MAC address 52:54:00:16:1d:03 in network mk-functional-499773
	I0819 17:58:49.384267  387292 main.go:141] libmachine: (functional-499773) Calling .GetSSHHostname
	I0819 17:58:49.386490  387292 main.go:141] libmachine: (functional-499773) DBG | domain functional-499773 has defined MAC address 52:54:00:16:1d:03 in network mk-functional-499773
	I0819 17:58:49.386816  387292 main.go:141] libmachine: (functional-499773) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:1d:03", ip: ""} in network mk-functional-499773: {Iface:virbr1 ExpiryTime:2024-08-19 18:56:56 +0000 UTC Type:0 Mac:52:54:00:16:1d:03 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:functional-499773 Clientid:01:52:54:00:16:1d:03}
	I0819 17:58:49.386835  387292 main.go:141] libmachine: (functional-499773) DBG | domain functional-499773 has defined IP address 192.168.39.36 and MAC address 52:54:00:16:1d:03 in network mk-functional-499773
	I0819 17:58:49.386919  387292 provision.go:143] copyHostCerts
	I0819 17:58:49.386990  387292 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem, removing ...
	I0819 17:58:49.387010  387292 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 17:58:49.387081  387292 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem (1675 bytes)
	I0819 17:58:49.387171  387292 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem, removing ...
	I0819 17:58:49.387174  387292 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 17:58:49.387198  387292 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem (1082 bytes)
	I0819 17:58:49.387257  387292 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem, removing ...
	I0819 17:58:49.387260  387292 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 17:58:49.387279  387292 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem (1123 bytes)
	I0819 17:58:49.387334  387292 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem org=jenkins.functional-499773 san=[127.0.0.1 192.168.39.36 functional-499773 localhost minikube]
	I0819 17:58:49.508714  387292 provision.go:177] copyRemoteCerts
	I0819 17:58:49.508763  387292 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 17:58:49.508787  387292 main.go:141] libmachine: (functional-499773) Calling .GetSSHHostname
	I0819 17:58:49.511265  387292 main.go:141] libmachine: (functional-499773) DBG | domain functional-499773 has defined MAC address 52:54:00:16:1d:03 in network mk-functional-499773
	I0819 17:58:49.511505  387292 main.go:141] libmachine: (functional-499773) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:1d:03", ip: ""} in network mk-functional-499773: {Iface:virbr1 ExpiryTime:2024-08-19 18:56:56 +0000 UTC Type:0 Mac:52:54:00:16:1d:03 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:functional-499773 Clientid:01:52:54:00:16:1d:03}
	I0819 17:58:49.511521  387292 main.go:141] libmachine: (functional-499773) DBG | domain functional-499773 has defined IP address 192.168.39.36 and MAC address 52:54:00:16:1d:03 in network mk-functional-499773
	I0819 17:58:49.511753  387292 main.go:141] libmachine: (functional-499773) Calling .GetSSHPort
	I0819 17:58:49.511947  387292 main.go:141] libmachine: (functional-499773) Calling .GetSSHKeyPath
	I0819 17:58:49.512098  387292 main.go:141] libmachine: (functional-499773) Calling .GetSSHUsername
	I0819 17:58:49.512222  387292 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/functional-499773/id_rsa Username:docker}
	I0819 17:58:49.596126  387292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 17:58:49.623617  387292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0819 17:58:49.650512  387292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 17:58:49.677427  387292 provision.go:87] duration metric: took 296.698772ms to configureAuth
	I0819 17:58:49.677452  387292 buildroot.go:189] setting minikube options for container-runtime
	I0819 17:58:49.677658  387292 config.go:182] Loaded profile config "functional-499773": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:58:49.677739  387292 main.go:141] libmachine: (functional-499773) Calling .GetSSHHostname
	I0819 17:58:49.680424  387292 main.go:141] libmachine: (functional-499773) DBG | domain functional-499773 has defined MAC address 52:54:00:16:1d:03 in network mk-functional-499773
	I0819 17:58:49.680763  387292 main.go:141] libmachine: (functional-499773) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:1d:03", ip: ""} in network mk-functional-499773: {Iface:virbr1 ExpiryTime:2024-08-19 18:56:56 +0000 UTC Type:0 Mac:52:54:00:16:1d:03 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:functional-499773 Clientid:01:52:54:00:16:1d:03}
	I0819 17:58:49.680784  387292 main.go:141] libmachine: (functional-499773) DBG | domain functional-499773 has defined IP address 192.168.39.36 and MAC address 52:54:00:16:1d:03 in network mk-functional-499773
	I0819 17:58:49.680978  387292 main.go:141] libmachine: (functional-499773) Calling .GetSSHPort
	I0819 17:58:49.681151  387292 main.go:141] libmachine: (functional-499773) Calling .GetSSHKeyPath
	I0819 17:58:49.681297  387292 main.go:141] libmachine: (functional-499773) Calling .GetSSHKeyPath
	I0819 17:58:49.681418  387292 main.go:141] libmachine: (functional-499773) Calling .GetSSHUsername
	I0819 17:58:49.681539  387292 main.go:141] libmachine: Using SSH client type: native
	I0819 17:58:49.681713  387292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0819 17:58:49.681722  387292 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 17:58:55.299431  387292 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 17:58:55.299447  387292 machine.go:96] duration metric: took 6.252973509s to provisionDockerMachine
	I0819 17:58:55.299458  387292 start.go:293] postStartSetup for "functional-499773" (driver="kvm2")
	I0819 17:58:55.299467  387292 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 17:58:55.299499  387292 main.go:141] libmachine: (functional-499773) Calling .DriverName
	I0819 17:58:55.299933  387292 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 17:58:55.299963  387292 main.go:141] libmachine: (functional-499773) Calling .GetSSHHostname
	I0819 17:58:55.302863  387292 main.go:141] libmachine: (functional-499773) DBG | domain functional-499773 has defined MAC address 52:54:00:16:1d:03 in network mk-functional-499773
	I0819 17:58:55.303225  387292 main.go:141] libmachine: (functional-499773) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:1d:03", ip: ""} in network mk-functional-499773: {Iface:virbr1 ExpiryTime:2024-08-19 18:56:56 +0000 UTC Type:0 Mac:52:54:00:16:1d:03 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:functional-499773 Clientid:01:52:54:00:16:1d:03}
	I0819 17:58:55.303248  387292 main.go:141] libmachine: (functional-499773) DBG | domain functional-499773 has defined IP address 192.168.39.36 and MAC address 52:54:00:16:1d:03 in network mk-functional-499773
	I0819 17:58:55.303379  387292 main.go:141] libmachine: (functional-499773) Calling .GetSSHPort
	I0819 17:58:55.303562  387292 main.go:141] libmachine: (functional-499773) Calling .GetSSHKeyPath
	I0819 17:58:55.303743  387292 main.go:141] libmachine: (functional-499773) Calling .GetSSHUsername
	I0819 17:58:55.303936  387292 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/functional-499773/id_rsa Username:docker}
	I0819 17:58:55.381910  387292 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 17:58:55.386274  387292 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 17:58:55.386292  387292 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/addons for local assets ...
	I0819 17:58:55.386351  387292 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/files for local assets ...
	I0819 17:58:55.386419  387292 filesync.go:149] local asset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> 3800092.pem in /etc/ssl/certs
	I0819 17:58:55.386485  387292 filesync.go:149] local asset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/test/nested/copy/380009/hosts -> hosts in /etc/test/nested/copy/380009
	I0819 17:58:55.386521  387292 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/380009
	I0819 17:58:55.395711  387292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 17:58:55.419910  387292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/test/nested/copy/380009/hosts --> /etc/test/nested/copy/380009/hosts (40 bytes)
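These two file-sync copies place the extra CA bundle and the nested hosts test file on the guest. If needed they can be checked in place with something like (a sketch; the paths are the scp destinations shown above):
	ls -la /etc/ssl/certs/3800092.pem
	cat /etc/test/nested/copy/380009/hosts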
	I0819 17:58:55.445022  387292 start.go:296] duration metric: took 145.549131ms for postStartSetup
	I0819 17:58:55.445059  387292 fix.go:56] duration metric: took 6.420612404s for fixHost
	I0819 17:58:55.445081  387292 main.go:141] libmachine: (functional-499773) Calling .GetSSHHostname
	I0819 17:58:55.448076  387292 main.go:141] libmachine: (functional-499773) DBG | domain functional-499773 has defined MAC address 52:54:00:16:1d:03 in network mk-functional-499773
	I0819 17:58:55.448487  387292 main.go:141] libmachine: (functional-499773) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:1d:03", ip: ""} in network mk-functional-499773: {Iface:virbr1 ExpiryTime:2024-08-19 18:56:56 +0000 UTC Type:0 Mac:52:54:00:16:1d:03 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:functional-499773 Clientid:01:52:54:00:16:1d:03}
	I0819 17:58:55.448516  387292 main.go:141] libmachine: (functional-499773) DBG | domain functional-499773 has defined IP address 192.168.39.36 and MAC address 52:54:00:16:1d:03 in network mk-functional-499773
	I0819 17:58:55.448746  387292 main.go:141] libmachine: (functional-499773) Calling .GetSSHPort
	I0819 17:58:55.448957  387292 main.go:141] libmachine: (functional-499773) Calling .GetSSHKeyPath
	I0819 17:58:55.449150  387292 main.go:141] libmachine: (functional-499773) Calling .GetSSHKeyPath
	I0819 17:58:55.449257  387292 main.go:141] libmachine: (functional-499773) Calling .GetSSHUsername
	I0819 17:58:55.449425  387292 main.go:141] libmachine: Using SSH client type: native
	I0819 17:58:55.449635  387292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0819 17:58:55.449641  387292 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 17:58:55.552292  387292 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724090335.536728443
	
	I0819 17:58:55.552305  387292 fix.go:216] guest clock: 1724090335.536728443
	I0819 17:58:55.552313  387292 fix.go:229] Guest: 2024-08-19 17:58:55.536728443 +0000 UTC Remote: 2024-08-19 17:58:55.445061521 +0000 UTC m=+6.541333434 (delta=91.666922ms)
	I0819 17:58:55.552339  387292 fix.go:200] guest clock delta is within tolerance: 91.666922ms
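For reference, the delta above is just guest time minus host time at the moment of the check: 1724090335.536728443 s (guest, from date +%s.%N) minus 1724090335.445061521 s (host) is approximately 0.091666922 s, i.e. the 91.666922ms reported, which the log then judges to be within tolerance.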
	I0819 17:58:55.552346  387292 start.go:83] releasing machines lock for "functional-499773", held for 6.527907965s
	I0819 17:58:55.552377  387292 main.go:141] libmachine: (functional-499773) Calling .DriverName
	I0819 17:58:55.552684  387292 main.go:141] libmachine: (functional-499773) Calling .GetIP
	I0819 17:58:55.555358  387292 main.go:141] libmachine: (functional-499773) DBG | domain functional-499773 has defined MAC address 52:54:00:16:1d:03 in network mk-functional-499773
	I0819 17:58:55.555766  387292 main.go:141] libmachine: (functional-499773) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:1d:03", ip: ""} in network mk-functional-499773: {Iface:virbr1 ExpiryTime:2024-08-19 18:56:56 +0000 UTC Type:0 Mac:52:54:00:16:1d:03 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:functional-499773 Clientid:01:52:54:00:16:1d:03}
	I0819 17:58:55.555791  387292 main.go:141] libmachine: (functional-499773) DBG | domain functional-499773 has defined IP address 192.168.39.36 and MAC address 52:54:00:16:1d:03 in network mk-functional-499773
	I0819 17:58:55.555932  387292 main.go:141] libmachine: (functional-499773) Calling .DriverName
	I0819 17:58:55.556424  387292 main.go:141] libmachine: (functional-499773) Calling .DriverName
	I0819 17:58:55.556599  387292 main.go:141] libmachine: (functional-499773) Calling .DriverName
	I0819 17:58:55.556692  387292 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 17:58:55.556725  387292 main.go:141] libmachine: (functional-499773) Calling .GetSSHHostname
	I0819 17:58:55.556830  387292 ssh_runner.go:195] Run: cat /version.json
	I0819 17:58:55.556846  387292 main.go:141] libmachine: (functional-499773) Calling .GetSSHHostname
	I0819 17:58:55.559438  387292 main.go:141] libmachine: (functional-499773) DBG | domain functional-499773 has defined MAC address 52:54:00:16:1d:03 in network mk-functional-499773
	I0819 17:58:55.559732  387292 main.go:141] libmachine: (functional-499773) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:1d:03", ip: ""} in network mk-functional-499773: {Iface:virbr1 ExpiryTime:2024-08-19 18:56:56 +0000 UTC Type:0 Mac:52:54:00:16:1d:03 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:functional-499773 Clientid:01:52:54:00:16:1d:03}
	I0819 17:58:55.559753  387292 main.go:141] libmachine: (functional-499773) DBG | domain functional-499773 has defined IP address 192.168.39.36 and MAC address 52:54:00:16:1d:03 in network mk-functional-499773
	I0819 17:58:55.559770  387292 main.go:141] libmachine: (functional-499773) DBG | domain functional-499773 has defined MAC address 52:54:00:16:1d:03 in network mk-functional-499773
	I0819 17:58:55.559910  387292 main.go:141] libmachine: (functional-499773) Calling .GetSSHPort
	I0819 17:58:55.560085  387292 main.go:141] libmachine: (functional-499773) Calling .GetSSHKeyPath
	I0819 17:58:55.560118  387292 main.go:141] libmachine: (functional-499773) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:1d:03", ip: ""} in network mk-functional-499773: {Iface:virbr1 ExpiryTime:2024-08-19 18:56:56 +0000 UTC Type:0 Mac:52:54:00:16:1d:03 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:functional-499773 Clientid:01:52:54:00:16:1d:03}
	I0819 17:58:55.560137  387292 main.go:141] libmachine: (functional-499773) DBG | domain functional-499773 has defined IP address 192.168.39.36 and MAC address 52:54:00:16:1d:03 in network mk-functional-499773
	I0819 17:58:55.560236  387292 main.go:141] libmachine: (functional-499773) Calling .GetSSHUsername
	I0819 17:58:55.560295  387292 main.go:141] libmachine: (functional-499773) Calling .GetSSHPort
	I0819 17:58:55.560346  387292 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/functional-499773/id_rsa Username:docker}
	I0819 17:58:55.560476  387292 main.go:141] libmachine: (functional-499773) Calling .GetSSHKeyPath
	I0819 17:58:55.560603  387292 main.go:141] libmachine: (functional-499773) Calling .GetSSHUsername
	I0819 17:58:55.560715  387292 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/functional-499773/id_rsa Username:docker}
	I0819 17:58:55.655576  387292 ssh_runner.go:195] Run: systemctl --version
	I0819 17:58:55.661760  387292 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 17:58:55.807544  387292 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 17:58:55.813891  387292 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 17:58:55.813949  387292 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 17:58:55.822712  387292 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0819 17:58:55.822725  387292 start.go:495] detecting cgroup driver to use...
	I0819 17:58:55.822781  387292 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 17:58:55.840550  387292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 17:58:55.855074  387292 docker.go:217] disabling cri-docker service (if available) ...
	I0819 17:58:55.855134  387292 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 17:58:55.870127  387292 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 17:58:55.883573  387292 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 17:58:56.016956  387292 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 17:58:56.148217  387292 docker.go:233] disabling docker service ...
	I0819 17:58:56.148302  387292 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 17:58:56.165884  387292 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 17:58:56.180213  387292 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 17:58:56.312578  387292 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 17:58:56.444710  387292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 17:58:56.458846  387292 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 17:58:56.478588  387292 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 17:58:56.478650  387292 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:58:56.488978  387292 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 17:58:56.489033  387292 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:58:56.499326  387292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:58:56.510408  387292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:58:56.521136  387292 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 17:58:56.531835  387292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:58:56.542192  387292 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:58:56.553648  387292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:58:56.565008  387292 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 17:58:56.575615  387292 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 17:58:56.585042  387292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 17:58:56.718035  387292 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 17:59:00.643576  387292 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.925513451s)
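The sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with the pause image, cgroup driver, conmon cgroup and unprivileged-port sysctl that the rest of the run assumes. A spot check after the restart might look like this (a sketch; the file path and expected keys are taken from the commands logged above, the check itself is an assumption):
	sudo grep -E '^ *(pause_image|cgroup_manager|conmon_cgroup|default_sysctls)' /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl is-active crio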
	I0819 17:59:00.643602  387292 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 17:59:00.643651  387292 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 17:59:00.648577  387292 start.go:563] Will wait 60s for crictl version
	I0819 17:59:00.648653  387292 ssh_runner.go:195] Run: which crictl
	I0819 17:59:00.652506  387292 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 17:59:00.686093  387292 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 17:59:00.686195  387292 ssh_runner.go:195] Run: crio --version
	I0819 17:59:00.714600  387292 ssh_runner.go:195] Run: crio --version
	I0819 17:59:00.745532  387292 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 17:59:00.746713  387292 main.go:141] libmachine: (functional-499773) Calling .GetIP
	I0819 17:59:00.749692  387292 main.go:141] libmachine: (functional-499773) DBG | domain functional-499773 has defined MAC address 52:54:00:16:1d:03 in network mk-functional-499773
	I0819 17:59:00.750028  387292 main.go:141] libmachine: (functional-499773) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:1d:03", ip: ""} in network mk-functional-499773: {Iface:virbr1 ExpiryTime:2024-08-19 18:56:56 +0000 UTC Type:0 Mac:52:54:00:16:1d:03 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:functional-499773 Clientid:01:52:54:00:16:1d:03}
	I0819 17:59:00.750049  387292 main.go:141] libmachine: (functional-499773) DBG | domain functional-499773 has defined IP address 192.168.39.36 and MAC address 52:54:00:16:1d:03 in network mk-functional-499773
	I0819 17:59:00.750260  387292 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 17:59:00.756188  387292 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0819 17:59:00.757367  387292 kubeadm.go:883] updating cluster {Name:functional-499773 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:functional-499773 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.36 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountS
tring:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 17:59:00.757478  387292 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 17:59:00.757526  387292 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 17:59:00.803319  387292 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 17:59:00.803332  387292 crio.go:433] Images already preloaded, skipping extraction
	I0819 17:59:00.803381  387292 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 17:59:00.838645  387292 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 17:59:00.838660  387292 cache_images.go:84] Images are preloaded, skipping loading
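With the preload already present, the image check above only lists what CRI-O has locally. The same list can be pulled out by hand roughly like this (a sketch; crictl and its --output json flag appear in the log, the jq filter is an assumption and requires jq on the guest):
	sudo crictl images --output json | jq -r '.images[].repoTags[]'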
	I0819 17:59:00.838668  387292 kubeadm.go:934] updating node { 192.168.39.36 8441 v1.31.0 crio true true} ...
	I0819 17:59:00.838806  387292 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-499773 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.36
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:functional-499773 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
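The unit text above uses the standard systemd drop-in pattern: the empty ExecStart= line clears the ExecStart inherited from the base kubelet unit so that the following ExecStart= line fully replaces it. The merged result can be inspected on the guest with something like (a sketch, not taken from the log):
	sudo systemctl cat kubelet
	sudo systemctl daemon-reload && sudo systemctl restart kubelet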
	I0819 17:59:00.838876  387292 ssh_runner.go:195] Run: crio config
	I0819 17:59:00.887132  387292 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0819 17:59:00.887192  387292 cni.go:84] Creating CNI manager for ""
	I0819 17:59:00.887200  387292 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 17:59:00.887207  387292 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 17:59:00.887231  387292 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.36 APIServerPort:8441 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-499773 NodeName:functional-499773 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.36"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.36 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts
:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 17:59:00.887383  387292 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.36
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-499773"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.36
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.36"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 17:59:00.887439  387292 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 17:59:00.898191  387292 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 17:59:00.898268  387292 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 17:59:00.909009  387292 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0819 17:59:00.926561  387292 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 17:59:00.943490  387292 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2008 bytes)
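This 2008-byte kubeadm.yaml.new is the rendered form of the kubeadm config printed a few lines above. Outside of minikube, a config like this can be sanity-checked without touching the cluster via a dry run (a sketch; the binary path follows the /var/lib/minikube/binaries layout visible in the log and --config/--dry-run are standard kubeadm flags, but this exact invocation is an assumption):
	sudo /var/lib/minikube/binaries/v1.31.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run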
	I0819 17:59:00.959805  387292 ssh_runner.go:195] Run: grep 192.168.39.36	control-plane.minikube.internal$ /etc/hosts
	I0819 17:59:00.963658  387292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 17:59:01.093875  387292 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 17:59:01.110323  387292 certs.go:68] Setting up /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/functional-499773 for IP: 192.168.39.36
	I0819 17:59:01.110339  387292 certs.go:194] generating shared ca certs ...
	I0819 17:59:01.110364  387292 certs.go:226] acquiring lock for ca certs: {Name:mk639e03f593e0bccac045f6e9f5ba3b96cc81e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:59:01.110555  387292 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key
	I0819 17:59:01.110590  387292 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key
	I0819 17:59:01.110595  387292 certs.go:256] generating profile certs ...
	I0819 17:59:01.110668  387292 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/functional-499773/client.key
	I0819 17:59:01.110706  387292 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/functional-499773/apiserver.key.efb20804
	I0819 17:59:01.110751  387292 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/functional-499773/proxy-client.key
	I0819 17:59:01.110888  387292 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem (1338 bytes)
	W0819 17:59:01.110916  387292 certs.go:480] ignoring /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009_empty.pem, impossibly tiny 0 bytes
	I0819 17:59:01.110922  387292 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 17:59:01.110945  387292 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem (1082 bytes)
	I0819 17:59:01.110968  387292 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem (1123 bytes)
	I0819 17:59:01.110988  387292 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem (1675 bytes)
	I0819 17:59:01.111021  387292 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 17:59:01.111718  387292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 17:59:01.136373  387292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 17:59:01.160011  387292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 17:59:01.185108  387292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 17:59:01.209955  387292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/functional-499773/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 17:59:01.233806  387292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/functional-499773/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 17:59:01.257671  387292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/functional-499773/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 17:59:01.282381  387292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/functional-499773/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 17:59:01.306715  387292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 17:59:01.330795  387292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem --> /usr/share/ca-certificates/380009.pem (1338 bytes)
	I0819 17:59:01.354945  387292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /usr/share/ca-certificates/3800092.pem (1708 bytes)
	I0819 17:59:01.379582  387292 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 17:59:01.396766  387292 ssh_runner.go:195] Run: openssl version
	I0819 17:59:01.402414  387292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 17:59:01.413567  387292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:59:01.418105  387292 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:59:01.418151  387292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:59:01.423791  387292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 17:59:01.433947  387292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/380009.pem && ln -fs /usr/share/ca-certificates/380009.pem /etc/ssl/certs/380009.pem"
	I0819 17:59:01.445668  387292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/380009.pem
	I0819 17:59:01.450772  387292 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:56 /usr/share/ca-certificates/380009.pem
	I0819 17:59:01.450825  387292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/380009.pem
	I0819 17:59:01.456899  387292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/380009.pem /etc/ssl/certs/51391683.0"
	I0819 17:59:01.467291  387292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3800092.pem && ln -fs /usr/share/ca-certificates/3800092.pem /etc/ssl/certs/3800092.pem"
	I0819 17:59:01.478789  387292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3800092.pem
	I0819 17:59:01.483515  387292 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:56 /usr/share/ca-certificates/3800092.pem
	I0819 17:59:01.483564  387292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3800092.pem
	I0819 17:59:01.489536  387292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3800092.pem /etc/ssl/certs/3ec20f2e.0"
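The three link commands above implement OpenSSL's hashed-symlink layout: each CA under /usr/share/ca-certificates gets an /etc/ssl/certs/<subject-hash>.0 symlink (b5213941.0, 51391683.0 and 3ec20f2e.0 here), with the hash coming from the openssl x509 -hash calls. Done by hand for one cert it looks roughly like this (a sketch built from the commands in the log):
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"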
	I0819 17:59:01.499842  387292 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 17:59:01.504904  387292 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 17:59:01.510531  387292 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 17:59:01.516165  387292 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 17:59:01.521808  387292 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 17:59:01.527372  387292 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 17:59:01.532856  387292 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
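Each -checkend 86400 probe above asks whether the certificate will still be valid 24 hours from now; openssl exits 0 if it will be and non-zero if it would expire within that window, which is what the caller keys off. Standalone it behaves like this (a sketch using one of the paths from the log):
	sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 && echo "valid for at least another 24h" || echo "expires within 24h"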
	I0819 17:59:01.538318  387292 kubeadm.go:392] StartCluster: {Name:functional-499773 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:functional-499773 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.36 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountStri
ng:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 17:59:01.538399  387292 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 17:59:01.538437  387292 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 17:59:01.671379  387292 cri.go:89] found id: "025daf068d0c59cc15c36bb2d734fbc328aa1aef71913f043a85a6a45068b922"
	I0819 17:59:01.671392  387292 cri.go:89] found id: "6cff2dbf9ae566ee00ce29c3a39385cfbf2a9a7a7614e08adc35b3db9551d7ca"
	I0819 17:59:01.671394  387292 cri.go:89] found id: "99d88f225a5664f9fcfcdf6aab602575c6e2912c6970d8db80fe2375a226b812"
	I0819 17:59:01.671397  387292 cri.go:89] found id: "fadcbf7cbe5eedabf87425c2c06e99a1468f88a525cd21dd44694b7f9df6b03f"
	I0819 17:59:01.671398  387292 cri.go:89] found id: "946377ac74c7157ca70cc7bbc43cf1465b2ef12db3ce87c1b4653f4dbe60fe79"
	I0819 17:59:01.671401  387292 cri.go:89] found id: "83c579563bb0e50181fe4bc6267c22c7f54afbb5c9e0e68b9daec35ea4bd7792"
	I0819 17:59:01.671402  387292 cri.go:89] found id: "ed58d0c7be59c68ff19a2dc1d6c74242f73dde007ee2d97f9408c713f67f3338"
	I0819 17:59:01.671404  387292 cri.go:89] found id: "6b0431293c1fd7b247d32b1cbe4d5a21344d82c1bd2483472f23429e1c574168"
	I0819 17:59:01.671405  387292 cri.go:89] found id: "2e319ae3cde16bad06ad7d858771097e9161753aced3e11f5fe81b2635170789"
	I0819 17:59:01.671413  387292 cri.go:89] found id: "e565248bceaf84223238f0803826617f04a9f7e0e169c32d8e72851c4764610a"
	I0819 17:59:01.671434  387292 cri.go:89] found id: "4eb3765bbe008d273edbc807be72f94b7e1f6951e2f4eeea1b20897030e90527"
	I0819 17:59:01.671437  387292 cri.go:89] found id: "b6710018c6a7b4fcb4566649e3f74e71c1f4cad6a28e7d9322c2293778f3d438"
	I0819 17:59:01.671440  387292 cri.go:89] found id: "6bf08e8a9fcfb344bb5ea714025c5e18a4cc7b1bbc2ef8bae3a25a8890c66ad9"
	I0819 17:59:01.671443  387292 cri.go:89] found id: ""
	I0819 17:59:01.671496  387292 ssh_runner.go:195] Run: sudo runc list -f json
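The container IDs listed above come from the crictl label query a few lines earlier, and any of them can be fed back to crictl for detail. A hypothetical follow-up (inspect and logs are standard crictl subcommands; the specific ID is simply the first one found above):
	sudo crictl inspect 025daf068d0c59cc15c36bb2d734fbc328aa1aef71913f043a85a6a45068b922
	sudo crictl logs 025daf068d0c59cc15c36bb2d734fbc328aa1aef71913f043a85a6a45068b922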
	
	
	==> CRI-O <==
	Aug 19 18:00:15 functional-499773 crio[5118]: time="2024-08-19 18:00:15.166625647Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090415166604202,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156677,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=61f4b357-84c5-4f8a-986a-6edd58d994b9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:00:15 functional-499773 crio[5118]: time="2024-08-19 18:00:15.167077748Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6d637f02-30b2-4694-b7b1-e2c20559337a name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:00:15 functional-499773 crio[5118]: time="2024-08-19 18:00:15.167150194Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6d637f02-30b2-4694-b7b1-e2c20559337a name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:00:15 functional-499773 crio[5118]: time="2024-08-19 18:00:15.167433149Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f68033720a9433d5115cdf21b78946b310d73cf279b9a21b28e2b83835ad8760,PodSandboxId:7ddf5cdf45048f317af2953f2d5f49607505051287c42e0406fba00e17e34ec5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724090401987962870,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-499773,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b51ccc2eabf3dc8713ae36781dc82611,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ace4d30f302a645ef83900bc072f4d12707a818ef45b6d6afe5c48ebc4e47a49,PodSandboxId:c94c29ee0f461ca1ca74a9083a3e6c255371a2e1b9a4479caeb946b50e5446e0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724090389891874633,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 223744f1-1b75-4b9d-9955-b089f7da38e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e2a8a7b13f329fcfaeb9cf69295c28fc46b25d4614879fc6204b2479459fa4e,PodSandboxId:d2334c90f543e6963ba2a27eda81e1c4675f04073fb997e6c0fcede58ac47826,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724090347151368645,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-92lgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43308261-71aa-457d-b067-31eba47b806a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\"
:\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94c038cb09bf03ea5b2347760c68846208182a51d263b5f0091b42111c062e93,PodSandboxId:3eb6b5132307bbff93791163abfc95833a93f906648fd41e3f8c382b0f935e42,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:4,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724090347128158082,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5rc55,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dee53412-4e59-431
3-bb21-1c638ff80131,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cebbd9bcd1a46bb674b25e582ffcff78db0498b863ea2dd9ab4f1211a7f60c7b,PodSandboxId:141c3e73e78c0c6785dd24457f28ff8b6c05346f4264a02f2ff9c3a88da697d2,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724090342142061497,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-499773,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e3e779cdbb3d9352c2919eb93558f5e,},Annotations:map[string
]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b63e3e21994af74fa7e9a484ca3fcabe5b950afc830348a99f19c0c40f4fd60f,PodSandboxId:a22f58b60780bbde43866fb700dd304b6a3e3131f30a9fdbd5082d3a0725c178,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724090342057759699,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-499773,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85a5d6f780124b09e9d2a8bc6ed8546e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9644334b65041e7a4cc7d15012ec3a463bcad8696d138c2daeecadf6fd2ac80,PodSandboxId:a686d45fac6caaf43b760f96dfa8f6c445d3b3fe0f94c4b1ef8fbe0860e12a6e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724090342062951056,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-499773,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9afb6fbe53b4198e2f38b9df3d29540,},Annotations:map[strin
g]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9172838165b29b3eaf8af4a1644c656ef08d8f11f470542ebda7a13d7da279f,PodSandboxId:3eb6b5132307bbff93791163abfc95833a93f906648fd41e3f8c382b0f935e42,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724090341904742504,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5rc55,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dee53412-4e59-4313-bb21-1c638ff80131,},Annotations:map[string]string{io.kubernetes.container.h
ash: 78ccb3c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:025daf068d0c59cc15c36bb2d734fbc328aa1aef71913f043a85a6a45068b922,PodSandboxId:f7327debbdff4d2e538dd7547a269661ea63004191f6dd452067cf499ef6180a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724090304613873034,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-92lgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43308261-71aa-457d-b067-31eba47b806a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.conta
iner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fadcbf7cbe5eedabf87425c2c06e99a1468f88a525cd21dd44694b7f9df6b03f,PodSandboxId:2fa7ed74db3e9083ad948ca47c7551c6380e1f0ee3a7186f78e008e59ec6f010,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724090300740086594,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sch
eduler-functional-499773,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9afb6fbe53b4198e2f38b9df3d29540,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:946377ac74c7157ca70cc7bbc43cf1465b2ef12db3ce87c1b4653f4dbe60fe79,PodSandboxId:4dc885e3a5d9a5be4bed3f4831e96da4f5867a41fee60fd8d98e7cbed838015b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724090300729044805,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube
-controller-manager-functional-499773,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85a5d6f780124b09e9d2a8bc6ed8546e,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed58d0c7be59c68ff19a2dc1d6c74242f73dde007ee2d97f9408c713f67f3338,PodSandboxId:22714fd2519f34c6b7c4235e99412d954afbd7b9a65c1c6dd350f7779947c1ca,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724090300705293020,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-499773,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e3e779cdbb3d9352c2919eb93558f5e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6d637f02-30b2-4694-b7b1-e2c20559337a name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:00:15 functional-499773 crio[5118]: time="2024-08-19 18:00:15.212907650Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fe4af329-5c7c-4916-a704-b3bde434df92 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:00:15 functional-499773 crio[5118]: time="2024-08-19 18:00:15.212997418Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fe4af329-5c7c-4916-a704-b3bde434df92 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:00:15 functional-499773 crio[5118]: time="2024-08-19 18:00:15.213991799Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9bf196e5-f557-4cb0-a551-6d2f8d72c6b2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:00:15 functional-499773 crio[5118]: time="2024-08-19 18:00:15.214534572Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090415214511589,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156677,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9bf196e5-f557-4cb0-a551-6d2f8d72c6b2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:00:15 functional-499773 crio[5118]: time="2024-08-19 18:00:15.215052937Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=09d554ee-a4db-46da-a09d-97905765fe5c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:00:15 functional-499773 crio[5118]: time="2024-08-19 18:00:15.215126433Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=09d554ee-a4db-46da-a09d-97905765fe5c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:00:15 functional-499773 crio[5118]: time="2024-08-19 18:00:15.215442124Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f68033720a9433d5115cdf21b78946b310d73cf279b9a21b28e2b83835ad8760,PodSandboxId:7ddf5cdf45048f317af2953f2d5f49607505051287c42e0406fba00e17e34ec5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724090401987962870,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-499773,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b51ccc2eabf3dc8713ae36781dc82611,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ace4d30f302a645ef83900bc072f4d12707a818ef45b6d6afe5c48ebc4e47a49,PodSandboxId:c94c29ee0f461ca1ca74a9083a3e6c255371a2e1b9a4479caeb946b50e5446e0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724090389891874633,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 223744f1-1b75-4b9d-9955-b089f7da38e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e2a8a7b13f329fcfaeb9cf69295c28fc46b25d4614879fc6204b2479459fa4e,PodSandboxId:d2334c90f543e6963ba2a27eda81e1c4675f04073fb997e6c0fcede58ac47826,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724090347151368645,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-92lgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43308261-71aa-457d-b067-31eba47b806a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\"
:\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94c038cb09bf03ea5b2347760c68846208182a51d263b5f0091b42111c062e93,PodSandboxId:3eb6b5132307bbff93791163abfc95833a93f906648fd41e3f8c382b0f935e42,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:4,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724090347128158082,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5rc55,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dee53412-4e59-431
3-bb21-1c638ff80131,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cebbd9bcd1a46bb674b25e582ffcff78db0498b863ea2dd9ab4f1211a7f60c7b,PodSandboxId:141c3e73e78c0c6785dd24457f28ff8b6c05346f4264a02f2ff9c3a88da697d2,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724090342142061497,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-499773,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e3e779cdbb3d9352c2919eb93558f5e,},Annotations:map[string
]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b63e3e21994af74fa7e9a484ca3fcabe5b950afc830348a99f19c0c40f4fd60f,PodSandboxId:a22f58b60780bbde43866fb700dd304b6a3e3131f30a9fdbd5082d3a0725c178,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724090342057759699,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-499773,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85a5d6f780124b09e9d2a8bc6ed8546e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9644334b65041e7a4cc7d15012ec3a463bcad8696d138c2daeecadf6fd2ac80,PodSandboxId:a686d45fac6caaf43b760f96dfa8f6c445d3b3fe0f94c4b1ef8fbe0860e12a6e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724090342062951056,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-499773,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9afb6fbe53b4198e2f38b9df3d29540,},Annotations:map[strin
g]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9172838165b29b3eaf8af4a1644c656ef08d8f11f470542ebda7a13d7da279f,PodSandboxId:3eb6b5132307bbff93791163abfc95833a93f906648fd41e3f8c382b0f935e42,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724090341904742504,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5rc55,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dee53412-4e59-4313-bb21-1c638ff80131,},Annotations:map[string]string{io.kubernetes.container.h
ash: 78ccb3c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:025daf068d0c59cc15c36bb2d734fbc328aa1aef71913f043a85a6a45068b922,PodSandboxId:f7327debbdff4d2e538dd7547a269661ea63004191f6dd452067cf499ef6180a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724090304613873034,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-92lgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43308261-71aa-457d-b067-31eba47b806a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.conta
iner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fadcbf7cbe5eedabf87425c2c06e99a1468f88a525cd21dd44694b7f9df6b03f,PodSandboxId:2fa7ed74db3e9083ad948ca47c7551c6380e1f0ee3a7186f78e008e59ec6f010,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724090300740086594,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sch
eduler-functional-499773,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9afb6fbe53b4198e2f38b9df3d29540,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:946377ac74c7157ca70cc7bbc43cf1465b2ef12db3ce87c1b4653f4dbe60fe79,PodSandboxId:4dc885e3a5d9a5be4bed3f4831e96da4f5867a41fee60fd8d98e7cbed838015b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724090300729044805,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube
-controller-manager-functional-499773,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85a5d6f780124b09e9d2a8bc6ed8546e,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed58d0c7be59c68ff19a2dc1d6c74242f73dde007ee2d97f9408c713f67f3338,PodSandboxId:22714fd2519f34c6b7c4235e99412d954afbd7b9a65c1c6dd350f7779947c1ca,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724090300705293020,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-499773,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e3e779cdbb3d9352c2919eb93558f5e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=09d554ee-a4db-46da-a09d-97905765fe5c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:00:15 functional-499773 crio[5118]: time="2024-08-19 18:00:15.260404333Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3b506a50-9b28-48f7-b048-93b00031f9fd name=/runtime.v1.RuntimeService/Version
	Aug 19 18:00:15 functional-499773 crio[5118]: time="2024-08-19 18:00:15.260500211Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3b506a50-9b28-48f7-b048-93b00031f9fd name=/runtime.v1.RuntimeService/Version
	Aug 19 18:00:15 functional-499773 crio[5118]: time="2024-08-19 18:00:15.262690610Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=70ca6154-c775-46ae-85ee-1c35147d2c08 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:00:15 functional-499773 crio[5118]: time="2024-08-19 18:00:15.263144395Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090415263122115,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156677,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=70ca6154-c775-46ae-85ee-1c35147d2c08 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:00:15 functional-499773 crio[5118]: time="2024-08-19 18:00:15.263671980Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f6f4754b-9b05-4ecb-abd4-ac8aeecf6f23 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:00:15 functional-499773 crio[5118]: time="2024-08-19 18:00:15.263744480Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f6f4754b-9b05-4ecb-abd4-ac8aeecf6f23 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:00:15 functional-499773 crio[5118]: time="2024-08-19 18:00:15.264050596Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f68033720a9433d5115cdf21b78946b310d73cf279b9a21b28e2b83835ad8760,PodSandboxId:7ddf5cdf45048f317af2953f2d5f49607505051287c42e0406fba00e17e34ec5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724090401987962870,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-499773,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b51ccc2eabf3dc8713ae36781dc82611,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ace4d30f302a645ef83900bc072f4d12707a818ef45b6d6afe5c48ebc4e47a49,PodSandboxId:c94c29ee0f461ca1ca74a9083a3e6c255371a2e1b9a4479caeb946b50e5446e0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724090389891874633,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 223744f1-1b75-4b9d-9955-b089f7da38e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e2a8a7b13f329fcfaeb9cf69295c28fc46b25d4614879fc6204b2479459fa4e,PodSandboxId:d2334c90f543e6963ba2a27eda81e1c4675f04073fb997e6c0fcede58ac47826,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724090347151368645,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-92lgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43308261-71aa-457d-b067-31eba47b806a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\"
:\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94c038cb09bf03ea5b2347760c68846208182a51d263b5f0091b42111c062e93,PodSandboxId:3eb6b5132307bbff93791163abfc95833a93f906648fd41e3f8c382b0f935e42,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:4,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724090347128158082,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5rc55,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dee53412-4e59-431
3-bb21-1c638ff80131,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cebbd9bcd1a46bb674b25e582ffcff78db0498b863ea2dd9ab4f1211a7f60c7b,PodSandboxId:141c3e73e78c0c6785dd24457f28ff8b6c05346f4264a02f2ff9c3a88da697d2,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724090342142061497,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-499773,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e3e779cdbb3d9352c2919eb93558f5e,},Annotations:map[string
]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b63e3e21994af74fa7e9a484ca3fcabe5b950afc830348a99f19c0c40f4fd60f,PodSandboxId:a22f58b60780bbde43866fb700dd304b6a3e3131f30a9fdbd5082d3a0725c178,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724090342057759699,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-499773,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85a5d6f780124b09e9d2a8bc6ed8546e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9644334b65041e7a4cc7d15012ec3a463bcad8696d138c2daeecadf6fd2ac80,PodSandboxId:a686d45fac6caaf43b760f96dfa8f6c445d3b3fe0f94c4b1ef8fbe0860e12a6e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724090342062951056,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-499773,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9afb6fbe53b4198e2f38b9df3d29540,},Annotations:map[strin
g]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9172838165b29b3eaf8af4a1644c656ef08d8f11f470542ebda7a13d7da279f,PodSandboxId:3eb6b5132307bbff93791163abfc95833a93f906648fd41e3f8c382b0f935e42,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724090341904742504,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5rc55,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dee53412-4e59-4313-bb21-1c638ff80131,},Annotations:map[string]string{io.kubernetes.container.h
ash: 78ccb3c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:025daf068d0c59cc15c36bb2d734fbc328aa1aef71913f043a85a6a45068b922,PodSandboxId:f7327debbdff4d2e538dd7547a269661ea63004191f6dd452067cf499ef6180a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724090304613873034,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-92lgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43308261-71aa-457d-b067-31eba47b806a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.conta
iner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fadcbf7cbe5eedabf87425c2c06e99a1468f88a525cd21dd44694b7f9df6b03f,PodSandboxId:2fa7ed74db3e9083ad948ca47c7551c6380e1f0ee3a7186f78e008e59ec6f010,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724090300740086594,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sch
eduler-functional-499773,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9afb6fbe53b4198e2f38b9df3d29540,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:946377ac74c7157ca70cc7bbc43cf1465b2ef12db3ce87c1b4653f4dbe60fe79,PodSandboxId:4dc885e3a5d9a5be4bed3f4831e96da4f5867a41fee60fd8d98e7cbed838015b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724090300729044805,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube
-controller-manager-functional-499773,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85a5d6f780124b09e9d2a8bc6ed8546e,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed58d0c7be59c68ff19a2dc1d6c74242f73dde007ee2d97f9408c713f67f3338,PodSandboxId:22714fd2519f34c6b7c4235e99412d954afbd7b9a65c1c6dd350f7779947c1ca,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724090300705293020,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-499773,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e3e779cdbb3d9352c2919eb93558f5e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f6f4754b-9b05-4ecb-abd4-ac8aeecf6f23 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:00:15 functional-499773 crio[5118]: time="2024-08-19 18:00:15.297262707Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=44a2771b-73fd-4262-964f-b272127ddd4c name=/runtime.v1.RuntimeService/Version
	Aug 19 18:00:15 functional-499773 crio[5118]: time="2024-08-19 18:00:15.297337106Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=44a2771b-73fd-4262-964f-b272127ddd4c name=/runtime.v1.RuntimeService/Version
	Aug 19 18:00:15 functional-499773 crio[5118]: time="2024-08-19 18:00:15.298569163Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f22e1c70-e28e-4156-84fd-cd309925382a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:00:15 functional-499773 crio[5118]: time="2024-08-19 18:00:15.299064249Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090415299038455,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156677,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f22e1c70-e28e-4156-84fd-cd309925382a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:00:15 functional-499773 crio[5118]: time="2024-08-19 18:00:15.299588673Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3a919898-1f46-406d-ab16-287dfe7d553a name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:00:15 functional-499773 crio[5118]: time="2024-08-19 18:00:15.299661553Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3a919898-1f46-406d-ab16-287dfe7d553a name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:00:15 functional-499773 crio[5118]: time="2024-08-19 18:00:15.299943209Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f68033720a9433d5115cdf21b78946b310d73cf279b9a21b28e2b83835ad8760,PodSandboxId:7ddf5cdf45048f317af2953f2d5f49607505051287c42e0406fba00e17e34ec5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724090401987962870,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-499773,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b51ccc2eabf3dc8713ae36781dc82611,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ace4d30f302a645ef83900bc072f4d12707a818ef45b6d6afe5c48ebc4e47a49,PodSandboxId:c94c29ee0f461ca1ca74a9083a3e6c255371a2e1b9a4479caeb946b50e5446e0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724090389891874633,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 223744f1-1b75-4b9d-9955-b089f7da38e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e2a8a7b13f329fcfaeb9cf69295c28fc46b25d4614879fc6204b2479459fa4e,PodSandboxId:d2334c90f543e6963ba2a27eda81e1c4675f04073fb997e6c0fcede58ac47826,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724090347151368645,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-92lgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43308261-71aa-457d-b067-31eba47b806a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\"
:\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94c038cb09bf03ea5b2347760c68846208182a51d263b5f0091b42111c062e93,PodSandboxId:3eb6b5132307bbff93791163abfc95833a93f906648fd41e3f8c382b0f935e42,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:4,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724090347128158082,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5rc55,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dee53412-4e59-431
3-bb21-1c638ff80131,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cebbd9bcd1a46bb674b25e582ffcff78db0498b863ea2dd9ab4f1211a7f60c7b,PodSandboxId:141c3e73e78c0c6785dd24457f28ff8b6c05346f4264a02f2ff9c3a88da697d2,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724090342142061497,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-499773,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e3e779cdbb3d9352c2919eb93558f5e,},Annotations:map[string
]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b63e3e21994af74fa7e9a484ca3fcabe5b950afc830348a99f19c0c40f4fd60f,PodSandboxId:a22f58b60780bbde43866fb700dd304b6a3e3131f30a9fdbd5082d3a0725c178,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724090342057759699,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-499773,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85a5d6f780124b09e9d2a8bc6ed8546e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9644334b65041e7a4cc7d15012ec3a463bcad8696d138c2daeecadf6fd2ac80,PodSandboxId:a686d45fac6caaf43b760f96dfa8f6c445d3b3fe0f94c4b1ef8fbe0860e12a6e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724090342062951056,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-499773,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9afb6fbe53b4198e2f38b9df3d29540,},Annotations:map[strin
g]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9172838165b29b3eaf8af4a1644c656ef08d8f11f470542ebda7a13d7da279f,PodSandboxId:3eb6b5132307bbff93791163abfc95833a93f906648fd41e3f8c382b0f935e42,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724090341904742504,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5rc55,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dee53412-4e59-4313-bb21-1c638ff80131,},Annotations:map[string]string{io.kubernetes.container.h
ash: 78ccb3c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:025daf068d0c59cc15c36bb2d734fbc328aa1aef71913f043a85a6a45068b922,PodSandboxId:f7327debbdff4d2e538dd7547a269661ea63004191f6dd452067cf499ef6180a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724090304613873034,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-92lgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43308261-71aa-457d-b067-31eba47b806a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.conta
iner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fadcbf7cbe5eedabf87425c2c06e99a1468f88a525cd21dd44694b7f9df6b03f,PodSandboxId:2fa7ed74db3e9083ad948ca47c7551c6380e1f0ee3a7186f78e008e59ec6f010,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724090300740086594,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sch
eduler-functional-499773,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9afb6fbe53b4198e2f38b9df3d29540,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:946377ac74c7157ca70cc7bbc43cf1465b2ef12db3ce87c1b4653f4dbe60fe79,PodSandboxId:4dc885e3a5d9a5be4bed3f4831e96da4f5867a41fee60fd8d98e7cbed838015b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724090300729044805,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube
-controller-manager-functional-499773,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85a5d6f780124b09e9d2a8bc6ed8546e,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed58d0c7be59c68ff19a2dc1d6c74242f73dde007ee2d97f9408c713f67f3338,PodSandboxId:22714fd2519f34c6b7c4235e99412d954afbd7b9a65c1c6dd350f7779947c1ca,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724090300705293020,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-499773,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e3e779cdbb3d9352c2919eb93558f5e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3a919898-1f46-406d-ab16-287dfe7d553a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	f68033720a943       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   13 seconds ago       Running             kube-apiserver            0                   7ddf5cdf45048       kube-apiserver-functional-499773
	ace4d30f302a6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   25 seconds ago       Exited              storage-provisioner       5                   c94c29ee0f461       storage-provisioner
	9e2a8a7b13f32       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Running             coredns                   2                   d2334c90f543e       coredns-6f6b679f8f-92lgl
	94c038cb09bf0       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   About a minute ago   Running             kube-proxy                4                   3eb6b5132307b       kube-proxy-5rc55
	cebbd9bcd1a46       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   About a minute ago   Running             etcd                      3                   141c3e73e78c0       etcd-functional-499773
	e9644334b6504       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   About a minute ago   Running             kube-scheduler            3                   a686d45fac6ca       kube-scheduler-functional-499773
	b63e3e21994af       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   About a minute ago   Running             kube-controller-manager   3                   a22f58b60780b       kube-controller-manager-functional-499773
	b9172838165b2       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   About a minute ago   Exited              kube-proxy                3                   3eb6b5132307b       kube-proxy-5rc55
	025daf068d0c5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   1                   f7327debbdff4       coredns-6f6b679f8f-92lgl
	fadcbf7cbe5ee       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   About a minute ago   Exited              kube-scheduler            2                   2fa7ed74db3e9       kube-scheduler-functional-499773
	946377ac74c71       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   About a minute ago   Exited              kube-controller-manager   2                   4dc885e3a5d9a       kube-controller-manager-functional-499773
	ed58d0c7be59c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   About a minute ago   Exited              etcd                      2                   22714fd2519f3       etcd-functional-499773
	
	
	==> coredns [025daf068d0c59cc15c36bb2d734fbc328aa1aef71913f043a85a6a45068b922] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:51902 - 58067 "HINFO IN 8779538635133992944.1560737439343827479. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022137072s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9e2a8a7b13f329fcfaeb9cf69295c28fc46b25d4614879fc6204b2479459fa4e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:52360 - 13011 "HINFO IN 3419039341048315881.8541156706145263593. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015633794s
	
	
	==> describe nodes <==
	Name:               functional-499773
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-499773
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411
	                    minikube.k8s.io/name=functional-499773
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T17_57_20_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 17:57:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-499773
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 18:00:07 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 19 Aug 2024 17:59:06 +0000   Mon, 19 Aug 2024 18:00:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 19 Aug 2024 17:59:06 +0000   Mon, 19 Aug 2024 18:00:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 19 Aug 2024 17:59:06 +0000   Mon, 19 Aug 2024 18:00:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 19 Aug 2024 17:59:06 +0000   Mon, 19 Aug 2024 18:00:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.36
	  Hostname:    functional-499773
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 fa9201ca6267441a8cb5919b5d3d2c17
	  System UUID:                fa9201ca-6267-441a-8cb5-919b5d3d2c17
	  Boot ID:                    e671503f-e081-4539-b42c-a3d469917eda
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-92lgl                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m51s
	  kube-system                 etcd-functional-499773                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m56s
	  kube-system                 kube-apiserver-functional-499773             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 kube-controller-manager-functional-499773    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m56s
	  kube-system                 kube-proxy-5rc55                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m51s
	  kube-system                 kube-scheduler-functional-499773             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m56s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m48s                kube-proxy       
	  Normal  Starting                 68s                  kube-proxy       
	  Normal  Starting                 110s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  2m56s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m56s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m56s                kubelet          Node functional-499773 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m56s                kubelet          Node functional-499773 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m56s                kubelet          Node functional-499773 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m55s                kubelet          Node functional-499773 status is now: NodeReady
	  Normal  RegisteredNode           2m51s                node-controller  Node functional-499773 event: Registered Node functional-499773 in Controller
	  Normal  NodeHasSufficientMemory  115s (x8 over 115s)  kubelet          Node functional-499773 status is now: NodeHasSufficientMemory
	  Normal  Starting                 115s                 kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    115s (x8 over 115s)  kubelet          Node functional-499773 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s (x7 over 115s)  kubelet          Node functional-499773 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  115s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           109s                 node-controller  Node functional-499773 event: Registered Node functional-499773 in Controller
	  Normal  Starting                 71s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  71s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  70s (x8 over 71s)    kubelet          Node functional-499773 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    70s (x8 over 71s)    kubelet          Node functional-499773 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     70s (x7 over 71s)    kubelet          Node functional-499773 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           66s                  node-controller  Node functional-499773 event: Registered Node functional-499773 in Controller
	  Normal  NodeNotReady             11s                  node-controller  Node functional-499773 status is now: NodeNotReady
	
	
	==> dmesg <==
	[ +11.312443] kauditd_printk_skb: 107 callbacks suppressed
	[Aug19 17:58] systemd-fstab-generator[2372]: Ignoring "noauto" option for root device
	[  +0.143349] systemd-fstab-generator[2384]: Ignoring "noauto" option for root device
	[  +0.203433] systemd-fstab-generator[2404]: Ignoring "noauto" option for root device
	[  +0.353047] systemd-fstab-generator[2559]: Ignoring "noauto" option for root device
	[  +0.883155] systemd-fstab-generator[2951]: Ignoring "noauto" option for root device
	[  +1.762987] systemd-fstab-generator[3564]: Ignoring "noauto" option for root device
	[  +1.937514] systemd-fstab-generator[3685]: Ignoring "noauto" option for root device
	[  +0.076568] kauditd_printk_skb: 254 callbacks suppressed
	[  +5.057777] kauditd_printk_skb: 54 callbacks suppressed
	[ +12.598609] systemd-fstab-generator[4177]: Ignoring "noauto" option for root device
	[ +18.160343] systemd-fstab-generator[5035]: Ignoring "noauto" option for root device
	[  +0.074559] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.058775] systemd-fstab-generator[5047]: Ignoring "noauto" option for root device
	[  +0.160094] systemd-fstab-generator[5061]: Ignoring "noauto" option for root device
	[  +0.132808] systemd-fstab-generator[5073]: Ignoring "noauto" option for root device
	[  +0.272652] systemd-fstab-generator[5101]: Ignoring "noauto" option for root device
	[Aug19 17:59] systemd-fstab-generator[5229]: Ignoring "noauto" option for root device
	[  +0.078908] kauditd_printk_skb: 100 callbacks suppressed
	[  +3.410335] systemd-fstab-generator[5966]: Ignoring "noauto" option for root device
	[  +2.752838] kauditd_printk_skb: 130 callbacks suppressed
	[  +6.088435] kauditd_printk_skb: 9 callbacks suppressed
	[  +9.211210] systemd-fstab-generator[6316]: Ignoring "noauto" option for root device
	[ +14.730377] kauditd_printk_skb: 14 callbacks suppressed
	[Aug19 18:00] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [cebbd9bcd1a46bb674b25e582ffcff78db0498b863ea2dd9ab4f1211a7f60c7b] <==
	{"level":"info","ts":"2024-08-19T17:59:03.142456Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"4bc1bccd4ea9d8cb","local-member-id":"74e924d55c832457","added-peer-id":"74e924d55c832457","added-peer-peer-urls":["https://192.168.39.36:2380"]}
	{"level":"info","ts":"2024-08-19T17:59:03.143037Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"4bc1bccd4ea9d8cb","local-member-id":"74e924d55c832457","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T17:59:03.143383Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T17:59:03.160285Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T17:59:03.161728Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-19T17:59:03.161951Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"74e924d55c832457","initial-advertise-peer-urls":["https://192.168.39.36:2380"],"listen-peer-urls":["https://192.168.39.36:2380"],"advertise-client-urls":["https://192.168.39.36:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.36:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-19T17:59:03.161992Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-19T17:59:03.162065Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.36:2380"}
	{"level":"info","ts":"2024-08-19T17:59:03.162089Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.36:2380"}
	{"level":"info","ts":"2024-08-19T17:59:04.253324Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"74e924d55c832457 is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-19T17:59:04.253437Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"74e924d55c832457 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-19T17:59:04.253481Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"74e924d55c832457 received MsgPreVoteResp from 74e924d55c832457 at term 3"}
	{"level":"info","ts":"2024-08-19T17:59:04.253518Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"74e924d55c832457 became candidate at term 4"}
	{"level":"info","ts":"2024-08-19T17:59:04.253542Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"74e924d55c832457 received MsgVoteResp from 74e924d55c832457 at term 4"}
	{"level":"info","ts":"2024-08-19T17:59:04.253570Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"74e924d55c832457 became leader at term 4"}
	{"level":"info","ts":"2024-08-19T17:59:04.253596Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 74e924d55c832457 elected leader 74e924d55c832457 at term 4"}
	{"level":"info","ts":"2024-08-19T17:59:04.258456Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"74e924d55c832457","local-member-attributes":"{Name:functional-499773 ClientURLs:[https://192.168.39.36:2379]}","request-path":"/0/members/74e924d55c832457/attributes","cluster-id":"4bc1bccd4ea9d8cb","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-19T17:59:04.258700Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T17:59:04.258989Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T17:59:04.259131Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T17:59:04.259179Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T17:59:04.261816Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T17:59:04.261851Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T17:59:04.262705Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T17:59:04.264285Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.36:2379"}
	
	
	==> etcd [ed58d0c7be59c68ff19a2dc1d6c74242f73dde007ee2d97f9408c713f67f3338] <==
	{"level":"info","ts":"2024-08-19T17:58:22.352009Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"74e924d55c832457 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-19T17:58:22.352065Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"74e924d55c832457 received MsgPreVoteResp from 74e924d55c832457 at term 2"}
	{"level":"info","ts":"2024-08-19T17:58:22.352101Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"74e924d55c832457 became candidate at term 3"}
	{"level":"info","ts":"2024-08-19T17:58:22.352135Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"74e924d55c832457 received MsgVoteResp from 74e924d55c832457 at term 3"}
	{"level":"info","ts":"2024-08-19T17:58:22.352173Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"74e924d55c832457 became leader at term 3"}
	{"level":"info","ts":"2024-08-19T17:58:22.352281Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 74e924d55c832457 elected leader 74e924d55c832457 at term 3"}
	{"level":"info","ts":"2024-08-19T17:58:22.357261Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"74e924d55c832457","local-member-attributes":"{Name:functional-499773 ClientURLs:[https://192.168.39.36:2379]}","request-path":"/0/members/74e924d55c832457/attributes","cluster-id":"4bc1bccd4ea9d8cb","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-19T17:58:22.357295Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T17:58:22.357544Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T17:58:22.357579Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T17:58:22.357355Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T17:58:22.358473Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T17:58:22.358493Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T17:58:22.359338Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.36:2379"}
	{"level":"info","ts":"2024-08-19T17:58:22.359963Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T17:58:49.791909Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-19T17:58:49.791984Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-499773","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.36:2380"],"advertise-client-urls":["https://192.168.39.36:2379"]}
	{"level":"warn","ts":"2024-08-19T17:58:49.792045Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T17:58:49.792125Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T17:58:49.831991Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.36:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T17:58:49.832044Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.36:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-19T17:58:49.832088Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"74e924d55c832457","current-leader-member-id":"74e924d55c832457"}
	{"level":"info","ts":"2024-08-19T17:58:49.839680Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.36:2380"}
	{"level":"info","ts":"2024-08-19T17:58:49.839783Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.36:2380"}
	{"level":"info","ts":"2024-08-19T17:58:49.839793Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-499773","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.36:2380"],"advertise-client-urls":["https://192.168.39.36:2379"]}
	
	
	==> kernel <==
	 18:00:15 up 3 min,  0 users,  load average: 0.78, 0.89, 0.39
	Linux functional-499773 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f68033720a9433d5115cdf21b78946b310d73cf279b9a21b28e2b83835ad8760] <==
	I0819 18:00:04.237442       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0819 18:00:04.271780       1 shared_informer.go:320] Caches are synced for configmaps
	I0819 18:00:04.273859       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0819 18:00:04.324283       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 18:00:04.324317       1 policy_source.go:224] refreshing policies
	I0819 18:00:04.334444       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0819 18:00:04.337385       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0819 18:00:04.337534       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0819 18:00:04.337778       1 controller.go:142] Starting OpenAPI controller
	I0819 18:00:04.341514       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0819 18:00:04.342632       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0819 18:00:04.342830       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0819 18:00:04.342900       1 aggregator.go:171] initial CRD sync complete...
	I0819 18:00:04.342922       1 autoregister_controller.go:144] Starting autoregister controller
	I0819 18:00:04.342927       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0819 18:00:04.342931       1 cache.go:39] Caches are synced for autoregister controller
	I0819 18:00:04.349584       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0819 18:00:04.372275       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0819 18:00:04.372760       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0819 18:00:04.372825       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0819 18:00:04.413387       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0819 18:00:05.176603       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0819 18:00:05.554845       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.36]
	I0819 18:00:05.556041       1 controller.go:615] quota admission added evaluator for: endpoints
	I0819 18:00:05.561056       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [946377ac74c7157ca70cc7bbc43cf1465b2ef12db3ce87c1b4653f4dbe60fe79] <==
	I0819 17:58:26.958655       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0819 17:58:26.959302       1 shared_informer.go:320] Caches are synced for ephemeral
	I0819 17:58:26.961799       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0819 17:58:26.961898       1 shared_informer.go:320] Caches are synced for namespace
	I0819 17:58:26.965795       1 shared_informer.go:320] Caches are synced for taint
	I0819 17:58:26.966160       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0819 17:58:26.966975       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-499773"
	I0819 17:58:26.967095       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0819 17:58:26.969066       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0819 17:58:26.970655       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0819 17:58:26.971969       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0819 17:58:26.974543       1 shared_informer.go:320] Caches are synced for job
	I0819 17:58:27.036377       1 shared_informer.go:320] Caches are synced for stateful set
	I0819 17:58:27.063452       1 shared_informer.go:320] Caches are synced for deployment
	I0819 17:58:27.084567       1 shared_informer.go:320] Caches are synced for disruption
	I0819 17:58:27.157989       1 shared_informer.go:320] Caches are synced for persistent volume
	I0819 17:58:27.166607       1 shared_informer.go:320] Caches are synced for resource quota
	I0819 17:58:27.167916       1 shared_informer.go:320] Caches are synced for resource quota
	I0819 17:58:27.221845       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="263.098768ms"
	I0819 17:58:27.222275       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="153.224µs"
	I0819 17:58:27.603470       1 shared_informer.go:320] Caches are synced for garbage collector
	I0819 17:58:27.658951       1 shared_informer.go:320] Caches are synced for garbage collector
	I0819 17:58:27.659033       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0819 17:58:31.772497       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="11.620634ms"
	I0819 17:58:31.773568       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="36.223µs"
	
	
	==> kube-controller-manager [b63e3e21994af74fa7e9a484ca3fcabe5b950afc830348a99f19c0c40f4fd60f] <==
	E0819 18:00:04.209872       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.NetworkPolicy: unknown (get networkpolicies.networking.k8s.io)" logger="UnhandledError"
	E0819 18:00:04.209891       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RoleBinding: unknown (get rolebindings.rbac.authorization.k8s.io)" logger="UnhandledError"
	E0819 18:00:04.209944       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0819 18:00:04.209981       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ValidatingWebhookConfiguration: unknown (get validatingwebhookconfigurations.admissionregistration.k8s.io)" logger="UnhandledError"
	E0819 18:00:04.209999       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0819 18:00:04.210034       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Deployment: unknown (get deployments.apps)" logger="UnhandledError"
	E0819 18:00:04.210050       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.IngressClass: unknown (get ingressclasses.networking.k8s.io)" logger="UnhandledError"
	E0819 18:00:04.328680       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ValidatingAdmissionPolicy: unknown (get validatingadmissionpolicies.admissionregistration.k8s.io)" logger="UnhandledError"
	I0819 18:00:04.804692       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0819 18:00:04.865049       1 controller_utils.go:151] "Failed to update status for pod" logger="node-lifecycle-controller" pod="kube-system/storage-provisioner" err="Operation cannot be fulfilled on pods \"storage-provisioner\": the object has been modified; please apply your changes to the latest version and try again"
	I0819 18:00:04.892661       1 controller_utils.go:151] "Failed to update status for pod" logger="node-lifecycle-controller" pod="kube-system/kube-apiserver-functional-499773" err="Operation cannot be fulfilled on pods \"kube-apiserver-functional-499773\": StorageError: invalid object, Code: 4, Key: /registry/pods/kube-system/kube-apiserver-functional-499773, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: f52aff8b-9557-432a-bd51-8a78b22056ac, UID in object meta: 73c7136a-f523-4558-ae8e-4594af2ce9a0"
	E0819 18:00:04.892825       1 node_lifecycle_controller.go:758] "Unhandled Error" err="unable to mark all pods NotReady on node functional-499773: [Operation cannot be fulfilled on pods \"storage-provisioner\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on pods \"kube-apiserver-functional-499773\": StorageError: invalid object, Code: 4, Key: /registry/pods/kube-system/kube-apiserver-functional-499773, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: f52aff8b-9557-432a-bd51-8a78b22056ac, UID in object meta: 73c7136a-f523-4558-ae8e-4594af2ce9a0]; queuing for retry" logger="UnhandledError"
	I0819 18:00:04.894339       1 node_lifecycle_controller.go:1036] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	E0819 18:00:09.900547       1 node_lifecycle_controller.go:978] "Error updating node" err="Operation cannot be fulfilled on nodes \"functional-499773\": the object has been modified; please apply your changes to the latest version and try again" logger="node-lifecycle-controller" node="functional-499773"
	I0819 18:00:09.927686       1 controller_utils.go:151] "Failed to update status for pod" logger="node-lifecycle-controller" pod="kube-system/storage-provisioner" err="Operation cannot be fulfilled on pods \"storage-provisioner\": the object has been modified; please apply your changes to the latest version and try again"
	I0819 18:00:09.933895       1 controller_utils.go:151] "Failed to update status for pod" logger="node-lifecycle-controller" pod="kube-system/coredns-6f6b679f8f-92lgl" err="Operation cannot be fulfilled on pods \"coredns-6f6b679f8f-92lgl\": the object has been modified; please apply your changes to the latest version and try again"
	I0819 18:00:09.938530       1 controller_utils.go:151] "Failed to update status for pod" logger="node-lifecycle-controller" pod="kube-system/etcd-functional-499773" err="Operation cannot be fulfilled on pods \"etcd-functional-499773\": the object has been modified; please apply your changes to the latest version and try again"
	I0819 18:00:09.942249       1 controller_utils.go:151] "Failed to update status for pod" logger="node-lifecycle-controller" pod="kube-system/kube-apiserver-functional-499773" err="Operation cannot be fulfilled on pods \"kube-apiserver-functional-499773\": StorageError: invalid object, Code: 4, Key: /registry/pods/kube-system/kube-apiserver-functional-499773, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: f52aff8b-9557-432a-bd51-8a78b22056ac, UID in object meta: 73c7136a-f523-4558-ae8e-4594af2ce9a0"
	I0819 18:00:09.947829       1 controller_utils.go:151] "Failed to update status for pod" logger="node-lifecycle-controller" pod="kube-system/kube-controller-manager-functional-499773" err="Operation cannot be fulfilled on pods \"kube-controller-manager-functional-499773\": the object has been modified; please apply your changes to the latest version and try again"
	I0819 18:00:09.952877       1 controller_utils.go:151] "Failed to update status for pod" logger="node-lifecycle-controller" pod="kube-system/kube-proxy-5rc55" err="Operation cannot be fulfilled on pods \"kube-proxy-5rc55\": the object has been modified; please apply your changes to the latest version and try again"
	I0819 18:00:09.956680       1 controller_utils.go:151] "Failed to update status for pod" logger="node-lifecycle-controller" pod="kube-system/kube-scheduler-functional-499773" err="Operation cannot be fulfilled on pods \"kube-scheduler-functional-499773\": the object has been modified; please apply your changes to the latest version and try again"
	E0819 18:00:09.956754       1 node_lifecycle_controller.go:758] "Unhandled Error" err="unable to mark all pods NotReady on node functional-499773: [Operation cannot be fulfilled on pods \"storage-provisioner\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on pods \"coredns-6f6b679f8f-92lgl\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on pods \"etcd-functional-499773\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on pods \"kube-apiserver-functional-499773\": StorageError: invalid object, Code: 4, Key: /registry/pods/kube-system/kube-apiserver-functional-499773, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: f52aff8b-9557-432a-bd51-8a78b22056ac, UID in object meta: 73c7136a-f523-4558-ae8e-4594af2ce9a0, Operation cannot be fulfilled on pods \"kube-controller-manager-functional-499773\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on pods \"kube-proxy-5rc55\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on pods \"kube-scheduler-functional-499773\": the object has been modified; please apply your changes to the latest version and try again]; queuing for retry" logger="UnhandledError"
	I0819 18:00:11.744612       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="16.521374ms"
	I0819 18:00:14.925272       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="103.911µs"
	I0819 18:00:14.961280       1 node_lifecycle_controller.go:1055] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [94c038cb09bf03ea5b2347760c68846208182a51d263b5f0091b42111c062e93] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 17:59:07.489327       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 17:59:07.498380       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.36"]
	E0819 17:59:07.498986       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 17:59:07.560313       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 17:59:07.560341       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 17:59:07.560364       1 server_linux.go:169] "Using iptables Proxier"
	I0819 17:59:07.562996       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 17:59:07.563437       1 server.go:483] "Version info" version="v1.31.0"
	I0819 17:59:07.563515       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 17:59:07.564674       1 config.go:197] "Starting service config controller"
	I0819 17:59:07.564766       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 17:59:07.564875       1 config.go:104] "Starting endpoint slice config controller"
	I0819 17:59:07.564933       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 17:59:07.566851       1 config.go:326] "Starting node config controller"
	I0819 17:59:07.566893       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 17:59:07.665804       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 17:59:07.665934       1 shared_informer.go:320] Caches are synced for service config
	I0819 17:59:07.667538       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [b9172838165b29b3eaf8af4a1644c656ef08d8f11f470542ebda7a13d7da279f] <==
	I0819 17:59:02.402373       1 server_linux.go:66] "Using iptables proxy"
	E0819 17:59:02.496310       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	
	
	==> kube-scheduler [e9644334b65041e7a4cc7d15012ec3a463bcad8696d138c2daeecadf6fd2ac80] <==
	I0819 17:59:03.655536       1 serving.go:386] Generated self-signed cert in-memory
	W0819 17:59:06.318760       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0819 17:59:06.318803       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0819 17:59:06.318814       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0819 17:59:06.318821       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0819 17:59:06.348674       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0819 17:59:06.348756       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 17:59:06.350699       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0819 17:59:06.350746       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 17:59:06.350977       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0819 17:59:06.351051       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0819 17:59:06.451252       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0819 18:00:04.219137       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0819 18:00:04.219417       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0819 18:00:04.219604       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0819 18:00:04.219766       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0819 18:00:04.219951       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0819 18:00:04.220081       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0819 18:00:04.220443       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0819 18:00:04.220738       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0819 18:00:04.220861       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0819 18:00:04.227159       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	
	
	==> kube-scheduler [fadcbf7cbe5eedabf87425c2c06e99a1468f88a525cd21dd44694b7f9df6b03f] <==
	I0819 17:58:21.800110       1 serving.go:386] Generated self-signed cert in-memory
	W0819 17:58:23.601102       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0819 17:58:23.601179       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0819 17:58:23.601247       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0819 17:58:23.601260       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0819 17:58:23.683861       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0819 17:58:23.684067       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 17:58:23.686674       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0819 17:58:23.686912       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 17:58:23.687502       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0819 17:58:23.687905       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0819 17:58:23.787302       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0819 17:58:49.802865       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 19 17:59:55 functional-499773 kubelet[5973]: I0819 17:59:55.381936    5973 status_manager.go:851] "Failed to get status for pod" podUID="223744f1-1b75-4b9d-9955-b089f7da38e1" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.39.36:8441: connect: connection refused"
	Aug 19 17:59:55 functional-499773 kubelet[5973]: E0819 17:59:55.382495    5973 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83c579563bb0e50181fe4bc6267c22c7f54afbb5c9e0e68b9daec35ea4bd7792\": container with ID starting with 83c579563bb0e50181fe4bc6267c22c7f54afbb5c9e0e68b9daec35ea4bd7792 not found: ID does not exist" containerID="83c579563bb0e50181fe4bc6267c22c7f54afbb5c9e0e68b9daec35ea4bd7792"
	Aug 19 17:59:55 functional-499773 kubelet[5973]: I0819 17:59:55.382544    5973 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83c579563bb0e50181fe4bc6267c22c7f54afbb5c9e0e68b9daec35ea4bd7792"} err="failed to get container status \"83c579563bb0e50181fe4bc6267c22c7f54afbb5c9e0e68b9daec35ea4bd7792\": rpc error: code = NotFound desc = could not find container \"83c579563bb0e50181fe4bc6267c22c7f54afbb5c9e0e68b9daec35ea4bd7792\": container with ID starting with 83c579563bb0e50181fe4bc6267c22c7f54afbb5c9e0e68b9daec35ea4bd7792 not found: ID does not exist"
	Aug 19 17:59:56 functional-499773 kubelet[5973]: I0819 17:59:56.888874    5973 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd0a0a40a66f8b78846d847f98197f36" path="/var/lib/kubelet/pods/bd0a0a40a66f8b78846d847f98197f36/volumes"
	Aug 19 17:59:58 functional-499773 kubelet[5973]: E0819 17:59:58.648242    5973 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events\": dial tcp 192.168.39.36:8441: connect: connection refused" event="&Event{ObjectMeta:{storage-provisioner.17ed33104049258c  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:storage-provisioner,UID:223744f1-1b75-4b9d-9955-b089f7da38e1,APIVersion:v1,ResourceVersion:450,FieldPath:spec.containers{storage-provisioner},},Reason:BackOff,Message:Back-off restarting failed container storage-provisioner in pod storage-provisioner_kube-system(223744f1-1b75-4b9d-9955-b089f7da38e1),Source:EventSource{Component:kubelet,Host:functional-499773,},FirstTimestamp:2024-08-19 17:59:37.243796876 +0000 UTC m=+32.602279436,LastTimestamp:2024-08-19 17:59:37.243796876 +0000 UTC m=+32.602279436,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-499773,}"
	Aug 19 18:00:00 functional-499773 kubelet[5973]: E0819 18:00:00.291529    5973 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-499773?timeout=10s\": dial tcp 192.168.39.36:8441: connect: connection refused" interval="7s"
	Aug 19 18:00:01 functional-499773 kubelet[5973]: I0819 18:00:01.882669    5973 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-499773" podUID="f52aff8b-9557-432a-bd51-8a78b22056ac"
	Aug 19 18:00:01 functional-499773 kubelet[5973]: I0819 18:00:01.883595    5973 status_manager.go:851] "Failed to get status for pod" podUID="223744f1-1b75-4b9d-9955-b089f7da38e1" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.39.36:8441: connect: connection refused"
	Aug 19 18:00:01 functional-499773 kubelet[5973]: E0819 18:00:01.883732    5973 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-499773\": dial tcp 192.168.39.36:8441: connect: connection refused" pod="kube-system/kube-apiserver-functional-499773"
	Aug 19 18:00:02 functional-499773 kubelet[5973]: I0819 18:00:02.378967    5973 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-499773" podUID="f52aff8b-9557-432a-bd51-8a78b22056ac"
	Aug 19 18:00:04 functional-499773 kubelet[5973]: E0819 18:00:04.255382    5973 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	Aug 19 18:00:04 functional-499773 kubelet[5973]: I0819 18:00:04.383752    5973 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-functional-499773"
	Aug 19 18:00:04 functional-499773 kubelet[5973]: I0819 18:00:04.885543    5973 scope.go:117] "RemoveContainer" containerID="ace4d30f302a645ef83900bc072f4d12707a818ef45b6d6afe5c48ebc4e47a49"
	Aug 19 18:00:04 functional-499773 kubelet[5973]: E0819 18:00:04.885660    5973 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(223744f1-1b75-4b9d-9955-b089f7da38e1)\"" pod="kube-system/storage-provisioner" podUID="223744f1-1b75-4b9d-9955-b089f7da38e1"
	Aug 19 18:00:04 functional-499773 kubelet[5973]: E0819 18:00:04.995080    5973 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 18:00:04 functional-499773 kubelet[5973]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 18:00:04 functional-499773 kubelet[5973]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 18:00:04 functional-499773 kubelet[5973]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 18:00:04 functional-499773 kubelet[5973]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 18:00:05 functional-499773 kubelet[5973]: E0819 18:00:05.013823    5973 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090405013353347,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156677,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:00:05 functional-499773 kubelet[5973]: E0819 18:00:05.013851    5973 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090405013353347,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156677,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:00:05 functional-499773 kubelet[5973]: I0819 18:00:05.391498    5973 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-499773" podUID="f52aff8b-9557-432a-bd51-8a78b22056ac"
	Aug 19 18:00:14 functional-499773 kubelet[5973]: I0819 18:00:14.905888    5973 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-functional-499773" podStartSLOduration=10.905871149 podStartE2EDuration="10.905871149s" podCreationTimestamp="2024-08-19 18:00:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-19 18:00:13.415818313 +0000 UTC m=+68.774300881" watchObservedRunningTime="2024-08-19 18:00:14.905871149 +0000 UTC m=+70.264353716"
	Aug 19 18:00:15 functional-499773 kubelet[5973]: E0819 18:00:15.015615    5973 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090415015316159,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156677,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:00:15 functional-499773 kubelet[5973]: E0819 18:00:15.015646    5973 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090415015316159,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156677,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [ace4d30f302a645ef83900bc072f4d12707a818ef45b6d6afe5c48ebc4e47a49] <==
	I0819 17:59:49.980453       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0819 17:59:49.982378       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 18:00:14.767139  387684 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19468-372744/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
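Note: the "bufio.Scanner: token too long" error in the stderr above happens when a scanned line exceeds Go's default 64 KiB Scanner token limit. A minimal sketch (not minikube's actual code) of reading such a file with an enlarged Scanner buffer; the file path is copied from the error message and is only illustrative:

// scanlong.go - sketch, assuming the lastStart.txt path from the error above.
package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	f, err := os.Open("/home/jenkins/minikube-integration/19468-372744/.minikube/logs/lastStart.txt")
	if err != nil {
		fmt.Println("open:", err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Allow tokens up to 10 MiB instead of the 64 KiB default,
	// so very long log lines no longer trigger "token too long".
	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
	for sc.Scan() {
		_ = sc.Text() // process the line
	}
	if err := sc.Err(); err != nil {
		fmt.Println("scan:", err)
	}
}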
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-499773 -n functional-499773
helpers_test.go:261: (dbg) Run:  kubectl --context functional-499773 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/ComponentHealth FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/ComponentHealth (2.09s)
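Note: the post-mortem helpers above shell out to kubectl against the functional-499773 context. A minimal sketch, in the same spirit as the harness's "(dbg) Run:" steps, of reproducing the non-Running-pod check from Go via os/exec; it assumes kubectl and the functional-499773 kubeconfig context exist locally and is not part of the original test output:

// postmortem.go - sketch of the helpers_test.go:261 check.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// List pods in any namespace whose phase is not Running.
	out, err := exec.Command("kubectl", "--context", "functional-499773",
		"get", "po", "-A", "--field-selector=status.phase!=Running",
		"-o", "jsonpath={.items[*].metadata.name}").CombinedOutput()
	fmt.Printf("err=%v\n%s\n", err, out)
}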

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (142.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 node stop m02 -v=7 --alsologtostderr
E0819 18:06:05.343469  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/functional-499773/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:06:46.305506  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/functional-499773/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:07:10.114373  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-086149 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.48396681s)

                                                
                                                
-- stdout --
	* Stopping node "ha-086149-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 18:05:57.071941  394837 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:05:57.072063  394837 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:05:57.072072  394837 out.go:358] Setting ErrFile to fd 2...
	I0819 18:05:57.072077  394837 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:05:57.072292  394837 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-372744/.minikube/bin
	I0819 18:05:57.072565  394837 mustload.go:65] Loading cluster: ha-086149
	I0819 18:05:57.072930  394837 config.go:182] Loaded profile config "ha-086149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:05:57.072946  394837 stop.go:39] StopHost: ha-086149-m02
	I0819 18:05:57.073325  394837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:05:57.073371  394837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:05:57.089060  394837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37619
	I0819 18:05:57.089711  394837 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:05:57.090365  394837 main.go:141] libmachine: Using API Version  1
	I0819 18:05:57.090390  394837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:05:57.090729  394837 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:05:57.092902  394837 out.go:177] * Stopping node "ha-086149-m02"  ...
	I0819 18:05:57.094191  394837 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0819 18:05:57.094227  394837 main.go:141] libmachine: (ha-086149-m02) Calling .DriverName
	I0819 18:05:57.094447  394837 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0819 18:05:57.094484  394837 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHHostname
	I0819 18:05:57.097486  394837 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:05:57.097959  394837 main.go:141] libmachine: (ha-086149-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:15 +0000 UTC Type:0 Mac:52:54:00:b9:44:0e Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-086149-m02 Clientid:01:52:54:00:b9:44:0e}
	I0819 18:05:57.098025  394837 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:05:57.098109  394837 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHPort
	I0819 18:05:57.098278  394837 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHKeyPath
	I0819 18:05:57.098451  394837 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHUsername
	I0819 18:05:57.098592  394837 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m02/id_rsa Username:docker}
	I0819 18:05:57.186786  394837 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0819 18:05:57.241403  394837 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0819 18:05:57.296440  394837 main.go:141] libmachine: Stopping "ha-086149-m02"...
	I0819 18:05:57.296469  394837 main.go:141] libmachine: (ha-086149-m02) Calling .GetState
	I0819 18:05:57.298231  394837 main.go:141] libmachine: (ha-086149-m02) Calling .Stop
	I0819 18:05:57.302673  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 0/120
	I0819 18:05:58.304285  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 1/120
	I0819 18:05:59.306783  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 2/120
	I0819 18:06:00.308136  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 3/120
	I0819 18:06:01.310309  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 4/120
	I0819 18:06:02.312117  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 5/120
	I0819 18:06:03.314282  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 6/120
	I0819 18:06:04.315841  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 7/120
	I0819 18:06:05.317052  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 8/120
	I0819 18:06:06.318285  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 9/120
	I0819 18:06:07.320370  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 10/120
	I0819 18:06:08.321901  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 11/120
	I0819 18:06:09.323388  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 12/120
	I0819 18:06:10.324792  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 13/120
	I0819 18:06:11.326303  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 14/120
	I0819 18:06:12.327930  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 15/120
	I0819 18:06:13.329233  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 16/120
	I0819 18:06:14.330627  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 17/120
	I0819 18:06:15.331979  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 18/120
	I0819 18:06:16.334178  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 19/120
	I0819 18:06:17.336501  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 20/120
	I0819 18:06:18.338175  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 21/120
	I0819 18:06:19.339560  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 22/120
	I0819 18:06:20.341206  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 23/120
	I0819 18:06:21.342561  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 24/120
	I0819 18:06:22.344032  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 25/120
	I0819 18:06:23.346605  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 26/120
	I0819 18:06:24.347958  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 27/120
	I0819 18:06:25.350511  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 28/120
	I0819 18:06:26.351763  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 29/120
	I0819 18:06:27.353865  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 30/120
	I0819 18:06:28.355201  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 31/120
	I0819 18:06:29.356610  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 32/120
	I0819 18:06:30.358182  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 33/120
	I0819 18:06:31.359571  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 34/120
	I0819 18:06:32.361073  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 35/120
	I0819 18:06:33.362582  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 36/120
	I0819 18:06:34.364269  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 37/120
	I0819 18:06:35.366280  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 38/120
	I0819 18:06:36.367939  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 39/120
	I0819 18:06:37.370375  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 40/120
	I0819 18:06:38.372368  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 41/120
	I0819 18:06:39.374487  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 42/120
	I0819 18:06:40.376751  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 43/120
	I0819 18:06:41.378459  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 44/120
	I0819 18:06:42.380348  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 45/120
	I0819 18:06:43.382648  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 46/120
	I0819 18:06:44.384583  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 47/120
	I0819 18:06:45.386327  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 48/120
	I0819 18:06:46.387804  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 49/120
	I0819 18:06:47.390502  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 50/120
	I0819 18:06:48.391981  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 51/120
	I0819 18:06:49.394070  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 52/120
	I0819 18:06:50.395583  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 53/120
	I0819 18:06:51.397021  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 54/120
	I0819 18:06:52.399270  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 55/120
	I0819 18:06:53.400747  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 56/120
	I0819 18:06:54.402381  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 57/120
	I0819 18:06:55.404047  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 58/120
	I0819 18:06:56.406212  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 59/120
	I0819 18:06:57.408198  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 60/120
	I0819 18:06:58.409354  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 61/120
	I0819 18:06:59.410634  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 62/120
	I0819 18:07:00.412047  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 63/120
	I0819 18:07:01.414018  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 64/120
	I0819 18:07:02.415799  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 65/120
	I0819 18:07:03.417341  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 66/120
	I0819 18:07:04.418806  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 67/120
	I0819 18:07:05.420251  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 68/120
	I0819 18:07:06.422084  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 69/120
	I0819 18:07:07.424524  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 70/120
	I0819 18:07:08.426027  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 71/120
	I0819 18:07:09.427386  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 72/120
	I0819 18:07:10.429069  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 73/120
	I0819 18:07:11.430363  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 74/120
	I0819 18:07:12.432521  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 75/120
	I0819 18:07:13.434085  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 76/120
	I0819 18:07:14.435455  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 77/120
	I0819 18:07:15.436715  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 78/120
	I0819 18:07:16.438168  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 79/120
	I0819 18:07:17.440420  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 80/120
	I0819 18:07:18.442272  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 81/120
	I0819 18:07:19.444306  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 82/120
	I0819 18:07:20.445624  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 83/120
	I0819 18:07:21.447290  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 84/120
	I0819 18:07:22.449194  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 85/120
	I0819 18:07:23.450513  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 86/120
	I0819 18:07:24.452179  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 87/120
	I0819 18:07:25.454304  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 88/120
	I0819 18:07:26.455647  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 89/120
	I0819 18:07:27.457884  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 90/120
	I0819 18:07:28.459335  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 91/120
	I0819 18:07:29.460813  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 92/120
	I0819 18:07:30.461968  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 93/120
	I0819 18:07:31.463442  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 94/120
	I0819 18:07:32.464999  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 95/120
	I0819 18:07:33.466631  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 96/120
	I0819 18:07:34.468945  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 97/120
	I0819 18:07:35.470699  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 98/120
	I0819 18:07:36.472194  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 99/120
	I0819 18:07:37.474508  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 100/120
	I0819 18:07:38.475979  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 101/120
	I0819 18:07:39.478157  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 102/120
	I0819 18:07:40.479486  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 103/120
	I0819 18:07:41.480944  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 104/120
	I0819 18:07:42.482732  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 105/120
	I0819 18:07:43.484063  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 106/120
	I0819 18:07:44.486287  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 107/120
	I0819 18:07:45.488078  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 108/120
	I0819 18:07:46.490332  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 109/120
	I0819 18:07:47.492401  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 110/120
	I0819 18:07:48.494274  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 111/120
	I0819 18:07:49.495659  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 112/120
	I0819 18:07:50.497210  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 113/120
	I0819 18:07:51.498562  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 114/120
	I0819 18:07:52.500651  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 115/120
	I0819 18:07:53.502073  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 116/120
	I0819 18:07:54.503471  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 117/120
	I0819 18:07:55.504700  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 118/120
	I0819 18:07:56.506046  394837 main.go:141] libmachine: (ha-086149-m02) Waiting for machine to stop 119/120
	I0819 18:07:57.506951  394837 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0819 18:07:57.507116  394837 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-086149 node stop m02 -v=7 --alsologtostderr": exit status 30
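Note: the stop failure above shows the driver polling the VM state roughly once per second for 120 attempts ("Waiting for machine to stop N/120") before giving up with exit status 30. A minimal sketch of that polling pattern, using hypothetical stand-ins for the libmachine .GetState/.Stop calls seen in the log; this is not minikube's actual implementation:

// stopwait.go - sketch of a 120 x 1s stop-polling loop.
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForStop polls getState up to 120 times, one second apart,
// and fails if the machine never reports a stopped state.
func waitForStop(getState func() (string, error)) error {
	for i := 0; i < 120; i++ {
		st, err := getState()
		if err == nil && st == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/120\n", i)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// Simulate a VM that never stops, reproducing the error path above.
	err := waitForStop(func() (string, error) { return "Running", nil })
	fmt.Println("stop err:", err)
}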
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 status -v=7 --alsologtostderr
E0819 18:08:08.227093  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/functional-499773/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-086149 status -v=7 --alsologtostderr: exit status 3 (19.25092681s)

                                                
                                                
-- stdout --
	ha-086149
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-086149-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-086149-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-086149-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 18:07:57.552405  395268 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:07:57.552533  395268 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:07:57.552542  395268 out.go:358] Setting ErrFile to fd 2...
	I0819 18:07:57.552547  395268 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:07:57.552704  395268 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-372744/.minikube/bin
	I0819 18:07:57.552869  395268 out.go:352] Setting JSON to false
	I0819 18:07:57.552904  395268 mustload.go:65] Loading cluster: ha-086149
	I0819 18:07:57.553029  395268 notify.go:220] Checking for updates...
	I0819 18:07:57.553329  395268 config.go:182] Loaded profile config "ha-086149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:07:57.553352  395268 status.go:255] checking status of ha-086149 ...
	I0819 18:07:57.553861  395268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:07:57.553936  395268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:07:57.569558  395268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43119
	I0819 18:07:57.570044  395268 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:07:57.570674  395268 main.go:141] libmachine: Using API Version  1
	I0819 18:07:57.570707  395268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:07:57.571116  395268 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:07:57.571318  395268 main.go:141] libmachine: (ha-086149) Calling .GetState
	I0819 18:07:57.572893  395268 status.go:330] ha-086149 host status = "Running" (err=<nil>)
	I0819 18:07:57.572912  395268 host.go:66] Checking if "ha-086149" exists ...
	I0819 18:07:57.573310  395268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:07:57.573349  395268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:07:57.588804  395268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46113
	I0819 18:07:57.589260  395268 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:07:57.589772  395268 main.go:141] libmachine: Using API Version  1
	I0819 18:07:57.589797  395268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:07:57.590130  395268 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:07:57.590350  395268 main.go:141] libmachine: (ha-086149) Calling .GetIP
	I0819 18:07:57.593394  395268 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:07:57.593857  395268 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:07:57.593892  395268 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:07:57.594039  395268 host.go:66] Checking if "ha-086149" exists ...
	I0819 18:07:57.594356  395268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:07:57.594393  395268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:07:57.609860  395268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46323
	I0819 18:07:57.610259  395268 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:07:57.610757  395268 main.go:141] libmachine: Using API Version  1
	I0819 18:07:57.610779  395268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:07:57.611073  395268 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:07:57.611270  395268 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:07:57.611436  395268 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:07:57.611476  395268 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:07:57.614575  395268 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:07:57.615030  395268 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:07:57.615063  395268 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:07:57.615304  395268 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:07:57.615510  395268 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:07:57.615701  395268 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:07:57.615887  395268 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/id_rsa Username:docker}
	I0819 18:07:57.700736  395268 ssh_runner.go:195] Run: systemctl --version
	I0819 18:07:57.709085  395268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:07:57.725525  395268 kubeconfig.go:125] found "ha-086149" server: "https://192.168.39.254:8443"
	I0819 18:07:57.725562  395268 api_server.go:166] Checking apiserver status ...
	I0819 18:07:57.725610  395268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:07:57.746160  395268 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1121/cgroup
	W0819 18:07:57.758377  395268 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1121/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 18:07:57.758442  395268 ssh_runner.go:195] Run: ls
	I0819 18:07:57.762983  395268 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 18:07:57.768150  395268 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 18:07:57.768181  395268 status.go:422] ha-086149 apiserver status = Running (err=<nil>)
	I0819 18:07:57.768195  395268 status.go:257] ha-086149 status: &{Name:ha-086149 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 18:07:57.768219  395268 status.go:255] checking status of ha-086149-m02 ...
	I0819 18:07:57.768634  395268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:07:57.768673  395268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:07:57.784961  395268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41023
	I0819 18:07:57.785367  395268 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:07:57.785869  395268 main.go:141] libmachine: Using API Version  1
	I0819 18:07:57.785892  395268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:07:57.786184  395268 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:07:57.786453  395268 main.go:141] libmachine: (ha-086149-m02) Calling .GetState
	I0819 18:07:57.787918  395268 status.go:330] ha-086149-m02 host status = "Running" (err=<nil>)
	I0819 18:07:57.787935  395268 host.go:66] Checking if "ha-086149-m02" exists ...
	I0819 18:07:57.788315  395268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:07:57.788357  395268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:07:57.803354  395268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33475
	I0819 18:07:57.803785  395268 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:07:57.804204  395268 main.go:141] libmachine: Using API Version  1
	I0819 18:07:57.804230  395268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:07:57.804523  395268 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:07:57.804688  395268 main.go:141] libmachine: (ha-086149-m02) Calling .GetIP
	I0819 18:07:57.807840  395268 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:07:57.808298  395268 main.go:141] libmachine: (ha-086149-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:15 +0000 UTC Type:0 Mac:52:54:00:b9:44:0e Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-086149-m02 Clientid:01:52:54:00:b9:44:0e}
	I0819 18:07:57.808332  395268 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:07:57.808476  395268 host.go:66] Checking if "ha-086149-m02" exists ...
	I0819 18:07:57.808878  395268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:07:57.808923  395268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:07:57.824205  395268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34141
	I0819 18:07:57.824611  395268 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:07:57.825349  395268 main.go:141] libmachine: Using API Version  1
	I0819 18:07:57.825369  395268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:07:57.825719  395268 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:07:57.825898  395268 main.go:141] libmachine: (ha-086149-m02) Calling .DriverName
	I0819 18:07:57.826068  395268 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:07:57.826091  395268 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHHostname
	I0819 18:07:57.828860  395268 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:07:57.829323  395268 main.go:141] libmachine: (ha-086149-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:15 +0000 UTC Type:0 Mac:52:54:00:b9:44:0e Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-086149-m02 Clientid:01:52:54:00:b9:44:0e}
	I0819 18:07:57.829352  395268 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:07:57.829454  395268 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHPort
	I0819 18:07:57.829623  395268 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHKeyPath
	I0819 18:07:57.829812  395268 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHUsername
	I0819 18:07:57.829977  395268 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m02/id_rsa Username:docker}
	W0819 18:08:16.387901  395268 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.167:22: connect: no route to host
	W0819 18:08:16.388025  395268 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.167:22: connect: no route to host
	E0819 18:08:16.388046  395268 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.167:22: connect: no route to host
	I0819 18:08:16.388053  395268 status.go:257] ha-086149-m02 status: &{Name:ha-086149-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0819 18:08:16.388073  395268 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.167:22: connect: no route to host
	I0819 18:08:16.388081  395268 status.go:255] checking status of ha-086149-m03 ...
	I0819 18:08:16.388408  395268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:16.388460  395268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:16.404611  395268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38301
	I0819 18:08:16.405155  395268 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:16.405661  395268 main.go:141] libmachine: Using API Version  1
	I0819 18:08:16.405684  395268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:16.406047  395268 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:16.406243  395268 main.go:141] libmachine: (ha-086149-m03) Calling .GetState
	I0819 18:08:16.407989  395268 status.go:330] ha-086149-m03 host status = "Running" (err=<nil>)
	I0819 18:08:16.408012  395268 host.go:66] Checking if "ha-086149-m03" exists ...
	I0819 18:08:16.408328  395268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:16.408365  395268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:16.423781  395268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43833
	I0819 18:08:16.424194  395268 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:16.424717  395268 main.go:141] libmachine: Using API Version  1
	I0819 18:08:16.424741  395268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:16.425043  395268 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:16.425329  395268 main.go:141] libmachine: (ha-086149-m03) Calling .GetIP
	I0819 18:08:16.428234  395268 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:08:16.428652  395268 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:08:16.428681  395268 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:08:16.428844  395268 host.go:66] Checking if "ha-086149-m03" exists ...
	I0819 18:08:16.429205  395268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:16.429250  395268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:16.444871  395268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45929
	I0819 18:08:16.445351  395268 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:16.445808  395268 main.go:141] libmachine: Using API Version  1
	I0819 18:08:16.445833  395268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:16.446152  395268 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:16.446326  395268 main.go:141] libmachine: (ha-086149-m03) Calling .DriverName
	I0819 18:08:16.446543  395268 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:08:16.446563  395268 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHHostname
	I0819 18:08:16.449499  395268 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:08:16.449948  395268 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:08:16.449970  395268 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:08:16.450130  395268 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHPort
	I0819 18:08:16.450319  395268 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHKeyPath
	I0819 18:08:16.450516  395268 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHUsername
	I0819 18:08:16.450675  395268 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m03/id_rsa Username:docker}
	I0819 18:08:16.532898  395268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:08:16.553331  395268 kubeconfig.go:125] found "ha-086149" server: "https://192.168.39.254:8443"
	I0819 18:08:16.553368  395268 api_server.go:166] Checking apiserver status ...
	I0819 18:08:16.553414  395268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:08:16.568361  395268 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1440/cgroup
	W0819 18:08:16.578216  395268 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1440/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 18:08:16.578278  395268 ssh_runner.go:195] Run: ls
	I0819 18:08:16.584047  395268 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 18:08:16.591590  395268 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 18:08:16.591620  395268 status.go:422] ha-086149-m03 apiserver status = Running (err=<nil>)
	I0819 18:08:16.591632  395268 status.go:257] ha-086149-m03 status: &{Name:ha-086149-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 18:08:16.591655  395268 status.go:255] checking status of ha-086149-m04 ...
	I0819 18:08:16.592044  395268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:16.592097  395268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:16.608036  395268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40535
	I0819 18:08:16.608496  395268 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:16.608987  395268 main.go:141] libmachine: Using API Version  1
	I0819 18:08:16.609009  395268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:16.609336  395268 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:16.609559  395268 main.go:141] libmachine: (ha-086149-m04) Calling .GetState
	I0819 18:08:16.611188  395268 status.go:330] ha-086149-m04 host status = "Running" (err=<nil>)
	I0819 18:08:16.611204  395268 host.go:66] Checking if "ha-086149-m04" exists ...
	I0819 18:08:16.611496  395268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:16.611531  395268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:16.626775  395268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33509
	I0819 18:08:16.627197  395268 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:16.627737  395268 main.go:141] libmachine: Using API Version  1
	I0819 18:08:16.627761  395268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:16.628112  395268 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:16.628303  395268 main.go:141] libmachine: (ha-086149-m04) Calling .GetIP
	I0819 18:08:16.631078  395268 main.go:141] libmachine: (ha-086149-m04) DBG | domain ha-086149-m04 has defined MAC address 52:54:00:03:a4:7a in network mk-ha-086149
	I0819 18:08:16.631525  395268 main.go:141] libmachine: (ha-086149-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a4:7a", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:05:01 +0000 UTC Type:0 Mac:52:54:00:03:a4:7a Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-086149-m04 Clientid:01:52:54:00:03:a4:7a}
	I0819 18:08:16.631551  395268 main.go:141] libmachine: (ha-086149-m04) DBG | domain ha-086149-m04 has defined IP address 192.168.39.173 and MAC address 52:54:00:03:a4:7a in network mk-ha-086149
	I0819 18:08:16.631744  395268 host.go:66] Checking if "ha-086149-m04" exists ...
	I0819 18:08:16.632129  395268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:16.632184  395268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:16.648164  395268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41849
	I0819 18:08:16.648563  395268 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:16.649030  395268 main.go:141] libmachine: Using API Version  1
	I0819 18:08:16.649052  395268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:16.649382  395268 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:16.649568  395268 main.go:141] libmachine: (ha-086149-m04) Calling .DriverName
	I0819 18:08:16.649775  395268 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:08:16.649799  395268 main.go:141] libmachine: (ha-086149-m04) Calling .GetSSHHostname
	I0819 18:08:16.652492  395268 main.go:141] libmachine: (ha-086149-m04) DBG | domain ha-086149-m04 has defined MAC address 52:54:00:03:a4:7a in network mk-ha-086149
	I0819 18:08:16.652897  395268 main.go:141] libmachine: (ha-086149-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a4:7a", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:05:01 +0000 UTC Type:0 Mac:52:54:00:03:a4:7a Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-086149-m04 Clientid:01:52:54:00:03:a4:7a}
	I0819 18:08:16.652932  395268 main.go:141] libmachine: (ha-086149-m04) DBG | domain ha-086149-m04 has defined IP address 192.168.39.173 and MAC address 52:54:00:03:a4:7a in network mk-ha-086149
	I0819 18:08:16.653084  395268 main.go:141] libmachine: (ha-086149-m04) Calling .GetSSHPort
	I0819 18:08:16.653279  395268 main.go:141] libmachine: (ha-086149-m04) Calling .GetSSHKeyPath
	I0819 18:08:16.653410  395268 main.go:141] libmachine: (ha-086149-m04) Calling .GetSSHUsername
	I0819 18:08:16.653543  395268 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m04/id_rsa Username:docker}
	I0819 18:08:16.741194  395268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:08:16.757428  395268 status.go:257] ha-086149-m04 status: &{Name:ha-086149-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-086149 status -v=7 --alsologtostderr" : exit status 3
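The status command exits non-zero because the m02 host check fails with "no route to host". A hedged sketch of narrowing this down by re-running status for the single node in machine-readable form (assuming the --node and --output flags behave as in current minikube releases):

	# machine-readable status for the whole profile
	out/minikube-linux-amd64 -p ha-086149 status --output json
	# limit the check to the unreachable node
	out/minikube-linux-amd64 -p ha-086149 status --node ha-086149-m02 --output json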
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-086149 -n ha-086149
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-086149 logs -n 25: (1.459437069s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-086149 cp ha-086149-m03:/home/docker/cp-test.txt                              | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3465103634/001/cp-test_ha-086149-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-086149 ssh -n                                                                 | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-086149 cp ha-086149-m03:/home/docker/cp-test.txt                              | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149:/home/docker/cp-test_ha-086149-m03_ha-086149.txt                       |           |         |         |                     |                     |
	| ssh     | ha-086149 ssh -n                                                                 | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-086149 ssh -n ha-086149 sudo cat                                              | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | /home/docker/cp-test_ha-086149-m03_ha-086149.txt                                 |           |         |         |                     |                     |
	| cp      | ha-086149 cp ha-086149-m03:/home/docker/cp-test.txt                              | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149-m02:/home/docker/cp-test_ha-086149-m03_ha-086149-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-086149 ssh -n                                                                 | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-086149 ssh -n ha-086149-m02 sudo cat                                          | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | /home/docker/cp-test_ha-086149-m03_ha-086149-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-086149 cp ha-086149-m03:/home/docker/cp-test.txt                              | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149-m04:/home/docker/cp-test_ha-086149-m03_ha-086149-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-086149 ssh -n                                                                 | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-086149 ssh -n ha-086149-m04 sudo cat                                          | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | /home/docker/cp-test_ha-086149-m03_ha-086149-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-086149 cp testdata/cp-test.txt                                                | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-086149 ssh -n                                                                 | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-086149 cp ha-086149-m04:/home/docker/cp-test.txt                              | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3465103634/001/cp-test_ha-086149-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-086149 ssh -n                                                                 | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-086149 cp ha-086149-m04:/home/docker/cp-test.txt                              | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149:/home/docker/cp-test_ha-086149-m04_ha-086149.txt                       |           |         |         |                     |                     |
	| ssh     | ha-086149 ssh -n                                                                 | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-086149 ssh -n ha-086149 sudo cat                                              | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | /home/docker/cp-test_ha-086149-m04_ha-086149.txt                                 |           |         |         |                     |                     |
	| cp      | ha-086149 cp ha-086149-m04:/home/docker/cp-test.txt                              | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149-m02:/home/docker/cp-test_ha-086149-m04_ha-086149-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-086149 ssh -n                                                                 | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-086149 ssh -n ha-086149-m02 sudo cat                                          | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | /home/docker/cp-test_ha-086149-m04_ha-086149-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-086149 cp ha-086149-m04:/home/docker/cp-test.txt                              | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149-m03:/home/docker/cp-test_ha-086149-m04_ha-086149-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-086149 ssh -n                                                                 | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-086149 ssh -n ha-086149-m03 sudo cat                                          | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | /home/docker/cp-test_ha-086149-m04_ha-086149-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-086149 node stop m02 -v=7                                                     | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 18:01:14
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 18:01:14.240865  390826 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:01:14.241152  390826 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:01:14.241163  390826 out.go:358] Setting ErrFile to fd 2...
	I0819 18:01:14.241167  390826 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:01:14.241405  390826 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-372744/.minikube/bin
	I0819 18:01:14.242090  390826 out.go:352] Setting JSON to false
	I0819 18:01:14.243024  390826 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":6217,"bootTime":1724084257,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 18:01:14.243086  390826 start.go:139] virtualization: kvm guest
	I0819 18:01:14.246082  390826 out.go:177] * [ha-086149] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 18:01:14.247574  390826 notify.go:220] Checking for updates...
	I0819 18:01:14.247589  390826 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 18:01:14.249064  390826 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 18:01:14.250572  390826 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 18:01:14.252143  390826 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 18:01:14.253509  390826 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 18:01:14.255056  390826 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 18:01:14.256458  390826 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 18:01:14.290623  390826 out.go:177] * Using the kvm2 driver based on user configuration
	I0819 18:01:14.291905  390826 start.go:297] selected driver: kvm2
	I0819 18:01:14.291928  390826 start.go:901] validating driver "kvm2" against <nil>
	I0819 18:01:14.291942  390826 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 18:01:14.292641  390826 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:01:14.292766  390826 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19468-372744/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 18:01:14.307537  390826 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 18:01:14.307598  390826 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 18:01:14.307841  390826 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 18:01:14.307881  390826 cni.go:84] Creating CNI manager for ""
	I0819 18:01:14.307901  390826 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0819 18:01:14.307911  390826 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 18:01:14.307977  390826 start.go:340] cluster config:
	{Name:ha-086149 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-086149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:01:14.308105  390826 iso.go:125] acquiring lock: {Name:mk4c0ac1c3202b1a296739df622960e7a0bd8566 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:01:14.309823  390826 out.go:177] * Starting "ha-086149" primary control-plane node in "ha-086149" cluster
	I0819 18:01:14.311065  390826 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 18:01:14.311098  390826 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 18:01:14.311107  390826 cache.go:56] Caching tarball of preloaded images
	I0819 18:01:14.311185  390826 preload.go:172] Found /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 18:01:14.311199  390826 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 18:01:14.311518  390826 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/config.json ...
	I0819 18:01:14.311542  390826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/config.json: {Name:mkc1be96187f5b28ff94ccb29ea872196c5d05af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:01:14.311728  390826 start.go:360] acquireMachinesLock for ha-086149: {Name:mk24ba67a747357e9ce40f1e460d2bb0bc59cc75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 18:01:14.311769  390826 start.go:364] duration metric: took 23.965µs to acquireMachinesLock for "ha-086149"
	I0819 18:01:14.311794  390826 start.go:93] Provisioning new machine with config: &{Name:ha-086149 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-086149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 18:01:14.311863  390826 start.go:125] createHost starting for "" (driver="kvm2")
	I0819 18:01:14.313644  390826 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 18:01:14.313782  390826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:01:14.313827  390826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:01:14.327944  390826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35857
	I0819 18:01:14.328381  390826 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:01:14.328914  390826 main.go:141] libmachine: Using API Version  1
	I0819 18:01:14.328936  390826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:01:14.329300  390826 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:01:14.329486  390826 main.go:141] libmachine: (ha-086149) Calling .GetMachineName
	I0819 18:01:14.329632  390826 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:01:14.329800  390826 start.go:159] libmachine.API.Create for "ha-086149" (driver="kvm2")
	I0819 18:01:14.329827  390826 client.go:168] LocalClient.Create starting
	I0819 18:01:14.329868  390826 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem
	I0819 18:01:14.329911  390826 main.go:141] libmachine: Decoding PEM data...
	I0819 18:01:14.329933  390826 main.go:141] libmachine: Parsing certificate...
	I0819 18:01:14.330035  390826 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem
	I0819 18:01:14.330064  390826 main.go:141] libmachine: Decoding PEM data...
	I0819 18:01:14.330084  390826 main.go:141] libmachine: Parsing certificate...
	I0819 18:01:14.330107  390826 main.go:141] libmachine: Running pre-create checks...
	I0819 18:01:14.330123  390826 main.go:141] libmachine: (ha-086149) Calling .PreCreateCheck
	I0819 18:01:14.330444  390826 main.go:141] libmachine: (ha-086149) Calling .GetConfigRaw
	I0819 18:01:14.330802  390826 main.go:141] libmachine: Creating machine...
	I0819 18:01:14.330816  390826 main.go:141] libmachine: (ha-086149) Calling .Create
	I0819 18:01:14.330922  390826 main.go:141] libmachine: (ha-086149) Creating KVM machine...
	I0819 18:01:14.332004  390826 main.go:141] libmachine: (ha-086149) DBG | found existing default KVM network
	I0819 18:01:14.332705  390826 main.go:141] libmachine: (ha-086149) DBG | I0819 18:01:14.332572  390849 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00011d1f0}
	I0819 18:01:14.332725  390826 main.go:141] libmachine: (ha-086149) DBG | created network xml: 
	I0819 18:01:14.332736  390826 main.go:141] libmachine: (ha-086149) DBG | <network>
	I0819 18:01:14.332743  390826 main.go:141] libmachine: (ha-086149) DBG |   <name>mk-ha-086149</name>
	I0819 18:01:14.332749  390826 main.go:141] libmachine: (ha-086149) DBG |   <dns enable='no'/>
	I0819 18:01:14.332759  390826 main.go:141] libmachine: (ha-086149) DBG |   
	I0819 18:01:14.332767  390826 main.go:141] libmachine: (ha-086149) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0819 18:01:14.332781  390826 main.go:141] libmachine: (ha-086149) DBG |     <dhcp>
	I0819 18:01:14.332796  390826 main.go:141] libmachine: (ha-086149) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0819 18:01:14.332809  390826 main.go:141] libmachine: (ha-086149) DBG |     </dhcp>
	I0819 18:01:14.332818  390826 main.go:141] libmachine: (ha-086149) DBG |   </ip>
	I0819 18:01:14.332824  390826 main.go:141] libmachine: (ha-086149) DBG |   
	I0819 18:01:14.332830  390826 main.go:141] libmachine: (ha-086149) DBG | </network>
	I0819 18:01:14.332839  390826 main.go:141] libmachine: (ha-086149) DBG | 
	I0819 18:01:14.338000  390826 main.go:141] libmachine: (ha-086149) DBG | trying to create private KVM network mk-ha-086149 192.168.39.0/24...
	I0819 18:01:14.402561  390826 main.go:141] libmachine: (ha-086149) DBG | private KVM network mk-ha-086149 192.168.39.0/24 created
	I0819 18:01:14.402609  390826 main.go:141] libmachine: (ha-086149) DBG | I0819 18:01:14.402535  390849 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 18:01:14.402621  390826 main.go:141] libmachine: (ha-086149) Setting up store path in /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149 ...
	I0819 18:01:14.402647  390826 main.go:141] libmachine: (ha-086149) Building disk image from file:///home/jenkins/minikube-integration/19468-372744/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 18:01:14.402674  390826 main.go:141] libmachine: (ha-086149) Downloading /home/jenkins/minikube-integration/19468-372744/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19468-372744/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 18:01:14.678792  390826 main.go:141] libmachine: (ha-086149) DBG | I0819 18:01:14.678650  390849 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/id_rsa...
	I0819 18:01:14.736590  390826 main.go:141] libmachine: (ha-086149) DBG | I0819 18:01:14.736432  390849 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/ha-086149.rawdisk...
	I0819 18:01:14.736625  390826 main.go:141] libmachine: (ha-086149) DBG | Writing magic tar header
	I0819 18:01:14.736689  390826 main.go:141] libmachine: (ha-086149) DBG | Writing SSH key tar header
	I0819 18:01:14.736745  390826 main.go:141] libmachine: (ha-086149) DBG | I0819 18:01:14.736551  390849 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149 ...
	I0819 18:01:14.736763  390826 main.go:141] libmachine: (ha-086149) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149 (perms=drwx------)
	I0819 18:01:14.736775  390826 main.go:141] libmachine: (ha-086149) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744/.minikube/machines (perms=drwxr-xr-x)
	I0819 18:01:14.736783  390826 main.go:141] libmachine: (ha-086149) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744/.minikube (perms=drwxr-xr-x)
	I0819 18:01:14.736798  390826 main.go:141] libmachine: (ha-086149) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744 (perms=drwxrwxr-x)
	I0819 18:01:14.736819  390826 main.go:141] libmachine: (ha-086149) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149
	I0819 18:01:14.736829  390826 main.go:141] libmachine: (ha-086149) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 18:01:14.736838  390826 main.go:141] libmachine: (ha-086149) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 18:01:14.736847  390826 main.go:141] libmachine: (ha-086149) Creating domain...
	I0819 18:01:14.736858  390826 main.go:141] libmachine: (ha-086149) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744/.minikube/machines
	I0819 18:01:14.736867  390826 main.go:141] libmachine: (ha-086149) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 18:01:14.736874  390826 main.go:141] libmachine: (ha-086149) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744
	I0819 18:01:14.736881  390826 main.go:141] libmachine: (ha-086149) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 18:01:14.736887  390826 main.go:141] libmachine: (ha-086149) DBG | Checking permissions on dir: /home/jenkins
	I0819 18:01:14.736896  390826 main.go:141] libmachine: (ha-086149) DBG | Checking permissions on dir: /home
	I0819 18:01:14.736973  390826 main.go:141] libmachine: (ha-086149) DBG | Skipping /home - not owner
	I0819 18:01:14.737957  390826 main.go:141] libmachine: (ha-086149) define libvirt domain using xml: 
	I0819 18:01:14.737981  390826 main.go:141] libmachine: (ha-086149) <domain type='kvm'>
	I0819 18:01:14.737990  390826 main.go:141] libmachine: (ha-086149)   <name>ha-086149</name>
	I0819 18:01:14.738001  390826 main.go:141] libmachine: (ha-086149)   <memory unit='MiB'>2200</memory>
	I0819 18:01:14.738013  390826 main.go:141] libmachine: (ha-086149)   <vcpu>2</vcpu>
	I0819 18:01:14.738018  390826 main.go:141] libmachine: (ha-086149)   <features>
	I0819 18:01:14.738023  390826 main.go:141] libmachine: (ha-086149)     <acpi/>
	I0819 18:01:14.738027  390826 main.go:141] libmachine: (ha-086149)     <apic/>
	I0819 18:01:14.738032  390826 main.go:141] libmachine: (ha-086149)     <pae/>
	I0819 18:01:14.738037  390826 main.go:141] libmachine: (ha-086149)     
	I0819 18:01:14.738046  390826 main.go:141] libmachine: (ha-086149)   </features>
	I0819 18:01:14.738051  390826 main.go:141] libmachine: (ha-086149)   <cpu mode='host-passthrough'>
	I0819 18:01:14.738058  390826 main.go:141] libmachine: (ha-086149)   
	I0819 18:01:14.738068  390826 main.go:141] libmachine: (ha-086149)   </cpu>
	I0819 18:01:14.738087  390826 main.go:141] libmachine: (ha-086149)   <os>
	I0819 18:01:14.738103  390826 main.go:141] libmachine: (ha-086149)     <type>hvm</type>
	I0819 18:01:14.738109  390826 main.go:141] libmachine: (ha-086149)     <boot dev='cdrom'/>
	I0819 18:01:14.738113  390826 main.go:141] libmachine: (ha-086149)     <boot dev='hd'/>
	I0819 18:01:14.738119  390826 main.go:141] libmachine: (ha-086149)     <bootmenu enable='no'/>
	I0819 18:01:14.738126  390826 main.go:141] libmachine: (ha-086149)   </os>
	I0819 18:01:14.738130  390826 main.go:141] libmachine: (ha-086149)   <devices>
	I0819 18:01:14.738136  390826 main.go:141] libmachine: (ha-086149)     <disk type='file' device='cdrom'>
	I0819 18:01:14.738146  390826 main.go:141] libmachine: (ha-086149)       <source file='/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/boot2docker.iso'/>
	I0819 18:01:14.738151  390826 main.go:141] libmachine: (ha-086149)       <target dev='hdc' bus='scsi'/>
	I0819 18:01:14.738159  390826 main.go:141] libmachine: (ha-086149)       <readonly/>
	I0819 18:01:14.738163  390826 main.go:141] libmachine: (ha-086149)     </disk>
	I0819 18:01:14.738170  390826 main.go:141] libmachine: (ha-086149)     <disk type='file' device='disk'>
	I0819 18:01:14.738176  390826 main.go:141] libmachine: (ha-086149)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 18:01:14.738186  390826 main.go:141] libmachine: (ha-086149)       <source file='/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/ha-086149.rawdisk'/>
	I0819 18:01:14.738191  390826 main.go:141] libmachine: (ha-086149)       <target dev='hda' bus='virtio'/>
	I0819 18:01:14.738197  390826 main.go:141] libmachine: (ha-086149)     </disk>
	I0819 18:01:14.738202  390826 main.go:141] libmachine: (ha-086149)     <interface type='network'>
	I0819 18:01:14.738231  390826 main.go:141] libmachine: (ha-086149)       <source network='mk-ha-086149'/>
	I0819 18:01:14.738247  390826 main.go:141] libmachine: (ha-086149)       <model type='virtio'/>
	I0819 18:01:14.738268  390826 main.go:141] libmachine: (ha-086149)     </interface>
	I0819 18:01:14.738283  390826 main.go:141] libmachine: (ha-086149)     <interface type='network'>
	I0819 18:01:14.738296  390826 main.go:141] libmachine: (ha-086149)       <source network='default'/>
	I0819 18:01:14.738310  390826 main.go:141] libmachine: (ha-086149)       <model type='virtio'/>
	I0819 18:01:14.738332  390826 main.go:141] libmachine: (ha-086149)     </interface>
	I0819 18:01:14.738346  390826 main.go:141] libmachine: (ha-086149)     <serial type='pty'>
	I0819 18:01:14.738358  390826 main.go:141] libmachine: (ha-086149)       <target port='0'/>
	I0819 18:01:14.738368  390826 main.go:141] libmachine: (ha-086149)     </serial>
	I0819 18:01:14.738396  390826 main.go:141] libmachine: (ha-086149)     <console type='pty'>
	I0819 18:01:14.738405  390826 main.go:141] libmachine: (ha-086149)       <target type='serial' port='0'/>
	I0819 18:01:14.738410  390826 main.go:141] libmachine: (ha-086149)     </console>
	I0819 18:01:14.738417  390826 main.go:141] libmachine: (ha-086149)     <rng model='virtio'>
	I0819 18:01:14.738423  390826 main.go:141] libmachine: (ha-086149)       <backend model='random'>/dev/random</backend>
	I0819 18:01:14.738430  390826 main.go:141] libmachine: (ha-086149)     </rng>
	I0819 18:01:14.738435  390826 main.go:141] libmachine: (ha-086149)     
	I0819 18:01:14.738446  390826 main.go:141] libmachine: (ha-086149)     
	I0819 18:01:14.738453  390826 main.go:141] libmachine: (ha-086149)   </devices>
	I0819 18:01:14.738469  390826 main.go:141] libmachine: (ha-086149) </domain>
	I0819 18:01:14.738479  390826 main.go:141] libmachine: (ha-086149) 
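The block above is the complete libvirt domain XML that libmachine generated for ha-086149: host-passthrough CPU, the boot2docker ISO attached as a CD-ROM, the raw disk image, and two virtio NICs (one on the private mk-ha-086149 network, one on the libvirt default network). As a minimal, hedged sketch of how such a definition can be turned into a running VM with the libvirt Go bindings (this is not minikube's actual driver code; the XML file path is hypothetical):

package main

import (
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt" // assumed module path; older code imports github.com/libvirt/libvirt-go
)

func main() {
	// Same URI the log shows later (KVMQemuURI:qemu:///system).
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	// A domain definition like the XML dumped above (file path is hypothetical).
	xml, err := os.ReadFile("ha-086149.xml")
	if err != nil {
		log.Fatalf("read xml: %v", err)
	}

	// Define the persistent domain, then boot it ("Creating domain..." in the log).
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		log.Fatalf("define: %v", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatalf("start: %v", err)
	}
	log.Println("domain started; next step is waiting for a DHCP lease")
}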
	I0819 18:01:14.743216  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:03:c5:5f in network default
	I0819 18:01:14.743804  390826 main.go:141] libmachine: (ha-086149) Ensuring networks are active...
	I0819 18:01:14.743825  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:14.744421  390826 main.go:141] libmachine: (ha-086149) Ensuring network default is active
	I0819 18:01:14.744762  390826 main.go:141] libmachine: (ha-086149) Ensuring network mk-ha-086149 is active
	I0819 18:01:14.745298  390826 main.go:141] libmachine: (ha-086149) Getting domain xml...
	I0819 18:01:14.745905  390826 main.go:141] libmachine: (ha-086149) Creating domain...
	I0819 18:01:15.953141  390826 main.go:141] libmachine: (ha-086149) Waiting to get IP...
	I0819 18:01:15.953890  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:15.954251  390826 main.go:141] libmachine: (ha-086149) DBG | unable to find current IP address of domain ha-086149 in network mk-ha-086149
	I0819 18:01:15.954271  390826 main.go:141] libmachine: (ha-086149) DBG | I0819 18:01:15.954227  390849 retry.go:31] will retry after 231.676833ms: waiting for machine to come up
	I0819 18:01:16.187742  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:16.188211  390826 main.go:141] libmachine: (ha-086149) DBG | unable to find current IP address of domain ha-086149 in network mk-ha-086149
	I0819 18:01:16.188245  390826 main.go:141] libmachine: (ha-086149) DBG | I0819 18:01:16.188162  390849 retry.go:31] will retry after 292.527195ms: waiting for machine to come up
	I0819 18:01:16.482731  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:16.483176  390826 main.go:141] libmachine: (ha-086149) DBG | unable to find current IP address of domain ha-086149 in network mk-ha-086149
	I0819 18:01:16.483203  390826 main.go:141] libmachine: (ha-086149) DBG | I0819 18:01:16.483122  390849 retry.go:31] will retry after 330.893319ms: waiting for machine to come up
	I0819 18:01:16.815745  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:16.816126  390826 main.go:141] libmachine: (ha-086149) DBG | unable to find current IP address of domain ha-086149 in network mk-ha-086149
	I0819 18:01:16.816156  390826 main.go:141] libmachine: (ha-086149) DBG | I0819 18:01:16.816076  390849 retry.go:31] will retry after 444.378344ms: waiting for machine to come up
	I0819 18:01:17.261713  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:17.262004  390826 main.go:141] libmachine: (ha-086149) DBG | unable to find current IP address of domain ha-086149 in network mk-ha-086149
	I0819 18:01:17.262034  390826 main.go:141] libmachine: (ha-086149) DBG | I0819 18:01:17.261932  390849 retry.go:31] will retry after 566.799409ms: waiting for machine to come up
	I0819 18:01:17.830885  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:17.831318  390826 main.go:141] libmachine: (ha-086149) DBG | unable to find current IP address of domain ha-086149 in network mk-ha-086149
	I0819 18:01:17.831344  390826 main.go:141] libmachine: (ha-086149) DBG | I0819 18:01:17.831270  390849 retry.go:31] will retry after 748.576215ms: waiting for machine to come up
	I0819 18:01:18.581145  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:18.581611  390826 main.go:141] libmachine: (ha-086149) DBG | unable to find current IP address of domain ha-086149 in network mk-ha-086149
	I0819 18:01:18.581660  390826 main.go:141] libmachine: (ha-086149) DBG | I0819 18:01:18.581558  390849 retry.go:31] will retry after 1.124966525s: waiting for machine to come up
	I0819 18:01:19.708677  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:19.709123  390826 main.go:141] libmachine: (ha-086149) DBG | unable to find current IP address of domain ha-086149 in network mk-ha-086149
	I0819 18:01:19.709155  390826 main.go:141] libmachine: (ha-086149) DBG | I0819 18:01:19.709077  390849 retry.go:31] will retry after 1.107728894s: waiting for machine to come up
	I0819 18:01:20.818466  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:20.818893  390826 main.go:141] libmachine: (ha-086149) DBG | unable to find current IP address of domain ha-086149 in network mk-ha-086149
	I0819 18:01:20.818959  390826 main.go:141] libmachine: (ha-086149) DBG | I0819 18:01:20.818841  390849 retry.go:31] will retry after 1.665812969s: waiting for machine to come up
	I0819 18:01:22.486711  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:22.487198  390826 main.go:141] libmachine: (ha-086149) DBG | unable to find current IP address of domain ha-086149 in network mk-ha-086149
	I0819 18:01:22.487233  390826 main.go:141] libmachine: (ha-086149) DBG | I0819 18:01:22.487151  390849 retry.go:31] will retry after 1.582489658s: waiting for machine to come up
	I0819 18:01:24.072236  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:24.072800  390826 main.go:141] libmachine: (ha-086149) DBG | unable to find current IP address of domain ha-086149 in network mk-ha-086149
	I0819 18:01:24.072833  390826 main.go:141] libmachine: (ha-086149) DBG | I0819 18:01:24.072721  390849 retry.go:31] will retry after 2.220917653s: waiting for machine to come up
	I0819 18:01:26.294955  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:26.295430  390826 main.go:141] libmachine: (ha-086149) DBG | unable to find current IP address of domain ha-086149 in network mk-ha-086149
	I0819 18:01:26.295453  390826 main.go:141] libmachine: (ha-086149) DBG | I0819 18:01:26.295399  390849 retry.go:31] will retry after 3.560062988s: waiting for machine to come up
	I0819 18:01:29.856788  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:29.857284  390826 main.go:141] libmachine: (ha-086149) DBG | unable to find current IP address of domain ha-086149 in network mk-ha-086149
	I0819 18:01:29.857309  390826 main.go:141] libmachine: (ha-086149) DBG | I0819 18:01:29.857243  390849 retry.go:31] will retry after 3.132423259s: waiting for machine to come up
	I0819 18:01:32.993589  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:32.993968  390826 main.go:141] libmachine: (ha-086149) DBG | unable to find current IP address of domain ha-086149 in network mk-ha-086149
	I0819 18:01:32.993998  390826 main.go:141] libmachine: (ha-086149) DBG | I0819 18:01:32.993903  390849 retry.go:31] will retry after 4.312546597s: waiting for machine to come up
	I0819 18:01:37.310234  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:37.310613  390826 main.go:141] libmachine: (ha-086149) Found IP for machine: 192.168.39.249
	I0819 18:01:37.310637  390826 main.go:141] libmachine: (ha-086149) Reserving static IP address...
	I0819 18:01:37.310650  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has current primary IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:37.311102  390826 main.go:141] libmachine: (ha-086149) DBG | unable to find host DHCP lease matching {name: "ha-086149", mac: "52:54:00:3b:ab:95", ip: "192.168.39.249"} in network mk-ha-086149
	I0819 18:01:37.382735  390826 main.go:141] libmachine: (ha-086149) DBG | Getting to WaitForSSH function...
	I0819 18:01:37.382762  390826 main.go:141] libmachine: (ha-086149) Reserved static IP address: 192.168.39.249
	I0819 18:01:37.382775  390826 main.go:141] libmachine: (ha-086149) Waiting for SSH to be available...
	I0819 18:01:37.385538  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:37.385901  390826 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3b:ab:95}
	I0819 18:01:37.385933  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:37.386056  390826 main.go:141] libmachine: (ha-086149) DBG | Using SSH client type: external
	I0819 18:01:37.386085  390826 main.go:141] libmachine: (ha-086149) DBG | Using SSH private key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/id_rsa (-rw-------)
	I0819 18:01:37.386117  390826 main.go:141] libmachine: (ha-086149) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.249 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 18:01:37.386150  390826 main.go:141] libmachine: (ha-086149) DBG | About to run SSH command:
	I0819 18:01:37.386177  390826 main.go:141] libmachine: (ha-086149) DBG | exit 0
	I0819 18:01:37.508186  390826 main.go:141] libmachine: (ha-086149) DBG | SSH cmd err, output: <nil>: 
	I0819 18:01:37.508445  390826 main.go:141] libmachine: (ha-086149) KVM machine creation complete!
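The "Waiting to get IP" sequence above is a simple poll loop with a growing backoff: each failed DHCP-lease lookup is followed by a longer sleep (231ms, 292ms, ... up to a few seconds) until the lease appears, after which the static IP is reserved and SSH is probed with an "exit 0" command. A rough sketch of that polling pattern, assuming a hypothetical lookupIP helper in place of the real libvirt lease query (this is not minikube's retry.go):

package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupIP stands in for "query libvirt for the DHCP lease of this MAC";
// it is a placeholder so the sketch is self-contained.
func lookupIP(mac string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP polls with a growing delay, mirroring the retry lines above.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", backoff)
		time.Sleep(backoff)
		if backoff < 5*time.Second {
			backoff = backoff * 3 / 2 // roughly the growth seen in the log
		}
	}
	return "", fmt.Errorf("no IP for %s within %v", mac, timeout)
}

func main() {
	if ip, err := waitForIP("52:54:00:3b:ab:95", 5*time.Second); err == nil {
		fmt.Println("found IP:", ip)
	} else {
		fmt.Println(err)
	}
}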
	I0819 18:01:37.508869  390826 main.go:141] libmachine: (ha-086149) Calling .GetConfigRaw
	I0819 18:01:37.509429  390826 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:01:37.509628  390826 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:01:37.509764  390826 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 18:01:37.509780  390826 main.go:141] libmachine: (ha-086149) Calling .GetState
	I0819 18:01:37.511032  390826 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 18:01:37.511048  390826 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 18:01:37.511056  390826 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 18:01:37.511063  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:01:37.513123  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:37.513488  390826 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:01:37.513515  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:37.513669  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:01:37.513880  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:01:37.514076  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:01:37.514212  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:01:37.514390  390826 main.go:141] libmachine: Using SSH client type: native
	I0819 18:01:37.514597  390826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0819 18:01:37.514608  390826 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 18:01:37.615268  390826 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 18:01:37.615299  390826 main.go:141] libmachine: Detecting the provisioner...
	I0819 18:01:37.615309  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:01:37.617932  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:37.618267  390826 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:01:37.618295  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:37.618456  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:01:37.618688  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:01:37.618855  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:01:37.619026  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:01:37.619166  390826 main.go:141] libmachine: Using SSH client type: native
	I0819 18:01:37.619344  390826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0819 18:01:37.619355  390826 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 18:01:37.724338  390826 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 18:01:37.724449  390826 main.go:141] libmachine: found compatible host: buildroot
	I0819 18:01:37.724459  390826 main.go:141] libmachine: Provisioning with buildroot...
	I0819 18:01:37.724470  390826 main.go:141] libmachine: (ha-086149) Calling .GetMachineName
	I0819 18:01:37.724739  390826 buildroot.go:166] provisioning hostname "ha-086149"
	I0819 18:01:37.724769  390826 main.go:141] libmachine: (ha-086149) Calling .GetMachineName
	I0819 18:01:37.724966  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:01:37.727668  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:37.728005  390826 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:01:37.728048  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:37.728267  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:01:37.728456  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:01:37.728626  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:01:37.728792  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:01:37.728936  390826 main.go:141] libmachine: Using SSH client type: native
	I0819 18:01:37.729115  390826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0819 18:01:37.729129  390826 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-086149 && echo "ha-086149" | sudo tee /etc/hostname
	I0819 18:01:37.842792  390826 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-086149
	
	I0819 18:01:37.842819  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:01:37.845794  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:37.846081  390826 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:01:37.846104  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:37.846317  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:01:37.846579  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:01:37.846767  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:01:37.846897  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:01:37.847116  390826 main.go:141] libmachine: Using SSH client type: native
	I0819 18:01:37.847282  390826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0819 18:01:37.847298  390826 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-086149' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-086149/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-086149' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 18:01:37.957710  390826 main.go:141] libmachine: SSH cmd err, output: <nil>: 
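The provisioning steps above ("About to run SSH command: ...") all follow the same shape: open an SSH session to the guest as the docker user with the machine's private key, run a sudo command, and read the combined output. A hedged sketch of that flow with golang.org/x/crypto/ssh (key path and error handling are illustrative, not libmachine's exact code):

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("id_rsa") // private key path is illustrative
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no in the log
	}
	client, err := ssh.Dial("tcp", "192.168.39.249:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// Same shape as the hostname provisioning command in the log above.
	out, err := session.CombinedOutput(`sudo hostname ha-086149 && echo "ha-086149" | sudo tee /etc/hostname`)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("SSH cmd output: %s\n", out)
}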
	I0819 18:01:37.957771  390826 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19468-372744/.minikube CaCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19468-372744/.minikube}
	I0819 18:01:37.957810  390826 buildroot.go:174] setting up certificates
	I0819 18:01:37.957820  390826 provision.go:84] configureAuth start
	I0819 18:01:37.957834  390826 main.go:141] libmachine: (ha-086149) Calling .GetMachineName
	I0819 18:01:37.958169  390826 main.go:141] libmachine: (ha-086149) Calling .GetIP
	I0819 18:01:37.961063  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:37.961475  390826 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:01:37.961501  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:37.961659  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:01:37.964114  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:37.964462  390826 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:01:37.964485  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:37.964677  390826 provision.go:143] copyHostCerts
	I0819 18:01:37.964713  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 18:01:37.964759  390826 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem, removing ...
	I0819 18:01:37.964776  390826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 18:01:37.964850  390826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem (1123 bytes)
	I0819 18:01:37.964968  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 18:01:37.964987  390826 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem, removing ...
	I0819 18:01:37.965004  390826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 18:01:37.965034  390826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem (1675 bytes)
	I0819 18:01:37.965088  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 18:01:37.965104  390826 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem, removing ...
	I0819 18:01:37.965108  390826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 18:01:37.965133  390826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem (1082 bytes)
	I0819 18:01:37.965234  390826 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem org=jenkins.ha-086149 san=[127.0.0.1 192.168.39.249 ha-086149 localhost minikube]
	I0819 18:01:38.173183  390826 provision.go:177] copyRemoteCerts
	I0819 18:01:38.173246  390826 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 18:01:38.173275  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:01:38.175851  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:38.176095  390826 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:01:38.176128  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:38.176282  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:01:38.176497  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:01:38.176665  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:01:38.176833  390826 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/id_rsa Username:docker}
	I0819 18:01:38.257560  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 18:01:38.257639  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 18:01:38.284684  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 18:01:38.284752  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0819 18:01:38.309385  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 18:01:38.309447  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 18:01:38.333123  390826 provision.go:87] duration metric: took 375.286063ms to configureAuth
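configureAuth above generates a server certificate signed by the existing minikubeCA, with the SANs listed in the log (127.0.0.1, 192.168.39.249, ha-086149, localhost, minikube), and then copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A self-contained sketch of issuing such a SAN-bearing server certificate with Go's crypto/x509; here a throwaway CA is generated instead of loading minikube's existing ca.pem/ca-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA key pair (minikube reuses its existing CA instead).
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	// Server certificate with the same SAN set the log reports.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-086149"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-086149", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.249")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}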
	I0819 18:01:38.333155  390826 buildroot.go:189] setting minikube options for container-runtime
	I0819 18:01:38.333397  390826 config.go:182] Loaded profile config "ha-086149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:01:38.333516  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:01:38.335910  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:38.336207  390826 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:01:38.336232  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:38.336374  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:01:38.336579  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:01:38.336758  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:01:38.336911  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:01:38.337075  390826 main.go:141] libmachine: Using SSH client type: native
	I0819 18:01:38.337341  390826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0819 18:01:38.337363  390826 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 18:01:38.598506  390826 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 18:01:38.598543  390826 main.go:141] libmachine: Checking connection to Docker...
	I0819 18:01:38.598553  390826 main.go:141] libmachine: (ha-086149) Calling .GetURL
	I0819 18:01:38.599830  390826 main.go:141] libmachine: (ha-086149) DBG | Using libvirt version 6000000
	I0819 18:01:38.603049  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:38.603455  390826 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:01:38.603479  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:38.603662  390826 main.go:141] libmachine: Docker is up and running!
	I0819 18:01:38.603695  390826 main.go:141] libmachine: Reticulating splines...
	I0819 18:01:38.603704  390826 client.go:171] duration metric: took 24.273868888s to LocalClient.Create
	I0819 18:01:38.603734  390826 start.go:167] duration metric: took 24.273933922s to libmachine.API.Create "ha-086149"
	I0819 18:01:38.603746  390826 start.go:293] postStartSetup for "ha-086149" (driver="kvm2")
	I0819 18:01:38.603759  390826 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 18:01:38.603780  390826 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:01:38.604028  390826 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 18:01:38.604059  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:01:38.606363  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:38.606683  390826 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:01:38.606703  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:38.606858  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:01:38.607012  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:01:38.607149  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:01:38.607289  390826 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/id_rsa Username:docker}
	I0819 18:01:38.686072  390826 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 18:01:38.690382  390826 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 18:01:38.690411  390826 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/addons for local assets ...
	I0819 18:01:38.690477  390826 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/files for local assets ...
	I0819 18:01:38.690547  390826 filesync.go:149] local asset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> 3800092.pem in /etc/ssl/certs
	I0819 18:01:38.690556  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> /etc/ssl/certs/3800092.pem
	I0819 18:01:38.690647  390826 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 18:01:38.700129  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 18:01:38.725376  390826 start.go:296] duration metric: took 121.612672ms for postStartSetup
	I0819 18:01:38.725438  390826 main.go:141] libmachine: (ha-086149) Calling .GetConfigRaw
	I0819 18:01:38.726203  390826 main.go:141] libmachine: (ha-086149) Calling .GetIP
	I0819 18:01:38.728817  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:38.729168  390826 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:01:38.729189  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:38.729441  390826 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/config.json ...
	I0819 18:01:38.729623  390826 start.go:128] duration metric: took 24.417747393s to createHost
	I0819 18:01:38.729647  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:01:38.731878  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:38.732140  390826 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:01:38.732174  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:38.732297  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:01:38.732481  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:01:38.732618  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:01:38.732709  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:01:38.732872  390826 main.go:141] libmachine: Using SSH client type: native
	I0819 18:01:38.733034  390826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0819 18:01:38.733047  390826 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 18:01:38.832329  390826 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724090498.808951790
	
	I0819 18:01:38.832355  390826 fix.go:216] guest clock: 1724090498.808951790
	I0819 18:01:38.832365  390826 fix.go:229] Guest: 2024-08-19 18:01:38.80895179 +0000 UTC Remote: 2024-08-19 18:01:38.729636292 +0000 UTC m=+24.523532707 (delta=79.315498ms)
	I0819 18:01:38.832393  390826 fix.go:200] guest clock delta is within tolerance: 79.315498ms
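The guest-clock check above runs `date +%s.%N` over SSH, parses the result, and compares it with the host clock; provisioning proceeds because the delta (here roughly 79ms) is within tolerance. A small sketch of that comparison; the 2-second tolerance constant is an assumption, the log does not state the actual threshold:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestClock parses the output of `date +%s.%N` run on the guest, as in the log.
func guestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := guestClock("1724090498.808951790\n")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // illustrative threshold
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
}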
	I0819 18:01:38.832402  390826 start.go:83] releasing machines lock for "ha-086149", held for 24.520619381s
	I0819 18:01:38.832430  390826 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:01:38.832727  390826 main.go:141] libmachine: (ha-086149) Calling .GetIP
	I0819 18:01:38.835361  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:38.835631  390826 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:01:38.835661  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:38.835753  390826 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:01:38.836218  390826 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:01:38.836367  390826 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:01:38.836443  390826 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 18:01:38.836492  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:01:38.836568  390826 ssh_runner.go:195] Run: cat /version.json
	I0819 18:01:38.836594  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:01:38.839021  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:38.839317  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:38.839529  390826 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:01:38.839556  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:38.839615  390826 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:01:38.839632  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:38.839691  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:01:38.839877  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:01:38.839879  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:01:38.840170  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:01:38.840181  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:01:38.840341  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:01:38.840337  390826 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/id_rsa Username:docker}
	I0819 18:01:38.840488  390826 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/id_rsa Username:docker}
	I0819 18:01:38.935481  390826 ssh_runner.go:195] Run: systemctl --version
	I0819 18:01:38.941309  390826 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 18:01:39.096352  390826 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 18:01:39.102482  390826 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 18:01:39.102559  390826 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 18:01:39.118206  390826 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 18:01:39.118237  390826 start.go:495] detecting cgroup driver to use...
	I0819 18:01:39.118319  390826 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 18:01:39.134273  390826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 18:01:39.148062  390826 docker.go:217] disabling cri-docker service (if available) ...
	I0819 18:01:39.148121  390826 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 18:01:39.161462  390826 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 18:01:39.175269  390826 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 18:01:39.293356  390826 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 18:01:39.445330  390826 docker.go:233] disabling docker service ...
	I0819 18:01:39.445414  390826 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 18:01:39.460090  390826 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 18:01:39.472790  390826 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 18:01:39.616740  390826 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 18:01:39.734300  390826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 18:01:39.748386  390826 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 18:01:39.766358  390826 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 18:01:39.766422  390826 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:01:39.776623  390826 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 18:01:39.776685  390826 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:01:39.786691  390826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:01:39.796640  390826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:01:39.806683  390826 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 18:01:39.816903  390826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:01:39.826798  390826 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:01:39.843421  390826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:01:39.853381  390826 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 18:01:39.862235  390826 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 18:01:39.862289  390826 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 18:01:39.874809  390826 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 18:01:39.883569  390826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:01:39.998358  390826 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 18:01:40.135661  390826 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 18:01:40.135757  390826 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 18:01:40.140313  390826 start.go:563] Will wait 60s for crictl version
	I0819 18:01:40.140376  390826 ssh_runner.go:195] Run: which crictl
	I0819 18:01:40.144077  390826 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 18:01:40.180775  390826 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 18:01:40.180864  390826 ssh_runner.go:195] Run: crio --version
	I0819 18:01:40.210079  390826 ssh_runner.go:195] Run: crio --version
	I0819 18:01:40.240165  390826 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 18:01:40.241358  390826 main.go:141] libmachine: (ha-086149) Calling .GetIP
	I0819 18:01:40.244054  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:40.244407  390826 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:01:40.244433  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:40.244638  390826 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 18:01:40.248760  390826 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 18:01:40.262105  390826 kubeadm.go:883] updating cluster {Name:ha-086149 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-086149 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 18:01:40.262241  390826 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 18:01:40.262306  390826 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 18:01:40.294822  390826 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 18:01:40.294904  390826 ssh_runner.go:195] Run: which lz4
	I0819 18:01:40.298591  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0819 18:01:40.298677  390826 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 18:01:40.302618  390826 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 18:01:40.302644  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 18:01:41.653130  390826 crio.go:462] duration metric: took 1.354478555s to copy over tarball
	I0819 18:01:41.653222  390826 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 18:01:43.658136  390826 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.004875453s)
	I0819 18:01:43.658164  390826 crio.go:469] duration metric: took 2.005002364s to extract the tarball
	I0819 18:01:43.658171  390826 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 18:01:43.697217  390826 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 18:01:43.745822  390826 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 18:01:43.745847  390826 cache_images.go:84] Images are preloaded, skipping loading
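Because the first `crictl images` run found nothing, the preloaded image tarball (~389MB, lz4-compressed) is copied to the guest and unpacked into /var, after which the second `crictl images` run reports all images as preloaded. The extraction is the tar invocation shown in the log; a sketch of running the same command with os/exec (it must run as root, as the sudo in the log implies):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Unpack the lz4-compressed preload into /var so CRI-O finds the image store already populated.
	cmd := exec.Command("tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", // decompress with lz4, matching the .tar.lz4 preload
		"-C", "/var",
		"-xf", "/preloaded.tar.lz4",
	)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract preload: %v\n%s", err, out)
	}
	log.Println("preload extracted; `crictl images` should now report the preloaded images")
}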
	I0819 18:01:43.745858  390826 kubeadm.go:934] updating node { 192.168.39.249 8443 v1.31.0 crio true true} ...
	I0819 18:01:43.746007  390826 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-086149 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.249
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-086149 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 18:01:43.746105  390826 ssh_runner.go:195] Run: crio config
	I0819 18:01:43.791378  390826 cni.go:84] Creating CNI manager for ""
	I0819 18:01:43.791406  390826 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0819 18:01:43.791428  390826 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 18:01:43.791459  390826 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.249 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-086149 NodeName:ha-086149 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.249"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.249 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 18:01:43.791667  390826 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.249
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-086149"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.249
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.249"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
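The kubeadm config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one file) is rendered from the options shown at kubeadm.go:181 and later written to /var/tmp/minikube/kubeadm.yaml.new. A much-simplified, hypothetical sketch of rendering such a config with text/template, filled with values from this run; minikube's real template is larger and covers every section shown above:

package main

import (
	"log"
	"os"
	"text/template"
)

// Trimmed-down ClusterConfiguration template; field names are illustrative only.
const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: mk
controlPlaneEndpoint: {{.ControlPlaneEndpoint}}:{{.Port}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

type cfg struct {
	ControlPlaneEndpoint string
	Port                 int
	KubernetesVersion    string
	DNSDomain            string
	PodSubnet            string
	ServiceSubnet        string
}

func main() {
	t := template.Must(template.New("kubeadm").Parse(clusterCfg))
	err := t.Execute(os.Stdout, cfg{
		ControlPlaneEndpoint: "control-plane.minikube.internal",
		Port:                 8443,
		KubernetesVersion:    "v1.31.0",
		DNSDomain:            "cluster.local",
		PodSubnet:            "10.244.0.0/16",
		ServiceSubnet:        "10.96.0.0/12",
	})
	if err != nil {
		log.Fatal(err)
	}
}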
	I0819 18:01:43.791719  390826 kube-vip.go:115] generating kube-vip config ...
	I0819 18:01:43.791775  390826 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 18:01:43.808159  390826 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 18:01:43.808286  390826 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
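	Note: this static pod is how the HA VIP (192.168.39.254, the APIServerHAVIP in the cluster config below) stays reachable: kube-vip takes a lease named plndr-cp-lock in kube-system, announces the address on eth0 of the current leader, and load-balances port 8443 across control planes. A minimal sketch for checking it from a node, assuming the values in the manifest above:

	    # Does this node currently hold the VIP announced by kube-vip?
	    ip addr show eth0 | grep -w 192.168.39.254
	    # /version is readable anonymously (system:public-info-viewer), so this exercises the VIP end to end
	    curl -sk https://192.168.39.254:8443/version
	    # Leader-election lease created by kube-vip (name taken from vip_leasename above)
	    sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	        -n kube-system get lease plndr-cp-lock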
	I0819 18:01:43.808341  390826 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 18:01:43.818120  390826 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 18:01:43.818166  390826 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0819 18:01:43.827346  390826 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0819 18:01:43.843358  390826 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 18:01:43.859459  390826 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0819 18:01:43.875500  390826 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0819 18:01:43.891118  390826 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0819 18:01:43.894940  390826 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
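	Note: the one-liner above is an idempotent /etc/hosts update: grep -v drops any stale control-plane.minikube.internal entry, echo appends the current VIP mapping, and the temp file is copied back over /etc/hosts. The same steps spelled out (hypothetical temp path):

	    # Keep every line except a previous control-plane.minikube.internal mapping
	    grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/hosts.new
	    # Append the VIP that the certificates and kubeconfig point at
	    printf '192.168.39.254\tcontrol-plane.minikube.internal\n' >> /tmp/hosts.new
	    sudo cp /tmp/hosts.new /etc/hosts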
	I0819 18:01:43.906694  390826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:01:44.019755  390826 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 18:01:44.037206  390826 certs.go:68] Setting up /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149 for IP: 192.168.39.249
	I0819 18:01:44.037233  390826 certs.go:194] generating shared ca certs ...
	I0819 18:01:44.037250  390826 certs.go:226] acquiring lock for ca certs: {Name:mk639e03f593e0bccac045f6e9f5ba3b96cc81e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:01:44.037395  390826 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key
	I0819 18:01:44.037430  390826 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key
	I0819 18:01:44.037439  390826 certs.go:256] generating profile certs ...
	I0819 18:01:44.037486  390826 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/client.key
	I0819 18:01:44.037513  390826 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/client.crt with IP's: []
	I0819 18:01:44.154467  390826 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/client.crt ...
	I0819 18:01:44.154501  390826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/client.crt: {Name:mk258075469b347e17ae9e52e38a8f7b4d8898f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:01:44.154664  390826 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/client.key ...
	I0819 18:01:44.154675  390826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/client.key: {Name:mkb5a4a095ddf05a1ffc45a14947f43ab1e167d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:01:44.154759  390826 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key.2127eae6
	I0819 18:01:44.154775  390826 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt.2127eae6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.249 192.168.39.254]
	I0819 18:01:44.407450  390826 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt.2127eae6 ...
	I0819 18:01:44.407483  390826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt.2127eae6: {Name:mkaa4255cf0215780e52d06d0978b9ef66e9383c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:01:44.407659  390826 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key.2127eae6 ...
	I0819 18:01:44.407689  390826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key.2127eae6: {Name:mk13449ba75342bd86a357e19023a42b69429c07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:01:44.407769  390826 certs.go:381] copying /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt.2127eae6 -> /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt
	I0819 18:01:44.407871  390826 certs.go:385] copying /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key.2127eae6 -> /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key
	I0819 18:01:44.407938  390826 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/proxy-client.key
	I0819 18:01:44.407954  390826 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/proxy-client.crt with IP's: []
	I0819 18:01:44.659255  390826 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/proxy-client.crt ...
	I0819 18:01:44.659286  390826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/proxy-client.crt: {Name:mk8161be27b842429a94ece9edfb4c7103e5dd4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:01:44.659443  390826 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/proxy-client.key ...
	I0819 18:01:44.659454  390826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/proxy-client.key: {Name:mk49fe1209981c015e7b47bc5acccdb54fa003fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:01:44.659523  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 18:01:44.659544  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 18:01:44.659557  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 18:01:44.659567  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 18:01:44.659580  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 18:01:44.659591  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 18:01:44.659603  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 18:01:44.659616  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 18:01:44.659670  390826 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem (1338 bytes)
	W0819 18:01:44.659721  390826 certs.go:480] ignoring /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009_empty.pem, impossibly tiny 0 bytes
	I0819 18:01:44.659731  390826 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 18:01:44.659752  390826 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem (1082 bytes)
	I0819 18:01:44.659774  390826 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem (1123 bytes)
	I0819 18:01:44.659794  390826 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem (1675 bytes)
	I0819 18:01:44.659829  390826 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 18:01:44.659857  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:01:44.659871  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem -> /usr/share/ca-certificates/380009.pem
	I0819 18:01:44.659884  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> /usr/share/ca-certificates/3800092.pem
	I0819 18:01:44.660513  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 18:01:44.686415  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 18:01:44.714836  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 18:01:44.742072  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 18:01:44.769557  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 18:01:44.801060  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 18:01:44.847181  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 18:01:44.886103  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 18:01:44.912931  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 18:01:44.939740  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem --> /usr/share/ca-certificates/380009.pem (1338 bytes)
	I0819 18:01:44.966553  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /usr/share/ca-certificates/3800092.pem (1708 bytes)
	I0819 18:01:44.993733  390826 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 18:01:45.012583  390826 ssh_runner.go:195] Run: openssl version
	I0819 18:01:45.018619  390826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/380009.pem && ln -fs /usr/share/ca-certificates/380009.pem /etc/ssl/certs/380009.pem"
	I0819 18:01:45.030131  390826 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/380009.pem
	I0819 18:01:45.035072  390826 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:56 /usr/share/ca-certificates/380009.pem
	I0819 18:01:45.035138  390826 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/380009.pem
	I0819 18:01:45.041228  390826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/380009.pem /etc/ssl/certs/51391683.0"
	I0819 18:01:45.052433  390826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3800092.pem && ln -fs /usr/share/ca-certificates/3800092.pem /etc/ssl/certs/3800092.pem"
	I0819 18:01:45.063641  390826 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3800092.pem
	I0819 18:01:45.068462  390826 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:56 /usr/share/ca-certificates/3800092.pem
	I0819 18:01:45.068527  390826 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3800092.pem
	I0819 18:01:45.074375  390826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3800092.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 18:01:45.085468  390826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 18:01:45.096018  390826 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:01:45.100715  390826 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:01:45.100771  390826 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:01:45.106535  390826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
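	Note: the ls/openssl/ln sequence above installs each CA into the system trust store the way OpenSSL expects: the certificate is linked under /etc/ssl/certs as <subject-hash>.0, where the hash comes from openssl x509 -hash. Reproducing the check for the minikube CA from this run:

	    # Prints b5213941, the hash used for the symlink created above
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    # The CApath-style link that makes the CA discoverable to OpenSSL consumers
	    ls -l /etc/ssl/certs/b5213941.0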
	I0819 18:01:45.117371  390826 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 18:01:45.121839  390826 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 18:01:45.121891  390826 kubeadm.go:392] StartCluster: {Name:ha-086149 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-086149 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:01:45.121970  390826 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 18:01:45.122022  390826 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 18:01:45.164294  390826 cri.go:89] found id: ""
	I0819 18:01:45.164366  390826 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 18:01:45.174823  390826 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 18:01:45.184977  390826 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 18:01:45.198329  390826 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 18:01:45.198350  390826 kubeadm.go:157] found existing configuration files:
	
	I0819 18:01:45.198399  390826 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 18:01:45.209542  390826 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 18:01:45.209593  390826 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 18:01:45.219539  390826 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 18:01:45.228956  390826 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 18:01:45.229021  390826 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 18:01:45.238691  390826 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 18:01:45.248330  390826 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 18:01:45.248400  390826 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 18:01:45.258511  390826 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 18:01:45.273668  390826 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 18:01:45.273735  390826 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 18:01:45.283470  390826 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 18:01:45.396768  390826 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 18:01:45.396886  390826 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 18:01:45.493304  390826 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 18:01:45.493445  390826 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 18:01:45.493562  390826 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 18:01:45.504233  390826 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 18:01:45.572693  390826 out.go:235]   - Generating certificates and keys ...
	I0819 18:01:45.572859  390826 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 18:01:45.572953  390826 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 18:01:45.952901  390826 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 18:01:46.141101  390826 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 18:01:46.225834  390826 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 18:01:46.393564  390826 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 18:01:46.498486  390826 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 18:01:46.498651  390826 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-086149 localhost] and IPs [192.168.39.249 127.0.0.1 ::1]
	I0819 18:01:46.611046  390826 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 18:01:46.611211  390826 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-086149 localhost] and IPs [192.168.39.249 127.0.0.1 ::1]
	I0819 18:01:46.728113  390826 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 18:01:46.908159  390826 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 18:01:47.227993  390826 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 18:01:47.228204  390826 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 18:01:47.338009  390826 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 18:01:47.409840  390826 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 18:01:47.566221  390826 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 18:01:47.801677  390826 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 18:01:47.909159  390826 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 18:01:47.910131  390826 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 18:01:47.914891  390826 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 18:01:48.091438  390826 out.go:235]   - Booting up control plane ...
	I0819 18:01:48.091596  390826 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 18:01:48.091720  390826 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 18:01:48.091811  390826 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 18:01:48.091947  390826 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 18:01:48.092083  390826 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 18:01:48.092140  390826 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 18:01:48.092324  390826 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 18:01:48.092472  390826 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 18:01:48.586342  390826 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.436454ms
	I0819 18:01:48.586444  390826 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 18:01:54.544621  390826 kubeadm.go:310] [api-check] The API server is healthy after 5.961720563s
	I0819 18:01:54.561358  390826 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 18:01:54.579538  390826 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 18:01:54.611082  390826 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 18:01:54.611350  390826 kubeadm.go:310] [mark-control-plane] Marking the node ha-086149 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 18:01:54.633582  390826 kubeadm.go:310] [bootstrap-token] Using token: 6ctgsc.y7paq351y1edkj9k
	I0819 18:01:54.634932  390826 out.go:235]   - Configuring RBAC rules ...
	I0819 18:01:54.635053  390826 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 18:01:54.639884  390826 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 18:01:54.652039  390826 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 18:01:54.655851  390826 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 18:01:54.661049  390826 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 18:01:54.664499  390826 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 18:01:54.957667  390826 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 18:01:55.386738  390826 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 18:01:55.956556  390826 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 18:01:55.957757  390826 kubeadm.go:310] 
	I0819 18:01:55.957856  390826 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 18:01:55.957866  390826 kubeadm.go:310] 
	I0819 18:01:55.957957  390826 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 18:01:55.957965  390826 kubeadm.go:310] 
	I0819 18:01:55.957996  390826 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 18:01:55.958073  390826 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 18:01:55.958162  390826 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 18:01:55.958171  390826 kubeadm.go:310] 
	I0819 18:01:55.958252  390826 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 18:01:55.958260  390826 kubeadm.go:310] 
	I0819 18:01:55.958325  390826 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 18:01:55.958334  390826 kubeadm.go:310] 
	I0819 18:01:55.958398  390826 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 18:01:55.958506  390826 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 18:01:55.958586  390826 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 18:01:55.958603  390826 kubeadm.go:310] 
	I0819 18:01:55.958710  390826 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 18:01:55.958810  390826 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 18:01:55.958820  390826 kubeadm.go:310] 
	I0819 18:01:55.958924  390826 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 6ctgsc.y7paq351y1edkj9k \
	I0819 18:01:55.959068  390826 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3fcbd90565c5acbc36a47b2db682cb22dce9b172c9bf3af21e506ebb67608039 \
	I0819 18:01:55.959109  390826 kubeadm.go:310] 	--control-plane 
	I0819 18:01:55.959117  390826 kubeadm.go:310] 
	I0819 18:01:55.959228  390826 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 18:01:55.959241  390826 kubeadm.go:310] 
	I0819 18:01:55.959312  390826 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 6ctgsc.y7paq351y1edkj9k \
	I0819 18:01:55.959413  390826 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3fcbd90565c5acbc36a47b2db682cb22dce9b172c9bf3af21e506ebb67608039 
	I0819 18:01:55.960426  390826 kubeadm.go:310] W0819 18:01:45.377364     856 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 18:01:55.960761  390826 kubeadm.go:310] W0819 18:01:45.378188     856 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 18:01:55.960885  390826 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
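	Note: the join commands printed above pin the cluster CA via --discovery-token-ca-cert-hash (sha256 of the CA public key in DER form). If the bootstrap token expires, both pieces can be regenerated on the control plane; a sketch using the certificatesDir from the config above:

	    # Recompute the CA public key pin
	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	        | openssl rsa -pubin -outform der 2>/dev/null \
	        | openssl dgst -sha256 -hex | sed 's/^.* //'
	    # Or have kubeadm mint a fresh token and print the full join command
	    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm token create --print-join-command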
	I0819 18:01:55.960917  390826 cni.go:84] Creating CNI manager for ""
	I0819 18:01:55.960930  390826 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0819 18:01:55.962469  390826 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0819 18:01:55.963759  390826 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0819 18:01:55.969468  390826 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0819 18:01:55.969489  390826 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0819 18:01:55.989602  390826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
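	Note: minikube picked kindnet as the CNI because more nodes are expected (see the "multinode detected" line above) and applied its manifest with the staged kubectl. A quick sanity check after the apply; the app=kindnet label is an assumption about the kindnet manifest, not something shown in this log:

	    # CNI config dropped under /etc/cni/net.d once the plugin pods start
	    ls /etc/cni/net.d/
	    # kindnet pods (label assumed from the upstream kindnet DaemonSet)
	    sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	        -n kube-system get pods -l app=kindnet -o wide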
	I0819 18:01:56.347021  390826 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 18:01:56.347178  390826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:01:56.347192  390826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-086149 minikube.k8s.io/updated_at=2024_08_19T18_01_56_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411 minikube.k8s.io/name=ha-086149 minikube.k8s.io/primary=true
	I0819 18:01:56.402969  390826 ops.go:34] apiserver oom_adj: -16
	I0819 18:01:56.560334  390826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:01:57.060392  390826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:01:57.560936  390826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:01:58.060657  390826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:01:58.560717  390826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:01:59.060436  390826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:01:59.560902  390826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:01:59.706545  390826 kubeadm.go:1113] duration metric: took 3.359439383s to wait for elevateKubeSystemPrivileges
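	Note: the repeated "get sa default" calls above are a ~500ms poll loop: minikube waits for the controller-manager to create the default ServiceAccount before considering the RBAC elevation (the minikube-rbac clusterrolebinding issued earlier) complete. The same wait expressed directly:

	    # Poll until the default ServiceAccount exists, mirroring the loop above
	    until sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default \
	          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	        sleep 0.5
	    done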
	I0819 18:01:59.706592  390826 kubeadm.go:394] duration metric: took 14.584706319s to StartCluster
	I0819 18:01:59.706620  390826 settings.go:142] acquiring lock: {Name:mk396fcf49a1d0e69583cf37ff3c819e37118163 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:01:59.706712  390826 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 18:01:59.707624  390826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/kubeconfig: {Name:mk8e7b4e1bb7da665111d2acd83eb48882c66853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:01:59.708143  390826 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 18:01:59.708183  390826 start.go:241] waiting for startup goroutines ...
	I0819 18:01:59.708165  390826 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 18:01:59.708260  390826 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 18:01:59.708346  390826 addons.go:69] Setting storage-provisioner=true in profile "ha-086149"
	I0819 18:01:59.708374  390826 addons.go:69] Setting default-storageclass=true in profile "ha-086149"
	I0819 18:01:59.708382  390826 addons.go:234] Setting addon storage-provisioner=true in "ha-086149"
	I0819 18:01:59.708388  390826 config.go:182] Loaded profile config "ha-086149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:01:59.708411  390826 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-086149"
	I0819 18:01:59.708421  390826 host.go:66] Checking if "ha-086149" exists ...
	I0819 18:01:59.708836  390826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:01:59.708857  390826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:01:59.708877  390826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:01:59.708880  390826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:01:59.724644  390826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43111
	I0819 18:01:59.724698  390826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43475
	I0819 18:01:59.725176  390826 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:01:59.725182  390826 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:01:59.725736  390826 main.go:141] libmachine: Using API Version  1
	I0819 18:01:59.725765  390826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:01:59.726062  390826 main.go:141] libmachine: Using API Version  1
	I0819 18:01:59.726084  390826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:01:59.726116  390826 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:01:59.726335  390826 main.go:141] libmachine: (ha-086149) Calling .GetState
	I0819 18:01:59.726378  390826 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:01:59.726922  390826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:01:59.726953  390826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:01:59.728551  390826 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 18:01:59.728761  390826 kapi.go:59] client config for ha-086149: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/client.crt", KeyFile:"/home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/client.key", CAFile:"/home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f18d20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 18:01:59.729258  390826 cert_rotation.go:140] Starting client certificate rotation controller
	I0819 18:01:59.729544  390826 addons.go:234] Setting addon default-storageclass=true in "ha-086149"
	I0819 18:01:59.729585  390826 host.go:66] Checking if "ha-086149" exists ...
	I0819 18:01:59.729959  390826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:01:59.729986  390826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:01:59.743354  390826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38843
	I0819 18:01:59.743855  390826 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:01:59.744462  390826 main.go:141] libmachine: Using API Version  1
	I0819 18:01:59.744497  390826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:01:59.744852  390826 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:01:59.745068  390826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33009
	I0819 18:01:59.745095  390826 main.go:141] libmachine: (ha-086149) Calling .GetState
	I0819 18:01:59.745585  390826 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:01:59.746106  390826 main.go:141] libmachine: Using API Version  1
	I0819 18:01:59.746133  390826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:01:59.746481  390826 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:01:59.746971  390826 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:01:59.746976  390826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:01:59.747052  390826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:01:59.748802  390826 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 18:01:59.750137  390826 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 18:01:59.750160  390826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 18:01:59.750181  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:01:59.753011  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:59.753394  390826 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:01:59.753422  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:59.753577  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:01:59.753788  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:01:59.753953  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:01:59.754110  390826 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/id_rsa Username:docker}
	I0819 18:01:59.763234  390826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43467
	I0819 18:01:59.763643  390826 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:01:59.764166  390826 main.go:141] libmachine: Using API Version  1
	I0819 18:01:59.764199  390826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:01:59.764552  390826 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:01:59.764777  390826 main.go:141] libmachine: (ha-086149) Calling .GetState
	I0819 18:01:59.766331  390826 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:01:59.766600  390826 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 18:01:59.766617  390826 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 18:01:59.766641  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:01:59.769152  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:59.769554  390826 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:01:59.769577  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:59.769732  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:01:59.769958  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:01:59.770156  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:01:59.770314  390826 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/id_rsa Username:docker}
	I0819 18:01:59.853383  390826 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0819 18:01:59.925462  390826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 18:01:59.935249  390826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 18:02:00.590000  390826 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
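	Note: the sed pipeline a few lines up rewrites CoreDNS's Corefile so pods can resolve host.minikube.internal to the host-side gateway 192.168.39.1, and also inserts a log directive ahead of errors. A way to confirm the injected stanza (reconstructed from the sed expressions, so indentation may differ):

	    # Dump the live Corefile; just before "forward . /etc/resolv.conf" it should now contain:
	    #   hosts {
	    #      192.168.39.1 host.minikube.internal
	    #      fallthrough
	    #   }
	    sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	        -n kube-system get configmap coredns -o yaml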
	I0819 18:02:00.837686  390826 main.go:141] libmachine: Making call to close driver server
	I0819 18:02:00.837715  390826 main.go:141] libmachine: (ha-086149) Calling .Close
	I0819 18:02:00.837781  390826 main.go:141] libmachine: Making call to close driver server
	I0819 18:02:00.837806  390826 main.go:141] libmachine: (ha-086149) Calling .Close
	I0819 18:02:00.838145  390826 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:02:00.838163  390826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:02:00.838175  390826 main.go:141] libmachine: Making call to close driver server
	I0819 18:02:00.838183  390826 main.go:141] libmachine: (ha-086149) Calling .Close
	I0819 18:02:00.838319  390826 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:02:00.838341  390826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:02:00.838351  390826 main.go:141] libmachine: Making call to close driver server
	I0819 18:02:00.838359  390826 main.go:141] libmachine: (ha-086149) Calling .Close
	I0819 18:02:00.838319  390826 main.go:141] libmachine: (ha-086149) DBG | Closing plugin on server side
	I0819 18:02:00.838530  390826 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:02:00.838553  390826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:02:00.838553  390826 main.go:141] libmachine: (ha-086149) DBG | Closing plugin on server side
	I0819 18:02:00.838703  390826 main.go:141] libmachine: (ha-086149) DBG | Closing plugin on server side
	I0819 18:02:00.838749  390826 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:02:00.838758  390826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:02:00.838848  390826 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0819 18:02:00.838866  390826 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0819 18:02:00.839004  390826 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0819 18:02:00.839015  390826 round_trippers.go:469] Request Headers:
	I0819 18:02:00.839026  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:02:00.839036  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:02:00.857763  390826 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0819 18:02:00.858372  390826 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0819 18:02:00.858388  390826 round_trippers.go:469] Request Headers:
	I0819 18:02:00.858395  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:02:00.858400  390826 round_trippers.go:473]     Content-Type: application/json
	I0819 18:02:00.858404  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:02:00.860823  390826 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:02:00.860981  390826 main.go:141] libmachine: Making call to close driver server
	I0819 18:02:00.860994  390826 main.go:141] libmachine: (ha-086149) Calling .Close
	I0819 18:02:00.861329  390826 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:02:00.861357  390826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:02:00.861358  390826 main.go:141] libmachine: (ha-086149) DBG | Closing plugin on server side
	I0819 18:02:00.863225  390826 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0819 18:02:00.864484  390826 addons.go:510] duration metric: took 1.156242861s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0819 18:02:00.864518  390826 start.go:246] waiting for cluster config update ...
	I0819 18:02:00.864533  390826 start.go:255] writing updated cluster config ...
	I0819 18:02:00.866115  390826 out.go:201] 
	I0819 18:02:00.867539  390826 config.go:182] Loaded profile config "ha-086149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:02:00.867643  390826 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/config.json ...
	I0819 18:02:00.869430  390826 out.go:177] * Starting "ha-086149-m02" control-plane node in "ha-086149" cluster
	I0819 18:02:00.870522  390826 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 18:02:00.870541  390826 cache.go:56] Caching tarball of preloaded images
	I0819 18:02:00.870622  390826 preload.go:172] Found /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 18:02:00.870633  390826 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 18:02:00.870710  390826 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/config.json ...
	I0819 18:02:00.870888  390826 start.go:360] acquireMachinesLock for ha-086149-m02: {Name:mk24ba67a747357e9ce40f1e460d2bb0bc59cc75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 18:02:00.870936  390826 start.go:364] duration metric: took 27.935µs to acquireMachinesLock for "ha-086149-m02"
	I0819 18:02:00.870957  390826 start.go:93] Provisioning new machine with config: &{Name:ha-086149 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.0 ClusterName:ha-086149 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 18:02:00.871042  390826 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0819 18:02:00.872431  390826 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 18:02:00.872509  390826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:02:00.872533  390826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:02:00.887364  390826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45653
	I0819 18:02:00.887803  390826 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:02:00.888322  390826 main.go:141] libmachine: Using API Version  1
	I0819 18:02:00.888343  390826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:02:00.888660  390826 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:02:00.888876  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetMachineName
	I0819 18:02:00.889071  390826 main.go:141] libmachine: (ha-086149-m02) Calling .DriverName
	I0819 18:02:00.889242  390826 start.go:159] libmachine.API.Create for "ha-086149" (driver="kvm2")
	I0819 18:02:00.889272  390826 client.go:168] LocalClient.Create starting
	I0819 18:02:00.889310  390826 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem
	I0819 18:02:00.889349  390826 main.go:141] libmachine: Decoding PEM data...
	I0819 18:02:00.889369  390826 main.go:141] libmachine: Parsing certificate...
	I0819 18:02:00.889443  390826 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem
	I0819 18:02:00.889473  390826 main.go:141] libmachine: Decoding PEM data...
	I0819 18:02:00.889489  390826 main.go:141] libmachine: Parsing certificate...
	I0819 18:02:00.889516  390826 main.go:141] libmachine: Running pre-create checks...
	I0819 18:02:00.889526  390826 main.go:141] libmachine: (ha-086149-m02) Calling .PreCreateCheck
	I0819 18:02:00.889701  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetConfigRaw
	I0819 18:02:00.890132  390826 main.go:141] libmachine: Creating machine...
	I0819 18:02:00.890150  390826 main.go:141] libmachine: (ha-086149-m02) Calling .Create
	I0819 18:02:00.890301  390826 main.go:141] libmachine: (ha-086149-m02) Creating KVM machine...
	I0819 18:02:00.891513  390826 main.go:141] libmachine: (ha-086149-m02) DBG | found existing default KVM network
	I0819 18:02:00.891656  390826 main.go:141] libmachine: (ha-086149-m02) DBG | found existing private KVM network mk-ha-086149
	I0819 18:02:00.891788  390826 main.go:141] libmachine: (ha-086149-m02) Setting up store path in /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m02 ...
	I0819 18:02:00.891816  390826 main.go:141] libmachine: (ha-086149-m02) Building disk image from file:///home/jenkins/minikube-integration/19468-372744/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 18:02:00.891883  390826 main.go:141] libmachine: (ha-086149-m02) DBG | I0819 18:02:00.891770  391194 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 18:02:00.891984  390826 main.go:141] libmachine: (ha-086149-m02) Downloading /home/jenkins/minikube-integration/19468-372744/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19468-372744/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 18:02:01.163735  390826 main.go:141] libmachine: (ha-086149-m02) DBG | I0819 18:02:01.163579  391194 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m02/id_rsa...
	I0819 18:02:01.344183  390826 main.go:141] libmachine: (ha-086149-m02) DBG | I0819 18:02:01.344042  391194 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m02/ha-086149-m02.rawdisk...
	I0819 18:02:01.344216  390826 main.go:141] libmachine: (ha-086149-m02) DBG | Writing magic tar header
	I0819 18:02:01.344227  390826 main.go:141] libmachine: (ha-086149-m02) DBG | Writing SSH key tar header
	I0819 18:02:01.344235  390826 main.go:141] libmachine: (ha-086149-m02) DBG | I0819 18:02:01.344169  391194 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m02 ...
	I0819 18:02:01.344299  390826 main.go:141] libmachine: (ha-086149-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m02
	I0819 18:02:01.344332  390826 main.go:141] libmachine: (ha-086149-m02) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m02 (perms=drwx------)
	I0819 18:02:01.344354  390826 main.go:141] libmachine: (ha-086149-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744/.minikube/machines
	I0819 18:02:01.344366  390826 main.go:141] libmachine: (ha-086149-m02) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744/.minikube/machines (perms=drwxr-xr-x)
	I0819 18:02:01.344379  390826 main.go:141] libmachine: (ha-086149-m02) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744/.minikube (perms=drwxr-xr-x)
	I0819 18:02:01.344386  390826 main.go:141] libmachine: (ha-086149-m02) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744 (perms=drwxrwxr-x)
	I0819 18:02:01.344394  390826 main.go:141] libmachine: (ha-086149-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 18:02:01.344403  390826 main.go:141] libmachine: (ha-086149-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 18:02:01.344417  390826 main.go:141] libmachine: (ha-086149-m02) Creating domain...
	I0819 18:02:01.344432  390826 main.go:141] libmachine: (ha-086149-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 18:02:01.344449  390826 main.go:141] libmachine: (ha-086149-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744
	I0819 18:02:01.344470  390826 main.go:141] libmachine: (ha-086149-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 18:02:01.344483  390826 main.go:141] libmachine: (ha-086149-m02) DBG | Checking permissions on dir: /home/jenkins
	I0819 18:02:01.344523  390826 main.go:141] libmachine: (ha-086149-m02) DBG | Checking permissions on dir: /home
	I0819 18:02:01.344552  390826 main.go:141] libmachine: (ha-086149-m02) DBG | Skipping /home - not owner
	I0819 18:02:01.345652  390826 main.go:141] libmachine: (ha-086149-m02) define libvirt domain using xml: 
	I0819 18:02:01.345680  390826 main.go:141] libmachine: (ha-086149-m02) <domain type='kvm'>
	I0819 18:02:01.345692  390826 main.go:141] libmachine: (ha-086149-m02)   <name>ha-086149-m02</name>
	I0819 18:02:01.345774  390826 main.go:141] libmachine: (ha-086149-m02)   <memory unit='MiB'>2200</memory>
	I0819 18:02:01.345825  390826 main.go:141] libmachine: (ha-086149-m02)   <vcpu>2</vcpu>
	I0819 18:02:01.345841  390826 main.go:141] libmachine: (ha-086149-m02)   <features>
	I0819 18:02:01.345850  390826 main.go:141] libmachine: (ha-086149-m02)     <acpi/>
	I0819 18:02:01.345860  390826 main.go:141] libmachine: (ha-086149-m02)     <apic/>
	I0819 18:02:01.345872  390826 main.go:141] libmachine: (ha-086149-m02)     <pae/>
	I0819 18:02:01.345916  390826 main.go:141] libmachine: (ha-086149-m02)     
	I0819 18:02:01.345927  390826 main.go:141] libmachine: (ha-086149-m02)   </features>
	I0819 18:02:01.345942  390826 main.go:141] libmachine: (ha-086149-m02)   <cpu mode='host-passthrough'>
	I0819 18:02:01.345953  390826 main.go:141] libmachine: (ha-086149-m02)   
	I0819 18:02:01.345964  390826 main.go:141] libmachine: (ha-086149-m02)   </cpu>
	I0819 18:02:01.345979  390826 main.go:141] libmachine: (ha-086149-m02)   <os>
	I0819 18:02:01.345989  390826 main.go:141] libmachine: (ha-086149-m02)     <type>hvm</type>
	I0819 18:02:01.346000  390826 main.go:141] libmachine: (ha-086149-m02)     <boot dev='cdrom'/>
	I0819 18:02:01.346008  390826 main.go:141] libmachine: (ha-086149-m02)     <boot dev='hd'/>
	I0819 18:02:01.346038  390826 main.go:141] libmachine: (ha-086149-m02)     <bootmenu enable='no'/>
	I0819 18:02:01.346059  390826 main.go:141] libmachine: (ha-086149-m02)   </os>
	I0819 18:02:01.346066  390826 main.go:141] libmachine: (ha-086149-m02)   <devices>
	I0819 18:02:01.346074  390826 main.go:141] libmachine: (ha-086149-m02)     <disk type='file' device='cdrom'>
	I0819 18:02:01.346083  390826 main.go:141] libmachine: (ha-086149-m02)       <source file='/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m02/boot2docker.iso'/>
	I0819 18:02:01.346091  390826 main.go:141] libmachine: (ha-086149-m02)       <target dev='hdc' bus='scsi'/>
	I0819 18:02:01.346097  390826 main.go:141] libmachine: (ha-086149-m02)       <readonly/>
	I0819 18:02:01.346105  390826 main.go:141] libmachine: (ha-086149-m02)     </disk>
	I0819 18:02:01.346112  390826 main.go:141] libmachine: (ha-086149-m02)     <disk type='file' device='disk'>
	I0819 18:02:01.346120  390826 main.go:141] libmachine: (ha-086149-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 18:02:01.346130  390826 main.go:141] libmachine: (ha-086149-m02)       <source file='/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m02/ha-086149-m02.rawdisk'/>
	I0819 18:02:01.346137  390826 main.go:141] libmachine: (ha-086149-m02)       <target dev='hda' bus='virtio'/>
	I0819 18:02:01.346143  390826 main.go:141] libmachine: (ha-086149-m02)     </disk>
	I0819 18:02:01.346152  390826 main.go:141] libmachine: (ha-086149-m02)     <interface type='network'>
	I0819 18:02:01.346160  390826 main.go:141] libmachine: (ha-086149-m02)       <source network='mk-ha-086149'/>
	I0819 18:02:01.346174  390826 main.go:141] libmachine: (ha-086149-m02)       <model type='virtio'/>
	I0819 18:02:01.346181  390826 main.go:141] libmachine: (ha-086149-m02)     </interface>
	I0819 18:02:01.346186  390826 main.go:141] libmachine: (ha-086149-m02)     <interface type='network'>
	I0819 18:02:01.346192  390826 main.go:141] libmachine: (ha-086149-m02)       <source network='default'/>
	I0819 18:02:01.346199  390826 main.go:141] libmachine: (ha-086149-m02)       <model type='virtio'/>
	I0819 18:02:01.346205  390826 main.go:141] libmachine: (ha-086149-m02)     </interface>
	I0819 18:02:01.346212  390826 main.go:141] libmachine: (ha-086149-m02)     <serial type='pty'>
	I0819 18:02:01.346218  390826 main.go:141] libmachine: (ha-086149-m02)       <target port='0'/>
	I0819 18:02:01.346225  390826 main.go:141] libmachine: (ha-086149-m02)     </serial>
	I0819 18:02:01.346230  390826 main.go:141] libmachine: (ha-086149-m02)     <console type='pty'>
	I0819 18:02:01.346237  390826 main.go:141] libmachine: (ha-086149-m02)       <target type='serial' port='0'/>
	I0819 18:02:01.346242  390826 main.go:141] libmachine: (ha-086149-m02)     </console>
	I0819 18:02:01.346249  390826 main.go:141] libmachine: (ha-086149-m02)     <rng model='virtio'>
	I0819 18:02:01.346282  390826 main.go:141] libmachine: (ha-086149-m02)       <backend model='random'>/dev/random</backend>
	I0819 18:02:01.346308  390826 main.go:141] libmachine: (ha-086149-m02)     </rng>
	I0819 18:02:01.346321  390826 main.go:141] libmachine: (ha-086149-m02)     
	I0819 18:02:01.346332  390826 main.go:141] libmachine: (ha-086149-m02)     
	I0819 18:02:01.346345  390826 main.go:141] libmachine: (ha-086149-m02)   </devices>
	I0819 18:02:01.346356  390826 main.go:141] libmachine: (ha-086149-m02) </domain>
	I0819 18:02:01.346369  390826 main.go:141] libmachine: (ha-086149-m02) 
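
(Editor's aside.) The XML logged above is handed to libvirt to define and start the ha-086149-m02 domain. As an illustrative sketch only (not minikube's actual kvm2 driver code), the define-then-create flow looks roughly like the following Go snippet using the libvirt.org/go/libvirt bindings; the "domain.xml" path is an assumption for the example, and qemu:///system matches the KVMQemuURI in the machine config above.

	// define_domain.go: minimal sketch of defining and starting a libvirt domain
	// from an XML description, as the kvm2 driver does above. Illustrative only.
	package main

	import (
		"fmt"
		"log"
		"os"

		libvirt "libvirt.org/go/libvirt"
	)

	func main() {
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatalf("connect to libvirt: %v", err)
		}
		defer conn.Close()

		// domain.xml is a placeholder for an XML document like the one logged above.
		xml, err := os.ReadFile("domain.xml")
		if err != nil {
			log.Fatalf("read domain xml: %v", err)
		}

		// Define the persistent domain, then start it (the "Creating domain..." step).
		dom, err := conn.DomainDefineXML(string(xml))
		if err != nil {
			log.Fatalf("define domain: %v", err)
		}
		defer dom.Free()

		if err := dom.Create(); err != nil {
			log.Fatalf("start domain: %v", err)
		}
		fmt.Println("domain defined and started")
	}
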
	I0819 18:02:01.353449  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:25:12:fc in network default
	I0819 18:02:01.354063  390826 main.go:141] libmachine: (ha-086149-m02) Ensuring networks are active...
	I0819 18:02:01.354090  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:01.354865  390826 main.go:141] libmachine: (ha-086149-m02) Ensuring network default is active
	I0819 18:02:01.355292  390826 main.go:141] libmachine: (ha-086149-m02) Ensuring network mk-ha-086149 is active
	I0819 18:02:01.355765  390826 main.go:141] libmachine: (ha-086149-m02) Getting domain xml...
	I0819 18:02:01.356643  390826 main.go:141] libmachine: (ha-086149-m02) Creating domain...
	I0819 18:02:02.573137  390826 main.go:141] libmachine: (ha-086149-m02) Waiting to get IP...
	I0819 18:02:02.573999  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:02.574397  390826 main.go:141] libmachine: (ha-086149-m02) DBG | unable to find current IP address of domain ha-086149-m02 in network mk-ha-086149
	I0819 18:02:02.574452  390826 main.go:141] libmachine: (ha-086149-m02) DBG | I0819 18:02:02.574377  391194 retry.go:31] will retry after 213.692862ms: waiting for machine to come up
	I0819 18:02:02.789798  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:02.790223  390826 main.go:141] libmachine: (ha-086149-m02) DBG | unable to find current IP address of domain ha-086149-m02 in network mk-ha-086149
	I0819 18:02:02.790259  390826 main.go:141] libmachine: (ha-086149-m02) DBG | I0819 18:02:02.790168  391194 retry.go:31] will retry after 315.769086ms: waiting for machine to come up
	I0819 18:02:03.108010  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:03.108442  390826 main.go:141] libmachine: (ha-086149-m02) DBG | unable to find current IP address of domain ha-086149-m02 in network mk-ha-086149
	I0819 18:02:03.108477  390826 main.go:141] libmachine: (ha-086149-m02) DBG | I0819 18:02:03.108385  391194 retry.go:31] will retry after 301.828125ms: waiting for machine to come up
	I0819 18:02:03.412018  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:03.412538  390826 main.go:141] libmachine: (ha-086149-m02) DBG | unable to find current IP address of domain ha-086149-m02 in network mk-ha-086149
	I0819 18:02:03.412566  390826 main.go:141] libmachine: (ha-086149-m02) DBG | I0819 18:02:03.412497  391194 retry.go:31] will retry after 566.070222ms: waiting for machine to come up
	I0819 18:02:03.980372  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:03.980809  390826 main.go:141] libmachine: (ha-086149-m02) DBG | unable to find current IP address of domain ha-086149-m02 in network mk-ha-086149
	I0819 18:02:03.980839  390826 main.go:141] libmachine: (ha-086149-m02) DBG | I0819 18:02:03.980760  391194 retry.go:31] will retry after 725.498843ms: waiting for machine to come up
	I0819 18:02:04.707651  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:04.708163  390826 main.go:141] libmachine: (ha-086149-m02) DBG | unable to find current IP address of domain ha-086149-m02 in network mk-ha-086149
	I0819 18:02:04.708189  390826 main.go:141] libmachine: (ha-086149-m02) DBG | I0819 18:02:04.708114  391194 retry.go:31] will retry after 888.838276ms: waiting for machine to come up
	I0819 18:02:05.598151  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:05.598534  390826 main.go:141] libmachine: (ha-086149-m02) DBG | unable to find current IP address of domain ha-086149-m02 in network mk-ha-086149
	I0819 18:02:05.598561  390826 main.go:141] libmachine: (ha-086149-m02) DBG | I0819 18:02:05.598505  391194 retry.go:31] will retry after 725.496011ms: waiting for machine to come up
	I0819 18:02:06.326059  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:06.326591  390826 main.go:141] libmachine: (ha-086149-m02) DBG | unable to find current IP address of domain ha-086149-m02 in network mk-ha-086149
	I0819 18:02:06.326619  390826 main.go:141] libmachine: (ha-086149-m02) DBG | I0819 18:02:06.326549  391194 retry.go:31] will retry after 1.213657221s: waiting for machine to come up
	I0819 18:02:07.541312  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:07.541730  390826 main.go:141] libmachine: (ha-086149-m02) DBG | unable to find current IP address of domain ha-086149-m02 in network mk-ha-086149
	I0819 18:02:07.541762  390826 main.go:141] libmachine: (ha-086149-m02) DBG | I0819 18:02:07.541670  391194 retry.go:31] will retry after 1.144037477s: waiting for machine to come up
	I0819 18:02:08.687009  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:08.687346  390826 main.go:141] libmachine: (ha-086149-m02) DBG | unable to find current IP address of domain ha-086149-m02 in network mk-ha-086149
	I0819 18:02:08.687378  390826 main.go:141] libmachine: (ha-086149-m02) DBG | I0819 18:02:08.687317  391194 retry.go:31] will retry after 1.786431516s: waiting for machine to come up
	I0819 18:02:10.475126  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:10.475572  390826 main.go:141] libmachine: (ha-086149-m02) DBG | unable to find current IP address of domain ha-086149-m02 in network mk-ha-086149
	I0819 18:02:10.475604  390826 main.go:141] libmachine: (ha-086149-m02) DBG | I0819 18:02:10.475516  391194 retry.go:31] will retry after 2.7984425s: waiting for machine to come up
	I0819 18:02:13.276769  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:13.277252  390826 main.go:141] libmachine: (ha-086149-m02) DBG | unable to find current IP address of domain ha-086149-m02 in network mk-ha-086149
	I0819 18:02:13.277281  390826 main.go:141] libmachine: (ha-086149-m02) DBG | I0819 18:02:13.277177  391194 retry.go:31] will retry after 3.557169037s: waiting for machine to come up
	I0819 18:02:16.836169  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:16.836715  390826 main.go:141] libmachine: (ha-086149-m02) DBG | unable to find current IP address of domain ha-086149-m02 in network mk-ha-086149
	I0819 18:02:16.836739  390826 main.go:141] libmachine: (ha-086149-m02) DBG | I0819 18:02:16.836637  391194 retry.go:31] will retry after 3.947371274s: waiting for machine to come up
	I0819 18:02:20.788796  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:20.789268  390826 main.go:141] libmachine: (ha-086149-m02) DBG | unable to find current IP address of domain ha-086149-m02 in network mk-ha-086149
	I0819 18:02:20.789290  390826 main.go:141] libmachine: (ha-086149-m02) DBG | I0819 18:02:20.789224  391194 retry.go:31] will retry after 5.582773093s: waiting for machine to come up
	I0819 18:02:26.374103  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:26.374654  390826 main.go:141] libmachine: (ha-086149-m02) Found IP for machine: 192.168.39.167
	I0819 18:02:26.374678  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has current primary IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:26.374684  390826 main.go:141] libmachine: (ha-086149-m02) Reserving static IP address...
	I0819 18:02:26.375127  390826 main.go:141] libmachine: (ha-086149-m02) DBG | unable to find host DHCP lease matching {name: "ha-086149-m02", mac: "52:54:00:b9:44:0e", ip: "192.168.39.167"} in network mk-ha-086149
	I0819 18:02:26.451534  390826 main.go:141] libmachine: (ha-086149-m02) DBG | Getting to WaitForSSH function...
	I0819 18:02:26.451567  390826 main.go:141] libmachine: (ha-086149-m02) Reserved static IP address: 192.168.39.167
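
(Editor's aside.) The repeated "will retry after ..." lines above come from a retry loop with growing delays while the new VM picks up a DHCP lease on mk-ha-086149. A rough sketch of that pattern follows; it is an assumed shape for illustration, not the retry.go implementation, and lookupIP is a hypothetical stand-in for querying the network's DHCP leases by MAC address.

	// wait_for_ip.go: sketch of retrying until a freshly created VM reports an IP,
	// mirroring the "will retry after ..." lines in the log above.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP is a hypothetical helper standing in for a DHCP-lease lookup by MAC.
	func lookupIP(mac string) (string, error) {
		return "", errors.New("no lease yet")
	}

	func waitForIP(mac string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(mac); err == nil {
				return ip, nil
			}
			// Add jitter and grow the delay, similar to the increasing waits logged above.
			sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			if delay < 5*time.Second {
				delay *= 2
			}
		}
		return "", fmt.Errorf("timed out after %v waiting for IP", timeout)
	}

	func main() {
		if ip, err := waitForIP("52:54:00:b9:44:0e", 30*time.Second); err != nil {
			fmt.Println("error:", err)
		} else {
			fmt.Println("found IP:", ip)
		}
	}
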
	I0819 18:02:26.451582  390826 main.go:141] libmachine: (ha-086149-m02) Waiting for SSH to be available...
	I0819 18:02:26.454800  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:26.455320  390826 main.go:141] libmachine: (ha-086149-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149
	I0819 18:02:26.455347  390826 main.go:141] libmachine: (ha-086149-m02) DBG | unable to find defined IP address of network mk-ha-086149 interface with MAC address 52:54:00:b9:44:0e
	I0819 18:02:26.455518  390826 main.go:141] libmachine: (ha-086149-m02) DBG | Using SSH client type: external
	I0819 18:02:26.455550  390826 main.go:141] libmachine: (ha-086149-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m02/id_rsa (-rw-------)
	I0819 18:02:26.455578  390826 main.go:141] libmachine: (ha-086149-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 18:02:26.455595  390826 main.go:141] libmachine: (ha-086149-m02) DBG | About to run SSH command:
	I0819 18:02:26.455612  390826 main.go:141] libmachine: (ha-086149-m02) DBG | exit 0
	I0819 18:02:26.459237  390826 main.go:141] libmachine: (ha-086149-m02) DBG | SSH cmd err, output: exit status 255: 
	I0819 18:02:26.459267  390826 main.go:141] libmachine: (ha-086149-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0819 18:02:26.459278  390826 main.go:141] libmachine: (ha-086149-m02) DBG | command : exit 0
	I0819 18:02:26.459290  390826 main.go:141] libmachine: (ha-086149-m02) DBG | err     : exit status 255
	I0819 18:02:26.459302  390826 main.go:141] libmachine: (ha-086149-m02) DBG | output  : 
	I0819 18:02:29.460056  390826 main.go:141] libmachine: (ha-086149-m02) DBG | Getting to WaitForSSH function...
	I0819 18:02:29.463263  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:29.463618  390826 main.go:141] libmachine: (ha-086149-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:15 +0000 UTC Type:0 Mac:52:54:00:b9:44:0e Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-086149-m02 Clientid:01:52:54:00:b9:44:0e}
	I0819 18:02:29.463647  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:29.463807  390826 main.go:141] libmachine: (ha-086149-m02) DBG | Using SSH client type: external
	I0819 18:02:29.463838  390826 main.go:141] libmachine: (ha-086149-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m02/id_rsa (-rw-------)
	I0819 18:02:29.463870  390826 main.go:141] libmachine: (ha-086149-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.167 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 18:02:29.463884  390826 main.go:141] libmachine: (ha-086149-m02) DBG | About to run SSH command:
	I0819 18:02:29.463919  390826 main.go:141] libmachine: (ha-086149-m02) DBG | exit 0
	I0819 18:02:29.591884  390826 main.go:141] libmachine: (ha-086149-m02) DBG | SSH cmd err, output: <nil>: 
	I0819 18:02:29.592289  390826 main.go:141] libmachine: (ha-086149-m02) KVM machine creation complete!
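
(Editor's aside.) The WaitForSSH step above shells out to the external ssh client with host-key checking disabled and runs "exit 0" until it succeeds, retrying on non-zero status as seen with the earlier "exit status 255". A compact sketch of that readiness probe is below; the address and key path are copied from the log, the helper itself is illustrative rather than minikube's code.

	// ssh_probe.go: sketch of the "exit 0" SSH readiness check logged above,
	// shelling out to the external ssh client with similar options.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func sshReady(addr, keyPath string) bool {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-o", "PasswordAuthentication=no",
			"-i", keyPath,
			"docker@"+addr,
			"exit 0")
		return cmd.Run() == nil // exit status 0 means sshd is up and the key is accepted
	}

	func main() {
		addr := "192.168.39.167"
		key := "/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m02/id_rsa"
		for i := 0; i < 20; i++ {
			if sshReady(addr, key) {
				fmt.Println("SSH is available")
				return
			}
			time.Sleep(3 * time.Second) // the log retries roughly every 3 seconds
		}
		fmt.Println("gave up waiting for SSH")
	}
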
	I0819 18:02:29.592585  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetConfigRaw
	I0819 18:02:29.593231  390826 main.go:141] libmachine: (ha-086149-m02) Calling .DriverName
	I0819 18:02:29.593450  390826 main.go:141] libmachine: (ha-086149-m02) Calling .DriverName
	I0819 18:02:29.593703  390826 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 18:02:29.593722  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetState
	I0819 18:02:29.594958  390826 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 18:02:29.594972  390826 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 18:02:29.594977  390826 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 18:02:29.594985  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHHostname
	I0819 18:02:29.597081  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:29.597433  390826 main.go:141] libmachine: (ha-086149-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:15 +0000 UTC Type:0 Mac:52:54:00:b9:44:0e Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-086149-m02 Clientid:01:52:54:00:b9:44:0e}
	I0819 18:02:29.597461  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:29.597582  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHPort
	I0819 18:02:29.597780  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHKeyPath
	I0819 18:02:29.597928  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHKeyPath
	I0819 18:02:29.598082  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHUsername
	I0819 18:02:29.598242  390826 main.go:141] libmachine: Using SSH client type: native
	I0819 18:02:29.598481  390826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.167 22 <nil> <nil>}
	I0819 18:02:29.598495  390826 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 18:02:29.711103  390826 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 18:02:29.711127  390826 main.go:141] libmachine: Detecting the provisioner...
	I0819 18:02:29.711150  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHHostname
	I0819 18:02:29.714092  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:29.714482  390826 main.go:141] libmachine: (ha-086149-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:15 +0000 UTC Type:0 Mac:52:54:00:b9:44:0e Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-086149-m02 Clientid:01:52:54:00:b9:44:0e}
	I0819 18:02:29.714514  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:29.714667  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHPort
	I0819 18:02:29.714895  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHKeyPath
	I0819 18:02:29.715068  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHKeyPath
	I0819 18:02:29.715177  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHUsername
	I0819 18:02:29.715311  390826 main.go:141] libmachine: Using SSH client type: native
	I0819 18:02:29.715508  390826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.167 22 <nil> <nil>}
	I0819 18:02:29.715523  390826 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 18:02:29.832407  390826 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 18:02:29.832502  390826 main.go:141] libmachine: found compatible host: buildroot
	I0819 18:02:29.832517  390826 main.go:141] libmachine: Provisioning with buildroot...
	I0819 18:02:29.832529  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetMachineName
	I0819 18:02:29.832801  390826 buildroot.go:166] provisioning hostname "ha-086149-m02"
	I0819 18:02:29.832836  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetMachineName
	I0819 18:02:29.833053  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHHostname
	I0819 18:02:29.835580  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:29.836030  390826 main.go:141] libmachine: (ha-086149-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:15 +0000 UTC Type:0 Mac:52:54:00:b9:44:0e Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-086149-m02 Clientid:01:52:54:00:b9:44:0e}
	I0819 18:02:29.836077  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:29.836240  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHPort
	I0819 18:02:29.836432  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHKeyPath
	I0819 18:02:29.836590  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHKeyPath
	I0819 18:02:29.836769  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHUsername
	I0819 18:02:29.836968  390826 main.go:141] libmachine: Using SSH client type: native
	I0819 18:02:29.837196  390826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.167 22 <nil> <nil>}
	I0819 18:02:29.837218  390826 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-086149-m02 && echo "ha-086149-m02" | sudo tee /etc/hostname
	I0819 18:02:29.961904  390826 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-086149-m02
	
	I0819 18:02:29.961935  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHHostname
	I0819 18:02:29.964835  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:29.965249  390826 main.go:141] libmachine: (ha-086149-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:15 +0000 UTC Type:0 Mac:52:54:00:b9:44:0e Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-086149-m02 Clientid:01:52:54:00:b9:44:0e}
	I0819 18:02:29.965273  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:29.965458  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHPort
	I0819 18:02:29.965670  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHKeyPath
	I0819 18:02:29.965837  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHKeyPath
	I0819 18:02:29.965957  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHUsername
	I0819 18:02:29.966106  390826 main.go:141] libmachine: Using SSH client type: native
	I0819 18:02:29.966269  390826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.167 22 <nil> <nil>}
	I0819 18:02:29.966290  390826 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-086149-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-086149-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-086149-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 18:02:30.089048  390826 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 18:02:30.089086  390826 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19468-372744/.minikube CaCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19468-372744/.minikube}
	I0819 18:02:30.089109  390826 buildroot.go:174] setting up certificates
	I0819 18:02:30.089119  390826 provision.go:84] configureAuth start
	I0819 18:02:30.089129  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetMachineName
	I0819 18:02:30.089461  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetIP
	I0819 18:02:30.092265  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:30.092669  390826 main.go:141] libmachine: (ha-086149-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:15 +0000 UTC Type:0 Mac:52:54:00:b9:44:0e Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-086149-m02 Clientid:01:52:54:00:b9:44:0e}
	I0819 18:02:30.092701  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:30.092884  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHHostname
	I0819 18:02:30.095727  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:30.096099  390826 main.go:141] libmachine: (ha-086149-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:15 +0000 UTC Type:0 Mac:52:54:00:b9:44:0e Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-086149-m02 Clientid:01:52:54:00:b9:44:0e}
	I0819 18:02:30.096125  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:30.096378  390826 provision.go:143] copyHostCerts
	I0819 18:02:30.096408  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 18:02:30.096439  390826 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem, removing ...
	I0819 18:02:30.096448  390826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 18:02:30.096554  390826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem (1082 bytes)
	I0819 18:02:30.096631  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 18:02:30.096648  390826 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem, removing ...
	I0819 18:02:30.096655  390826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 18:02:30.096681  390826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem (1123 bytes)
	I0819 18:02:30.096726  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 18:02:30.096740  390826 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem, removing ...
	I0819 18:02:30.096747  390826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 18:02:30.096767  390826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem (1675 bytes)
	I0819 18:02:30.096813  390826 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem org=jenkins.ha-086149-m02 san=[127.0.0.1 192.168.39.167 ha-086149-m02 localhost minikube]
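
(Editor's aside.) configureAuth above issues a server certificate for the new node signed by the minikube CA, with the SANs listed in the log (127.0.0.1, the node IP, the hostname, localhost, minikube). The following self-contained sketch shows issuing such a SAN-bearing certificate with Go's crypto/x509; for brevity it signs with a throwaway CA generated in-process instead of loading ca.pem/ca-key.pem from disk, so it is illustrative only.

	// server_cert.go: sketch of generating a server certificate with the SANs
	// seen in the log. Uses a throwaway CA instead of minikube's ca.pem for brevity.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA key and self-signed CA certificate (in minikube these would
		// be loaded from certs/ca.pem and certs/ca-key.pem instead).
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate carrying the SANs from the log line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-086149-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"ha-086149-m02", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.167")},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}
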
	I0819 18:02:30.185382  390826 provision.go:177] copyRemoteCerts
	I0819 18:02:30.185447  390826 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 18:02:30.185477  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHHostname
	I0819 18:02:30.188112  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:30.188524  390826 main.go:141] libmachine: (ha-086149-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:15 +0000 UTC Type:0 Mac:52:54:00:b9:44:0e Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-086149-m02 Clientid:01:52:54:00:b9:44:0e}
	I0819 18:02:30.188561  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:30.188806  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHPort
	I0819 18:02:30.189073  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHKeyPath
	I0819 18:02:30.189248  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHUsername
	I0819 18:02:30.189403  390826 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m02/id_rsa Username:docker}
	I0819 18:02:30.278357  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 18:02:30.278448  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 18:02:30.303041  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 18:02:30.303128  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 18:02:30.328073  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 18:02:30.328160  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 18:02:30.352418  390826 provision.go:87] duration metric: took 263.283773ms to configureAuth
	I0819 18:02:30.352453  390826 buildroot.go:189] setting minikube options for container-runtime
	I0819 18:02:30.352659  390826 config.go:182] Loaded profile config "ha-086149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:02:30.352754  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHHostname
	I0819 18:02:30.355415  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:30.355751  390826 main.go:141] libmachine: (ha-086149-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:15 +0000 UTC Type:0 Mac:52:54:00:b9:44:0e Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-086149-m02 Clientid:01:52:54:00:b9:44:0e}
	I0819 18:02:30.355783  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:30.355978  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHPort
	I0819 18:02:30.356180  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHKeyPath
	I0819 18:02:30.356334  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHKeyPath
	I0819 18:02:30.356473  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHUsername
	I0819 18:02:30.356613  390826 main.go:141] libmachine: Using SSH client type: native
	I0819 18:02:30.356785  390826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.167 22 <nil> <nil>}
	I0819 18:02:30.356801  390826 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 18:02:30.647226  390826 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 18:02:30.647261  390826 main.go:141] libmachine: Checking connection to Docker...
	I0819 18:02:30.647279  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetURL
	I0819 18:02:30.648827  390826 main.go:141] libmachine: (ha-086149-m02) DBG | Using libvirt version 6000000
	I0819 18:02:30.650998  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:30.651345  390826 main.go:141] libmachine: (ha-086149-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:15 +0000 UTC Type:0 Mac:52:54:00:b9:44:0e Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-086149-m02 Clientid:01:52:54:00:b9:44:0e}
	I0819 18:02:30.651523  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:30.651593  390826 main.go:141] libmachine: Docker is up and running!
	I0819 18:02:30.651608  390826 main.go:141] libmachine: Reticulating splines...
	I0819 18:02:30.651617  390826 client.go:171] duration metric: took 29.762332975s to LocalClient.Create
	I0819 18:02:30.651641  390826 start.go:167] duration metric: took 29.762401242s to libmachine.API.Create "ha-086149"
	I0819 18:02:30.651650  390826 start.go:293] postStartSetup for "ha-086149-m02" (driver="kvm2")
	I0819 18:02:30.651660  390826 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 18:02:30.651714  390826 main.go:141] libmachine: (ha-086149-m02) Calling .DriverName
	I0819 18:02:30.651984  390826 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 18:02:30.652147  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHHostname
	I0819 18:02:30.654564  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:30.654965  390826 main.go:141] libmachine: (ha-086149-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:15 +0000 UTC Type:0 Mac:52:54:00:b9:44:0e Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-086149-m02 Clientid:01:52:54:00:b9:44:0e}
	I0819 18:02:30.654987  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:30.655156  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHPort
	I0819 18:02:30.655369  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHKeyPath
	I0819 18:02:30.655538  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHUsername
	I0819 18:02:30.655725  390826 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m02/id_rsa Username:docker}
	I0819 18:02:30.742439  390826 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 18:02:30.747128  390826 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 18:02:30.747159  390826 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/addons for local assets ...
	I0819 18:02:30.747239  390826 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/files for local assets ...
	I0819 18:02:30.747311  390826 filesync.go:149] local asset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> 3800092.pem in /etc/ssl/certs
	I0819 18:02:30.747323  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> /etc/ssl/certs/3800092.pem
	I0819 18:02:30.747406  390826 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 18:02:30.757484  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 18:02:30.785461  390826 start.go:296] duration metric: took 133.794234ms for postStartSetup
	I0819 18:02:30.785531  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetConfigRaw
	I0819 18:02:30.786174  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetIP
	I0819 18:02:30.789492  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:30.789906  390826 main.go:141] libmachine: (ha-086149-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:15 +0000 UTC Type:0 Mac:52:54:00:b9:44:0e Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-086149-m02 Clientid:01:52:54:00:b9:44:0e}
	I0819 18:02:30.789943  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:30.790207  390826 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/config.json ...
	I0819 18:02:30.790487  390826 start.go:128] duration metric: took 29.919427382s to createHost
	I0819 18:02:30.790520  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHHostname
	I0819 18:02:30.792954  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:30.793297  390826 main.go:141] libmachine: (ha-086149-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:15 +0000 UTC Type:0 Mac:52:54:00:b9:44:0e Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-086149-m02 Clientid:01:52:54:00:b9:44:0e}
	I0819 18:02:30.793329  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:30.793558  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHPort
	I0819 18:02:30.793778  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHKeyPath
	I0819 18:02:30.793952  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHKeyPath
	I0819 18:02:30.794104  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHUsername
	I0819 18:02:30.794257  390826 main.go:141] libmachine: Using SSH client type: native
	I0819 18:02:30.794425  390826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.167 22 <nil> <nil>}
	I0819 18:02:30.794439  390826 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 18:02:30.908358  390826 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724090550.891768613
	
	I0819 18:02:30.908386  390826 fix.go:216] guest clock: 1724090550.891768613
	I0819 18:02:30.908394  390826 fix.go:229] Guest: 2024-08-19 18:02:30.891768613 +0000 UTC Remote: 2024-08-19 18:02:30.790503904 +0000 UTC m=+76.584400326 (delta=101.264709ms)
	I0819 18:02:30.908411  390826 fix.go:200] guest clock delta is within tolerance: 101.264709ms
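
(Editor's aside.) The guest-clock check above runs `date +%s.%N` on the new VM and compares the result with the host clock, accepting a small skew. A small sketch of parsing that output and computing the delta follows; the 2-second tolerance is an assumption for illustration, not minikube's configured value.

	// clock_delta.go: sketch of the guest-clock comparison logged above: parse the
	// "seconds.nanoseconds" output of `date +%s.%N` and compare with the host clock.
	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1724090550.891768613") // value taken from the log
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		// 2s is an assumed tolerance for this sketch.
		fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta < 2*time.Second)
	}
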
	I0819 18:02:30.908416  390826 start.go:83] releasing machines lock for "ha-086149-m02", held for 30.03747204s
	I0819 18:02:30.908436  390826 main.go:141] libmachine: (ha-086149-m02) Calling .DriverName
	I0819 18:02:30.908746  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetIP
	I0819 18:02:30.911790  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:30.912299  390826 main.go:141] libmachine: (ha-086149-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:15 +0000 UTC Type:0 Mac:52:54:00:b9:44:0e Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-086149-m02 Clientid:01:52:54:00:b9:44:0e}
	I0819 18:02:30.912324  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:30.914702  390826 out.go:177] * Found network options:
	I0819 18:02:30.916264  390826 out.go:177]   - NO_PROXY=192.168.39.249
	W0819 18:02:30.917550  390826 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 18:02:30.917584  390826 main.go:141] libmachine: (ha-086149-m02) Calling .DriverName
	I0819 18:02:30.918210  390826 main.go:141] libmachine: (ha-086149-m02) Calling .DriverName
	I0819 18:02:30.918395  390826 main.go:141] libmachine: (ha-086149-m02) Calling .DriverName
	I0819 18:02:30.918487  390826 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 18:02:30.918533  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHHostname
	W0819 18:02:30.918573  390826 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 18:02:30.918658  390826 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 18:02:30.918684  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHHostname
	I0819 18:02:30.921189  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:30.921523  390826 main.go:141] libmachine: (ha-086149-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:15 +0000 UTC Type:0 Mac:52:54:00:b9:44:0e Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-086149-m02 Clientid:01:52:54:00:b9:44:0e}
	I0819 18:02:30.921551  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:30.921575  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:30.921721  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHPort
	I0819 18:02:30.921900  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHKeyPath
	I0819 18:02:30.921953  390826 main.go:141] libmachine: (ha-086149-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:15 +0000 UTC Type:0 Mac:52:54:00:b9:44:0e Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-086149-m02 Clientid:01:52:54:00:b9:44:0e}
	I0819 18:02:30.921976  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:30.922073  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHUsername
	I0819 18:02:30.922143  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHPort
	I0819 18:02:30.922222  390826 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m02/id_rsa Username:docker}
	I0819 18:02:30.922313  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHKeyPath
	I0819 18:02:30.922446  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHUsername
	I0819 18:02:30.922578  390826 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m02/id_rsa Username:docker}
	I0819 18:02:31.162275  390826 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 18:02:31.168463  390826 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 18:02:31.168543  390826 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 18:02:31.185415  390826 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 18:02:31.185453  390826 start.go:495] detecting cgroup driver to use...
	I0819 18:02:31.185531  390826 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 18:02:31.203803  390826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 18:02:31.218769  390826 docker.go:217] disabling cri-docker service (if available) ...
	I0819 18:02:31.218849  390826 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 18:02:31.233091  390826 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 18:02:31.247534  390826 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 18:02:31.365020  390826 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 18:02:31.507633  390826 docker.go:233] disabling docker service ...
	I0819 18:02:31.507752  390826 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 18:02:31.522469  390826 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 18:02:31.535904  390826 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 18:02:31.684033  390826 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 18:02:31.816794  390826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 18:02:31.830888  390826 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 18:02:31.850134  390826 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 18:02:31.850203  390826 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:02:31.860550  390826 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 18:02:31.860618  390826 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:02:31.870742  390826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:02:31.880834  390826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:02:31.891213  390826 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 18:02:31.901856  390826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:02:31.912615  390826 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:02:31.931114  390826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
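The sed one-liners above patch /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image, switch the cgroup driver to cgroupfs, run conmon in the pod cgroup, and add a default_sysctls block with net.ipv4.ip_unprivileged_port_start=0. Below is a minimal, hedged Go sketch of the same idea (read the drop-in, rewrite the keys with regexps, write it back); the file path and key names come from the log, everything else is illustrative.

// Sketch only: patch a CRI-O drop-in the way the sed commands in the log above do.
package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	s := string(data)
	// pause image and cgroup driver, as in the log
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.10"`)
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, `cgroup_manager = "cgroupfs"`)
	// drop any existing conmon_cgroup line, then re-add it after cgroup_manager
	s = regexp.MustCompile(`(?m)^conmon_cgroup = .*$\n?`).ReplaceAllString(s, "")
	s = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(s, "$1\nconmon_cgroup = \"pod\"")
	// allow binding low ports inside pods
	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(s) {
		s = regexp.MustCompile(`(?m)^(conmon_cgroup = .*)$`).
			ReplaceAllString(s, "$1\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]")
	}
	if err := os.WriteFile(conf, []byte(s), 0o644); err != nil {
		panic(err)
	}
}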
	I0819 18:02:31.942288  390826 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 18:02:31.951905  390826 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 18:02:31.951992  390826 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 18:02:31.965733  390826 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 18:02:31.976631  390826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:02:32.105549  390826 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 18:02:32.245821  390826 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 18:02:32.245895  390826 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 18:02:32.250785  390826 start.go:563] Will wait 60s for crictl version
	I0819 18:02:32.250836  390826 ssh_runner.go:195] Run: which crictl
	I0819 18:02:32.254658  390826 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 18:02:32.293963  390826 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 18:02:32.294078  390826 ssh_runner.go:195] Run: crio --version
	I0819 18:02:32.320948  390826 ssh_runner.go:195] Run: crio --version
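Every ssh_runner line above is a command executed on the node over SSH with the machine's id_rsa key (the "new ssh client" entries show the address and key path). A rough, self-contained sketch of that mechanism with golang.org/x/crypto/ssh follows; the address, user and key path are taken from the log, the rest is illustrative and not minikube's actual implementation.

// Sketch only: run a remote command the way the ssh_runner lines above do.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m02/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM, not a production setting
	}
	client, err := ssh.Dial("tcp", "192.168.39.167:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("sudo /usr/bin/crictl version")
	fmt.Printf("%s(err=%v)\n", out, err)
}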
	I0819 18:02:32.352515  390826 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 18:02:32.353910  390826 out.go:177]   - env NO_PROXY=192.168.39.249
	I0819 18:02:32.355059  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetIP
	I0819 18:02:32.357803  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:32.358225  390826 main.go:141] libmachine: (ha-086149-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:15 +0000 UTC Type:0 Mac:52:54:00:b9:44:0e Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-086149-m02 Clientid:01:52:54:00:b9:44:0e}
	I0819 18:02:32.358257  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:32.358399  390826 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 18:02:32.362630  390826 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 18:02:32.375092  390826 mustload.go:65] Loading cluster: ha-086149
	I0819 18:02:32.375333  390826 config.go:182] Loaded profile config "ha-086149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:02:32.375732  390826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:02:32.375770  390826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:02:32.392292  390826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41425
	I0819 18:02:32.392699  390826 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:02:32.393169  390826 main.go:141] libmachine: Using API Version  1
	I0819 18:02:32.393193  390826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:02:32.393492  390826 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:02:32.393683  390826 main.go:141] libmachine: (ha-086149) Calling .GetState
	I0819 18:02:32.395300  390826 host.go:66] Checking if "ha-086149" exists ...
	I0819 18:02:32.395638  390826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:02:32.395664  390826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:02:32.410687  390826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42331
	I0819 18:02:32.411091  390826 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:02:32.411571  390826 main.go:141] libmachine: Using API Version  1
	I0819 18:02:32.411592  390826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:02:32.411927  390826 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:02:32.412133  390826 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:02:32.412299  390826 certs.go:68] Setting up /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149 for IP: 192.168.39.167
	I0819 18:02:32.412312  390826 certs.go:194] generating shared ca certs ...
	I0819 18:02:32.412332  390826 certs.go:226] acquiring lock for ca certs: {Name:mk639e03f593e0bccac045f6e9f5ba3b96cc81e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:02:32.412477  390826 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key
	I0819 18:02:32.412535  390826 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key
	I0819 18:02:32.412548  390826 certs.go:256] generating profile certs ...
	I0819 18:02:32.412635  390826 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/client.key
	I0819 18:02:32.412669  390826 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key.29108782
	I0819 18:02:32.412693  390826 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt.29108782 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.249 192.168.39.167 192.168.39.254]
	I0819 18:02:32.613410  390826 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt.29108782 ...
	I0819 18:02:32.613445  390826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt.29108782: {Name:mk786a0be0a01b23577616474723d3dd1af61718 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:02:32.613633  390826 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key.29108782 ...
	I0819 18:02:32.613652  390826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key.29108782: {Name:mk35ec7528c86be4e226ad885f6517ee223a81da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:02:32.613749  390826 certs.go:381] copying /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt.29108782 -> /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt
	I0819 18:02:32.613904  390826 certs.go:385] copying /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key.29108782 -> /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key
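The profile cert generated here is signed by the cluster CA with SANs for the service IP, localhost, both control-plane node IPs and the HA VIP 192.168.39.254 (see the IP list above), so the one serving cert is valid whichever endpoint a client connects to. A compressed sketch of that issuance with crypto/x509 is shown below; loadCA is a hypothetical placeholder for reading the minikube CA pair, and the rest is illustrative.

// Sketch only: issue an apiserver serving cert with the SAN set seen in the log above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// loadCA is hypothetical: it would read .minikube/ca.crt and ca.key from the profile.
func loadCA() (*x509.Certificate, *rsa.PrivateKey) { panic("not implemented in this sketch") }

func main() {
	caCert, caKey := loadCA()
	var sans []net.IP
	for _, s := range []string{"10.96.0.1", "127.0.0.1", "10.0.0.1",
		"192.168.39.249", "192.168.39.167", "192.168.39.254"} {
		sans = append(sans, net.ParseIP(s))
	}
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  sans,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}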
	I0819 18:02:32.614083  390826 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/proxy-client.key
	I0819 18:02:32.614103  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 18:02:32.614123  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 18:02:32.614146  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 18:02:32.614167  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 18:02:32.614194  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 18:02:32.614216  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 18:02:32.614233  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 18:02:32.614254  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 18:02:32.614320  390826 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem (1338 bytes)
	W0819 18:02:32.614361  390826 certs.go:480] ignoring /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009_empty.pem, impossibly tiny 0 bytes
	I0819 18:02:32.614379  390826 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 18:02:32.614416  390826 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem (1082 bytes)
	I0819 18:02:32.614449  390826 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem (1123 bytes)
	I0819 18:02:32.614480  390826 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem (1675 bytes)
	I0819 18:02:32.614535  390826 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 18:02:32.614573  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem -> /usr/share/ca-certificates/380009.pem
	I0819 18:02:32.614595  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> /usr/share/ca-certificates/3800092.pem
	I0819 18:02:32.614614  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:02:32.614663  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:02:32.617605  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:02:32.618037  390826 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:02:32.618064  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:02:32.618291  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:02:32.618489  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:02:32.618662  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:02:32.618811  390826 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/id_rsa Username:docker}
	I0819 18:02:32.688132  390826 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0819 18:02:32.693432  390826 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0819 18:02:32.705643  390826 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0819 18:02:32.710393  390826 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0819 18:02:32.726929  390826 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0819 18:02:32.731805  390826 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0819 18:02:32.743991  390826 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0819 18:02:32.748405  390826 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0819 18:02:32.760696  390826 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0819 18:02:32.764761  390826 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0819 18:02:32.775576  390826 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0819 18:02:32.780335  390826 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0819 18:02:32.798982  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 18:02:32.824573  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 18:02:32.848456  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 18:02:32.872289  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 18:02:32.895762  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0819 18:02:32.919267  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 18:02:32.943247  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 18:02:32.967491  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 18:02:32.991733  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem --> /usr/share/ca-certificates/380009.pem (1338 bytes)
	I0819 18:02:33.016178  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /usr/share/ca-certificates/3800092.pem (1708 bytes)
	I0819 18:02:33.041029  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 18:02:33.067154  390826 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0819 18:02:33.085779  390826 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0819 18:02:33.103563  390826 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0819 18:02:33.120415  390826 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0819 18:02:33.137279  390826 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0819 18:02:33.154210  390826 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0819 18:02:33.171254  390826 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0819 18:02:33.188327  390826 ssh_runner.go:195] Run: openssl version
	I0819 18:02:33.194174  390826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3800092.pem && ln -fs /usr/share/ca-certificates/3800092.pem /etc/ssl/certs/3800092.pem"
	I0819 18:02:33.204906  390826 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3800092.pem
	I0819 18:02:33.209552  390826 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:56 /usr/share/ca-certificates/3800092.pem
	I0819 18:02:33.209612  390826 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3800092.pem
	I0819 18:02:33.215435  390826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3800092.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 18:02:33.225689  390826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 18:02:33.236693  390826 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:02:33.241350  390826 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:02:33.241405  390826 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:02:33.247220  390826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 18:02:33.258368  390826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/380009.pem && ln -fs /usr/share/ca-certificates/380009.pem /etc/ssl/certs/380009.pem"
	I0819 18:02:33.268726  390826 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/380009.pem
	I0819 18:02:33.273014  390826 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:56 /usr/share/ca-certificates/380009.pem
	I0819 18:02:33.273115  390826 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/380009.pem
	I0819 18:02:33.278623  390826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/380009.pem /etc/ssl/certs/51391683.0"
	I0819 18:02:33.288625  390826 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 18:02:33.292635  390826 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 18:02:33.292700  390826 kubeadm.go:934] updating node {m02 192.168.39.167 8443 v1.31.0 crio true true} ...
	I0819 18:02:33.292792  390826 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-086149-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.167
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-086149 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 18:02:33.292862  390826 kube-vip.go:115] generating kube-vip config ...
	I0819 18:02:33.292923  390826 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 18:02:33.308724  390826 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 18:02:33.308804  390826 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
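Per the manifest above, kube-vip holds the floating address 192.168.39.254 via leader election (vip_leaderelection/plndr-cp-lock) and, with lb_enable, load-balances port 8443 across the control-plane nodes, which is why later steps can point kubeconfigs at the VIP instead of a single apiserver. A tiny hedged probe of that endpoint is sketched below; it only checks TLS reachability of /healthz and skips certificate verification for brevity.

// Sketch only: probe the API server through the kube-vip VIP from the config above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Real clients verify against the cluster CA; skipping verification keeps the sketch short.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// expect 200/ok once the VIP is held by a leader (or 401/403 without credentials)
	fmt.Println(resp.StatusCode, string(body))
}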
	I0819 18:02:33.308863  390826 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 18:02:33.318019  390826 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0819 18:02:33.318070  390826 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0819 18:02:33.327419  390826 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0819 18:02:33.327441  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 18:02:33.327505  390826 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19468-372744/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0819 18:02:33.327540  390826 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19468-372744/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0819 18:02:33.327513  390826 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 18:02:33.331840  390826 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0819 18:02:33.331859  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0819 18:02:34.279980  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 18:02:34.280077  390826 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 18:02:34.285199  390826 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0819 18:02:34.285234  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0819 18:02:34.603755  390826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:02:34.619504  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 18:02:34.619621  390826 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 18:02:34.624859  390826 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0819 18:02:34.624891  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
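The kubectl/kubeadm/kubelet binaries above are fetched from dl.k8s.io with a checksum=file:… hint, i.e. each download is checked against the published .sha256 before the binary is copied into /var/lib/minikube/binaries on the node. A hedged sketch of that verify-then-copy step for one binary:

// Sketch only: download a release binary and check it against its published sha256.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	const base = "https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl"
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}
	got := sha256.Sum256(bin)
	want := strings.Fields(string(sum))[0] // the .sha256 file holds the hex digest
	if hex.EncodeToString(got[:]) != want {
		panic("checksum mismatch")
	}
	// in the real flow the verified file is then scp'd to the node; writing it locally is enough for a sketch
	if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
		panic(err)
	}
	fmt.Println("kubectl verified")
}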
	I0819 18:02:35.017238  390826 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0819 18:02:35.027772  390826 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0819 18:02:35.046060  390826 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 18:02:35.063995  390826 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0819 18:02:35.081954  390826 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0819 18:02:35.085926  390826 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 18:02:35.097918  390826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:02:35.230726  390826 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 18:02:35.256273  390826 host.go:66] Checking if "ha-086149" exists ...
	I0819 18:02:35.256696  390826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:02:35.256749  390826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:02:35.272157  390826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42187
	I0819 18:02:35.272599  390826 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:02:35.273101  390826 main.go:141] libmachine: Using API Version  1
	I0819 18:02:35.273133  390826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:02:35.273423  390826 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:02:35.273619  390826 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:02:35.273745  390826 start.go:317] joinCluster: &{Name:ha-086149 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-086149 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.167 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:02:35.273856  390826 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0819 18:02:35.273872  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:02:35.276695  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:02:35.277091  390826 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:02:35.277122  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:02:35.277325  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:02:35.277514  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:02:35.277691  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:02:35.277874  390826 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/id_rsa Username:docker}
	I0819 18:02:35.434664  390826 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.167 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 18:02:35.434728  390826 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token iv0xtp.asp8701sfrdl07f7 --discovery-token-ca-cert-hash sha256:3fcbd90565c5acbc36a47b2db682cb22dce9b172c9bf3af21e506ebb67608039 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-086149-m02 --control-plane --apiserver-advertise-address=192.168.39.167 --apiserver-bind-port=8443"
	I0819 18:02:55.605897  390826 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token iv0xtp.asp8701sfrdl07f7 --discovery-token-ca-cert-hash sha256:3fcbd90565c5acbc36a47b2db682cb22dce9b172c9bf3af21e506ebb67608039 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-086149-m02 --control-plane --apiserver-advertise-address=192.168.39.167 --apiserver-bind-port=8443": (20.171137965s)
	I0819 18:02:55.605943  390826 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0819 18:02:56.123662  390826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-086149-m02 minikube.k8s.io/updated_at=2024_08_19T18_02_56_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411 minikube.k8s.io/name=ha-086149 minikube.k8s.io/primary=false
	I0819 18:02:56.272852  390826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-086149-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0819 18:02:56.436503  390826 start.go:319] duration metric: took 21.162750418s to joinCluster
	I0819 18:02:56.436592  390826 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.167 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 18:02:56.436892  390826 config.go:182] Loaded profile config "ha-086149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:02:56.438255  390826 out.go:177] * Verifying Kubernetes components...
	I0819 18:02:56.439648  390826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:02:56.697352  390826 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 18:02:56.729948  390826 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 18:02:56.730270  390826 kapi.go:59] client config for ha-086149: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/client.crt", KeyFile:"/home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/client.key", CAFile:"/home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f18d20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0819 18:02:56.730341  390826 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.249:8443
	I0819 18:02:56.730553  390826 node_ready.go:35] waiting up to 6m0s for node "ha-086149-m02" to be "Ready" ...
	I0819 18:02:56.730668  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:02:56.730681  390826 round_trippers.go:469] Request Headers:
	I0819 18:02:56.730691  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:02:56.730697  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:02:56.740148  390826 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0819 18:02:57.231105  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:02:57.231133  390826 round_trippers.go:469] Request Headers:
	I0819 18:02:57.231158  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:02:57.231172  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:02:57.236076  390826 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 18:02:57.730875  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:02:57.730895  390826 round_trippers.go:469] Request Headers:
	I0819 18:02:57.730904  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:02:57.730908  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:02:57.735538  390826 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 18:02:58.231679  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:02:58.231704  390826 round_trippers.go:469] Request Headers:
	I0819 18:02:58.231713  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:02:58.231717  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:02:58.236296  390826 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 18:02:58.731499  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:02:58.731527  390826 round_trippers.go:469] Request Headers:
	I0819 18:02:58.731537  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:02:58.731543  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:02:58.737763  390826 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0819 18:02:58.738771  390826 node_ready.go:53] node "ha-086149-m02" has status "Ready":"False"
	I0819 18:02:59.231010  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:02:59.231034  390826 round_trippers.go:469] Request Headers:
	I0819 18:02:59.231045  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:02:59.231052  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:02:59.234392  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:02:59.731233  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:02:59.731255  390826 round_trippers.go:469] Request Headers:
	I0819 18:02:59.731263  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:02:59.731267  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:02:59.734343  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:00.231336  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:00.231365  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:00.231376  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:00.231381  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:00.234918  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:00.730874  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:00.730896  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:00.730906  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:00.730910  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:00.733879  390826 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:03:01.230977  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:01.231003  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:01.231012  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:01.231017  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:01.234331  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:01.235035  390826 node_ready.go:53] node "ha-086149-m02" has status "Ready":"False"
	I0819 18:03:01.731548  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:01.731578  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:01.731590  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:01.731598  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:01.734946  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:02.231110  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:02.231141  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:02.231153  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:02.231161  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:02.235244  390826 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 18:03:02.731504  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:02.731539  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:02.731548  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:02.731552  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:02.734876  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:03.231781  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:03.231812  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:03.231821  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:03.231827  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:03.235825  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:03.236470  390826 node_ready.go:53] node "ha-086149-m02" has status "Ready":"False"
	I0819 18:03:03.730747  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:03.730778  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:03.730799  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:03.730805  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:03.792652  390826 round_trippers.go:574] Response Status: 200 OK in 61 milliseconds
	I0819 18:03:04.230764  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:04.230790  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:04.230798  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:04.230802  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:04.234364  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:04.730972  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:04.731052  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:04.731121  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:04.731132  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:04.735082  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:05.231072  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:05.231102  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:05.231116  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:05.231123  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:05.234442  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:05.730901  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:05.730927  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:05.730938  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:05.730944  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:05.734154  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:05.734752  390826 node_ready.go:53] node "ha-086149-m02" has status "Ready":"False"
	I0819 18:03:06.231650  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:06.231698  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:06.231710  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:06.231715  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:06.235728  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:06.730827  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:06.730851  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:06.730860  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:06.730864  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:06.734133  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:07.231066  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:07.231089  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:07.231097  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:07.231102  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:07.234190  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:07.731381  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:07.731407  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:07.731417  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:07.731423  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:07.734338  390826 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:03:07.735019  390826 node_ready.go:53] node "ha-086149-m02" has status "Ready":"False"
	I0819 18:03:08.231232  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:08.231257  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:08.231266  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:08.231269  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:08.234657  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:08.731627  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:08.731652  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:08.731660  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:08.731665  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:08.735050  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:09.230998  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:09.231024  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:09.231034  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:09.231048  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:09.234466  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:09.731200  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:09.731221  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:09.731228  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:09.731233  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:09.734342  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:09.735050  390826 node_ready.go:53] node "ha-086149-m02" has status "Ready":"False"
	I0819 18:03:10.231484  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:10.231508  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:10.231516  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:10.231522  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:10.234963  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:10.731474  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:10.731497  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:10.731505  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:10.731509  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:10.734155  390826 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:03:11.231584  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:11.231610  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:11.231618  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:11.231622  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:11.235119  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:11.731096  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:11.731119  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:11.731126  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:11.731130  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:11.733679  390826 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:03:12.231700  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:12.231725  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:12.231733  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:12.231737  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:12.235030  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:12.235689  390826 node_ready.go:53] node "ha-086149-m02" has status "Ready":"False"
	I0819 18:03:12.731249  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:12.731275  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:12.731283  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:12.731287  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:12.734643  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:13.231771  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:13.231796  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:13.231805  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:13.231809  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:13.235248  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:13.731231  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:13.731256  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:13.731264  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:13.731268  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:13.734118  390826 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:03:14.231122  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:14.231145  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:14.231157  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:14.231165  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:14.234050  390826 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:03:14.731726  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:14.731754  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:14.731765  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:14.731770  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:14.735456  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:14.736766  390826 node_ready.go:53] node "ha-086149-m02" has status "Ready":"False"
	I0819 18:03:15.231145  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:15.231169  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:15.231180  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:15.231187  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:15.234498  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:15.731033  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:15.731056  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:15.731064  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:15.731068  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:15.734005  390826 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:03:15.734737  390826 node_ready.go:49] node "ha-086149-m02" has status "Ready":"True"
	I0819 18:03:15.734768  390826 node_ready.go:38] duration metric: took 19.004186055s for node "ha-086149-m02" to be "Ready" ...
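The long run of GETs above is minikube polling /api/v1/nodes/ha-086149-m02 roughly every 500ms until the node's Ready condition flips to True (about 19s here). An equivalent wait written against client-go, as a hedged sketch using the kubeconfig path shown in the log:

// Sketch only: wait for a node's Ready condition, analogous to the polling in the log above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep retrying on transient errors
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19468-372744/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(context.Background(), cs, "ha-086149-m02", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node Ready")
}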
	I0819 18:03:15.734778  390826 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 18:03:15.734889  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0819 18:03:15.734902  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:15.734911  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:15.734916  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:15.739266  390826 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 18:03:15.745067  390826 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-8fjpd" in "kube-system" namespace to be "Ready" ...
	I0819 18:03:15.745161  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-8fjpd
	I0819 18:03:15.745174  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:15.745181  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:15.745187  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:15.747661  390826 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:03:15.748193  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149
	I0819 18:03:15.748207  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:15.748214  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:15.748218  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:15.750451  390826 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:03:15.750961  390826 pod_ready.go:93] pod "coredns-6f6b679f8f-8fjpd" in "kube-system" namespace has status "Ready":"True"
	I0819 18:03:15.750984  390826 pod_ready.go:82] duration metric: took 5.891312ms for pod "coredns-6f6b679f8f-8fjpd" in "kube-system" namespace to be "Ready" ...
	I0819 18:03:15.750995  390826 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-p65cb" in "kube-system" namespace to be "Ready" ...
	I0819 18:03:15.751059  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-p65cb
	I0819 18:03:15.751069  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:15.751079  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:15.751087  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:15.753277  390826 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:03:15.753835  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149
	I0819 18:03:15.753852  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:15.753861  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:15.753866  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:15.757857  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:15.758499  390826 pod_ready.go:93] pod "coredns-6f6b679f8f-p65cb" in "kube-system" namespace has status "Ready":"True"
	I0819 18:03:15.758517  390826 pod_ready.go:82] duration metric: took 7.514249ms for pod "coredns-6f6b679f8f-p65cb" in "kube-system" namespace to be "Ready" ...
	I0819 18:03:15.758525  390826 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-086149" in "kube-system" namespace to be "Ready" ...
	I0819 18:03:15.758580  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-086149
	I0819 18:03:15.758589  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:15.758595  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:15.758599  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:15.760699  390826 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:03:15.761371  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149
	I0819 18:03:15.761388  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:15.761398  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:15.761405  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:15.763562  390826 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:03:15.764008  390826 pod_ready.go:93] pod "etcd-ha-086149" in "kube-system" namespace has status "Ready":"True"
	I0819 18:03:15.764023  390826 pod_ready.go:82] duration metric: took 5.492637ms for pod "etcd-ha-086149" in "kube-system" namespace to be "Ready" ...
	I0819 18:03:15.764031  390826 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-086149-m02" in "kube-system" namespace to be "Ready" ...
	I0819 18:03:15.764072  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-086149-m02
	I0819 18:03:15.764080  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:15.764087  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:15.764090  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:15.765969  390826 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 18:03:15.766584  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:15.766601  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:15.766608  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:15.766613  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:15.768705  390826 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:03:15.769197  390826 pod_ready.go:93] pod "etcd-ha-086149-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 18:03:15.769216  390826 pod_ready.go:82] duration metric: took 5.179803ms for pod "etcd-ha-086149-m02" in "kube-system" namespace to be "Ready" ...
	I0819 18:03:15.769231  390826 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-086149" in "kube-system" namespace to be "Ready" ...
	I0819 18:03:15.931631  390826 request.go:632] Waited for 162.326929ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-086149
	I0819 18:03:15.931721  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-086149
	I0819 18:03:15.931728  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:15.931742  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:15.931759  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:15.935829  390826 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 18:03:16.131866  390826 request.go:632] Waited for 195.373418ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-086149
	I0819 18:03:16.131924  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149
	I0819 18:03:16.131928  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:16.131936  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:16.131940  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:16.135634  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:16.136131  390826 pod_ready.go:93] pod "kube-apiserver-ha-086149" in "kube-system" namespace has status "Ready":"True"
	I0819 18:03:16.136151  390826 pod_ready.go:82] duration metric: took 366.910938ms for pod "kube-apiserver-ha-086149" in "kube-system" namespace to be "Ready" ...
	I0819 18:03:16.136163  390826 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-086149-m02" in "kube-system" namespace to be "Ready" ...
	I0819 18:03:16.331318  390826 request.go:632] Waited for 195.07968ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-086149-m02
	I0819 18:03:16.331422  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-086149-m02
	I0819 18:03:16.331434  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:16.331447  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:16.331452  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:16.334968  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:16.532129  390826 request.go:632] Waited for 196.406522ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:16.532207  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:16.532217  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:16.532237  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:16.532246  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:16.535691  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:16.536191  390826 pod_ready.go:93] pod "kube-apiserver-ha-086149-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 18:03:16.536210  390826 pod_ready.go:82] duration metric: took 400.038947ms for pod "kube-apiserver-ha-086149-m02" in "kube-system" namespace to be "Ready" ...
	I0819 18:03:16.536233  390826 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-086149" in "kube-system" namespace to be "Ready" ...
	I0819 18:03:16.731414  390826 request.go:632] Waited for 195.094037ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-086149
	I0819 18:03:16.731500  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-086149
	I0819 18:03:16.731512  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:16.731525  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:16.731533  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:16.735046  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:16.931185  390826 request.go:632] Waited for 195.318382ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-086149
	I0819 18:03:16.931265  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149
	I0819 18:03:16.931272  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:16.931282  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:16.931291  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:16.934590  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:16.935075  390826 pod_ready.go:93] pod "kube-controller-manager-ha-086149" in "kube-system" namespace has status "Ready":"True"
	I0819 18:03:16.935096  390826 pod_ready.go:82] duration metric: took 398.853679ms for pod "kube-controller-manager-ha-086149" in "kube-system" namespace to be "Ready" ...
	I0819 18:03:16.935110  390826 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-086149-m02" in "kube-system" namespace to be "Ready" ...
	I0819 18:03:17.131090  390826 request.go:632] Waited for 195.897067ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-086149-m02
	I0819 18:03:17.131170  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-086149-m02
	I0819 18:03:17.131176  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:17.131183  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:17.131195  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:17.134780  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:17.331893  390826 request.go:632] Waited for 196.406154ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:17.332004  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:17.332016  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:17.332028  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:17.332037  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:17.335217  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:17.336031  390826 pod_ready.go:93] pod "kube-controller-manager-ha-086149-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 18:03:17.336050  390826 pod_ready.go:82] duration metric: took 400.932335ms for pod "kube-controller-manager-ha-086149-m02" in "kube-system" namespace to be "Ready" ...
	I0819 18:03:17.336063  390826 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fwkf2" in "kube-system" namespace to be "Ready" ...
	I0819 18:03:17.531346  390826 request.go:632] Waited for 195.177557ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fwkf2
	I0819 18:03:17.531423  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fwkf2
	I0819 18:03:17.531432  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:17.531443  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:17.531454  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:17.534764  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:17.731902  390826 request.go:632] Waited for 196.380838ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-086149
	I0819 18:03:17.731973  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149
	I0819 18:03:17.731980  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:17.732099  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:17.732153  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:17.735125  390826 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:03:17.735706  390826 pod_ready.go:93] pod "kube-proxy-fwkf2" in "kube-system" namespace has status "Ready":"True"
	I0819 18:03:17.735725  390826 pod_ready.go:82] duration metric: took 399.655828ms for pod "kube-proxy-fwkf2" in "kube-system" namespace to be "Ready" ...
	I0819 18:03:17.735736  390826 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vx94r" in "kube-system" namespace to be "Ready" ...
	I0819 18:03:17.931749  390826 request.go:632] Waited for 195.943138ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vx94r
	I0819 18:03:17.931819  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vx94r
	I0819 18:03:17.931824  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:17.931832  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:17.931839  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:17.935457  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:18.131628  390826 request.go:632] Waited for 195.400935ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:18.131709  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:18.131715  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:18.131723  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:18.131728  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:18.135208  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:18.136090  390826 pod_ready.go:93] pod "kube-proxy-vx94r" in "kube-system" namespace has status "Ready":"True"
	I0819 18:03:18.136112  390826 pod_ready.go:82] duration metric: took 400.367682ms for pod "kube-proxy-vx94r" in "kube-system" namespace to be "Ready" ...
	I0819 18:03:18.136123  390826 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-086149" in "kube-system" namespace to be "Ready" ...
	I0819 18:03:18.331374  390826 request.go:632] Waited for 195.162024ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-086149
	I0819 18:03:18.331465  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-086149
	I0819 18:03:18.331472  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:18.331484  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:18.331491  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:18.334662  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:18.531670  390826 request.go:632] Waited for 196.392053ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-086149
	I0819 18:03:18.531752  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149
	I0819 18:03:18.531757  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:18.531765  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:18.531772  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:18.535077  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:18.535730  390826 pod_ready.go:93] pod "kube-scheduler-ha-086149" in "kube-system" namespace has status "Ready":"True"
	I0819 18:03:18.535753  390826 pod_ready.go:82] duration metric: took 399.624046ms for pod "kube-scheduler-ha-086149" in "kube-system" namespace to be "Ready" ...
	I0819 18:03:18.535765  390826 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-086149-m02" in "kube-system" namespace to be "Ready" ...
	I0819 18:03:18.731816  390826 request.go:632] Waited for 195.936826ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-086149-m02
	I0819 18:03:18.731898  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-086149-m02
	I0819 18:03:18.731904  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:18.731910  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:18.731916  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:18.735060  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:18.931057  390826 request.go:632] Waited for 195.342395ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:18.931154  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:18.931161  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:18.931172  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:18.931177  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:18.934179  390826 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:03:18.935067  390826 pod_ready.go:93] pod "kube-scheduler-ha-086149-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 18:03:18.935089  390826 pod_ready.go:82] duration metric: took 399.3179ms for pod "kube-scheduler-ha-086149-m02" in "kube-system" namespace to be "Ready" ...
	I0819 18:03:18.935103  390826 pod_ready.go:39] duration metric: took 3.20028863s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
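	The readiness polling recorded above can be reproduced by hand with kubectl; a minimal sketch, assuming the kubeconfig context is named after the profile (ha-086149, minikube's default):

	    # Check the Ready condition of the second control-plane node
	    kubectl --context ha-086149 get node ha-086149-m02 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	    # List the system-critical pods the log waits on (coredns, etcd, kube-apiserver, ...)
	    kubectl --context ha-086149 -n kube-system get pods -o wide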
	I0819 18:03:18.935122  390826 api_server.go:52] waiting for apiserver process to appear ...
	I0819 18:03:18.935181  390826 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:03:18.951375  390826 api_server.go:72] duration metric: took 22.514748322s to wait for apiserver process to appear ...
	I0819 18:03:18.951401  390826 api_server.go:88] waiting for apiserver healthz status ...
	I0819 18:03:18.951426  390826 api_server.go:253] Checking apiserver healthz at https://192.168.39.249:8443/healthz ...
	I0819 18:03:18.957673  390826 api_server.go:279] https://192.168.39.249:8443/healthz returned 200:
	ok
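	The healthz probe above can also be issued directly; a minimal sketch (the /healthz endpoint is typically readable anonymously under default RBAC via the system:public-info-viewer role, and -k skips verification of the cluster's self-signed CA):

	    curl -sk https://192.168.39.249:8443/healthz
	    # expected output on a healthy apiserver: ok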
	I0819 18:03:18.957760  390826 round_trippers.go:463] GET https://192.168.39.249:8443/version
	I0819 18:03:18.957772  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:18.957784  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:18.957799  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:18.958846  390826 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 18:03:18.958950  390826 api_server.go:141] control plane version: v1.31.0
	I0819 18:03:18.958982  390826 api_server.go:131] duration metric: took 7.572392ms to wait for apiserver health ...
	I0819 18:03:18.958993  390826 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 18:03:19.131417  390826 request.go:632] Waited for 172.338441ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0819 18:03:19.131494  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0819 18:03:19.131503  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:19.131511  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:19.131519  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:19.138959  390826 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0819 18:03:19.144476  390826 system_pods.go:59] 17 kube-system pods found
	I0819 18:03:19.144510  390826 system_pods.go:61] "coredns-6f6b679f8f-8fjpd" [4bedb900-107a-4f7e-aae7-391b18da4a26] Running
	I0819 18:03:19.144515  390826 system_pods.go:61] "coredns-6f6b679f8f-p65cb" [7f30449e-d4ea-4d6f-a63a-08551024bd04] Running
	I0819 18:03:19.144520  390826 system_pods.go:61] "etcd-ha-086149" [0dc3ab02-31e8-4110-accd-85d2e18db232] Running
	I0819 18:03:19.144524  390826 system_pods.go:61] "etcd-ha-086149-m02" [06fcadf6-a4b1-40c8-8ce8-bc1df1fad746] Running
	I0819 18:03:19.144527  390826 system_pods.go:61] "kindnet-dgj9c" [142f260c-d74e-411f-ac87-f4398f573b94] Running
	I0819 18:03:19.144530  390826 system_pods.go:61] "kindnet-vb66s" [9322737a-5f8a-4d5a-a7d1-ba076bc8f2d8] Running
	I0819 18:03:19.144534  390826 system_pods.go:61] "kube-apiserver-ha-086149" [98466e03-c8b3-4d70-97b0-ba24afe776a9] Running
	I0819 18:03:19.144537  390826 system_pods.go:61] "kube-apiserver-ha-086149-m02" [afbc7c61-72ec-4571-9a5e-3d8afd08ae6b] Running
	I0819 18:03:19.144540  390826 system_pods.go:61] "kube-controller-manager-ha-086149" [910295fd-3d2e-4390-b9cd-9e1169813375] Running
	I0819 18:03:19.144544  390826 system_pods.go:61] "kube-controller-manager-ha-086149-m02" [dad58fc3-85d8-444c-bfb8-3a74c5016f32] Running
	I0819 18:03:19.144547  390826 system_pods.go:61] "kube-proxy-fwkf2" [001a3fe7-633c-44f8-9a8c-7401cec7af54] Running
	I0819 18:03:19.144550  390826 system_pods.go:61] "kube-proxy-vx94r" [8960702f-2f02-4e67-9d4f-02860491e5f2] Running
	I0819 18:03:19.144554  390826 system_pods.go:61] "kube-scheduler-ha-086149" [6d113319-d44e-4a5a-8e0a-f0a890e13e43] Running
	I0819 18:03:19.144557  390826 system_pods.go:61] "kube-scheduler-ha-086149-m02" [5d64ff86-a24d-4836-a7d7-ebb968bb39c8] Running
	I0819 18:03:19.144560  390826 system_pods.go:61] "kube-vip-ha-086149" [25176ed4-e5b0-4e5e-9835-736c856d2643] Running
	I0819 18:03:19.144563  390826 system_pods.go:61] "kube-vip-ha-086149-m02" [8c6b400d-f73e-44b5-a31f-3607329360be] Running
	I0819 18:03:19.144566  390826 system_pods.go:61] "storage-provisioner" [c12159a8-5f84-4d19-aa54-7b56a9669f6c] Running
	I0819 18:03:19.144572  390826 system_pods.go:74] duration metric: took 185.572931ms to wait for pod list to return data ...
	I0819 18:03:19.144587  390826 default_sa.go:34] waiting for default service account to be created ...
	I0819 18:03:19.331565  390826 request.go:632] Waited for 186.891864ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/default/serviceaccounts
	I0819 18:03:19.331653  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/default/serviceaccounts
	I0819 18:03:19.331663  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:19.331685  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:19.331691  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:19.335645  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:19.335910  390826 default_sa.go:45] found service account: "default"
	I0819 18:03:19.335931  390826 default_sa.go:55] duration metric: took 191.337823ms for default service account to be created ...
	I0819 18:03:19.335940  390826 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 18:03:19.531534  390826 request.go:632] Waited for 195.502082ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0819 18:03:19.531599  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0819 18:03:19.531606  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:19.531620  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:19.531628  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:19.536807  390826 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 18:03:19.543031  390826 system_pods.go:86] 17 kube-system pods found
	I0819 18:03:19.543067  390826 system_pods.go:89] "coredns-6f6b679f8f-8fjpd" [4bedb900-107a-4f7e-aae7-391b18da4a26] Running
	I0819 18:03:19.543076  390826 system_pods.go:89] "coredns-6f6b679f8f-p65cb" [7f30449e-d4ea-4d6f-a63a-08551024bd04] Running
	I0819 18:03:19.543082  390826 system_pods.go:89] "etcd-ha-086149" [0dc3ab02-31e8-4110-accd-85d2e18db232] Running
	I0819 18:03:19.543088  390826 system_pods.go:89] "etcd-ha-086149-m02" [06fcadf6-a4b1-40c8-8ce8-bc1df1fad746] Running
	I0819 18:03:19.543093  390826 system_pods.go:89] "kindnet-dgj9c" [142f260c-d74e-411f-ac87-f4398f573b94] Running
	I0819 18:03:19.543099  390826 system_pods.go:89] "kindnet-vb66s" [9322737a-5f8a-4d5a-a7d1-ba076bc8f2d8] Running
	I0819 18:03:19.543102  390826 system_pods.go:89] "kube-apiserver-ha-086149" [98466e03-c8b3-4d70-97b0-ba24afe776a9] Running
	I0819 18:03:19.543106  390826 system_pods.go:89] "kube-apiserver-ha-086149-m02" [afbc7c61-72ec-4571-9a5e-3d8afd08ae6b] Running
	I0819 18:03:19.543111  390826 system_pods.go:89] "kube-controller-manager-ha-086149" [910295fd-3d2e-4390-b9cd-9e1169813375] Running
	I0819 18:03:19.543117  390826 system_pods.go:89] "kube-controller-manager-ha-086149-m02" [dad58fc3-85d8-444c-bfb8-3a74c5016f32] Running
	I0819 18:03:19.543127  390826 system_pods.go:89] "kube-proxy-fwkf2" [001a3fe7-633c-44f8-9a8c-7401cec7af54] Running
	I0819 18:03:19.543132  390826 system_pods.go:89] "kube-proxy-vx94r" [8960702f-2f02-4e67-9d4f-02860491e5f2] Running
	I0819 18:03:19.543141  390826 system_pods.go:89] "kube-scheduler-ha-086149" [6d113319-d44e-4a5a-8e0a-f0a890e13e43] Running
	I0819 18:03:19.543147  390826 system_pods.go:89] "kube-scheduler-ha-086149-m02" [5d64ff86-a24d-4836-a7d7-ebb968bb39c8] Running
	I0819 18:03:19.543152  390826 system_pods.go:89] "kube-vip-ha-086149" [25176ed4-e5b0-4e5e-9835-736c856d2643] Running
	I0819 18:03:19.543157  390826 system_pods.go:89] "kube-vip-ha-086149-m02" [8c6b400d-f73e-44b5-a31f-3607329360be] Running
	I0819 18:03:19.543164  390826 system_pods.go:89] "storage-provisioner" [c12159a8-5f84-4d19-aa54-7b56a9669f6c] Running
	I0819 18:03:19.543173  390826 system_pods.go:126] duration metric: took 207.224242ms to wait for k8s-apps to be running ...
	I0819 18:03:19.543184  390826 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 18:03:19.543240  390826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:03:19.559271  390826 system_svc.go:56] duration metric: took 16.074576ms WaitForService to wait for kubelet
	I0819 18:03:19.559304  390826 kubeadm.go:582] duration metric: took 23.122680186s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 18:03:19.559326  390826 node_conditions.go:102] verifying NodePressure condition ...
	I0819 18:03:19.731891  390826 request.go:632] Waited for 172.461302ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes
	I0819 18:03:19.731971  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes
	I0819 18:03:19.731978  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:19.731996  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:19.732004  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:19.735656  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:19.736479  390826 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 18:03:19.736505  390826 node_conditions.go:123] node cpu capacity is 2
	I0819 18:03:19.736518  390826 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 18:03:19.736521  390826 node_conditions.go:123] node cpu capacity is 2
	I0819 18:03:19.736526  390826 node_conditions.go:105] duration metric: took 177.195708ms to run NodePressure ...
	I0819 18:03:19.736541  390826 start.go:241] waiting for startup goroutines ...
	I0819 18:03:19.736573  390826 start.go:255] writing updated cluster config ...
	I0819 18:03:19.738641  390826 out.go:201] 
	I0819 18:03:19.740006  390826 config.go:182] Loaded profile config "ha-086149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:03:19.740106  390826 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/config.json ...
	I0819 18:03:19.741755  390826 out.go:177] * Starting "ha-086149-m03" control-plane node in "ha-086149" cluster
	I0819 18:03:19.742817  390826 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 18:03:19.742845  390826 cache.go:56] Caching tarball of preloaded images
	I0819 18:03:19.742979  390826 preload.go:172] Found /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 18:03:19.742997  390826 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 18:03:19.743124  390826 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/config.json ...
	I0819 18:03:19.743337  390826 start.go:360] acquireMachinesLock for ha-086149-m03: {Name:mk24ba67a747357e9ce40f1e460d2bb0bc59cc75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 18:03:19.743395  390826 start.go:364] duration metric: took 31.394µs to acquireMachinesLock for "ha-086149-m03"
	I0819 18:03:19.743420  390826 start.go:93] Provisioning new machine with config: &{Name:ha-086149 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-086149 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.167 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 18:03:19.743550  390826 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0819 18:03:19.744878  390826 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 18:03:19.744991  390826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:03:19.745035  390826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:03:19.760980  390826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37165
	I0819 18:03:19.761382  390826 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:03:19.761864  390826 main.go:141] libmachine: Using API Version  1
	I0819 18:03:19.761897  390826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:03:19.762259  390826 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:03:19.762470  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetMachineName
	I0819 18:03:19.762620  390826 main.go:141] libmachine: (ha-086149-m03) Calling .DriverName
	I0819 18:03:19.762770  390826 start.go:159] libmachine.API.Create for "ha-086149" (driver="kvm2")
	I0819 18:03:19.762798  390826 client.go:168] LocalClient.Create starting
	I0819 18:03:19.762836  390826 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem
	I0819 18:03:19.762870  390826 main.go:141] libmachine: Decoding PEM data...
	I0819 18:03:19.762886  390826 main.go:141] libmachine: Parsing certificate...
	I0819 18:03:19.762957  390826 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem
	I0819 18:03:19.762978  390826 main.go:141] libmachine: Decoding PEM data...
	I0819 18:03:19.762992  390826 main.go:141] libmachine: Parsing certificate...
	I0819 18:03:19.763008  390826 main.go:141] libmachine: Running pre-create checks...
	I0819 18:03:19.763016  390826 main.go:141] libmachine: (ha-086149-m03) Calling .PreCreateCheck
	I0819 18:03:19.763200  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetConfigRaw
	I0819 18:03:19.763570  390826 main.go:141] libmachine: Creating machine...
	I0819 18:03:19.763589  390826 main.go:141] libmachine: (ha-086149-m03) Calling .Create
	I0819 18:03:19.763734  390826 main.go:141] libmachine: (ha-086149-m03) Creating KVM machine...
	I0819 18:03:19.764990  390826 main.go:141] libmachine: (ha-086149-m03) DBG | found existing default KVM network
	I0819 18:03:19.765107  390826 main.go:141] libmachine: (ha-086149-m03) DBG | found existing private KVM network mk-ha-086149
	I0819 18:03:19.765251  390826 main.go:141] libmachine: (ha-086149-m03) Setting up store path in /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m03 ...
	I0819 18:03:19.765273  390826 main.go:141] libmachine: (ha-086149-m03) Building disk image from file:///home/jenkins/minikube-integration/19468-372744/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 18:03:19.765333  390826 main.go:141] libmachine: (ha-086149-m03) DBG | I0819 18:03:19.765243  391617 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 18:03:19.765467  390826 main.go:141] libmachine: (ha-086149-m03) Downloading /home/jenkins/minikube-integration/19468-372744/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19468-372744/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 18:03:20.039210  390826 main.go:141] libmachine: (ha-086149-m03) DBG | I0819 18:03:20.039078  391617 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m03/id_rsa...
	I0819 18:03:20.302554  390826 main.go:141] libmachine: (ha-086149-m03) DBG | I0819 18:03:20.302429  391617 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m03/ha-086149-m03.rawdisk...
	I0819 18:03:20.302587  390826 main.go:141] libmachine: (ha-086149-m03) DBG | Writing magic tar header
	I0819 18:03:20.302599  390826 main.go:141] libmachine: (ha-086149-m03) DBG | Writing SSH key tar header
	I0819 18:03:20.302607  390826 main.go:141] libmachine: (ha-086149-m03) DBG | I0819 18:03:20.302554  391617 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m03 ...
	I0819 18:03:20.302720  390826 main.go:141] libmachine: (ha-086149-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m03
	I0819 18:03:20.302762  390826 main.go:141] libmachine: (ha-086149-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744/.minikube/machines
	I0819 18:03:20.302777  390826 main.go:141] libmachine: (ha-086149-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 18:03:20.302789  390826 main.go:141] libmachine: (ha-086149-m03) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m03 (perms=drwx------)
	I0819 18:03:20.302843  390826 main.go:141] libmachine: (ha-086149-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744
	I0819 18:03:20.302895  390826 main.go:141] libmachine: (ha-086149-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 18:03:20.302913  390826 main.go:141] libmachine: (ha-086149-m03) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744/.minikube/machines (perms=drwxr-xr-x)
	I0819 18:03:20.302928  390826 main.go:141] libmachine: (ha-086149-m03) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744/.minikube (perms=drwxr-xr-x)
	I0819 18:03:20.302936  390826 main.go:141] libmachine: (ha-086149-m03) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744 (perms=drwxrwxr-x)
	I0819 18:03:20.302947  390826 main.go:141] libmachine: (ha-086149-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 18:03:20.302958  390826 main.go:141] libmachine: (ha-086149-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 18:03:20.302976  390826 main.go:141] libmachine: (ha-086149-m03) Creating domain...
	I0819 18:03:20.302994  390826 main.go:141] libmachine: (ha-086149-m03) DBG | Checking permissions on dir: /home/jenkins
	I0819 18:03:20.303012  390826 main.go:141] libmachine: (ha-086149-m03) DBG | Checking permissions on dir: /home
	I0819 18:03:20.303023  390826 main.go:141] libmachine: (ha-086149-m03) DBG | Skipping /home - not owner
	I0819 18:03:20.303803  390826 main.go:141] libmachine: (ha-086149-m03) define libvirt domain using xml: 
	I0819 18:03:20.303825  390826 main.go:141] libmachine: (ha-086149-m03) <domain type='kvm'>
	I0819 18:03:20.303836  390826 main.go:141] libmachine: (ha-086149-m03)   <name>ha-086149-m03</name>
	I0819 18:03:20.303844  390826 main.go:141] libmachine: (ha-086149-m03)   <memory unit='MiB'>2200</memory>
	I0819 18:03:20.303874  390826 main.go:141] libmachine: (ha-086149-m03)   <vcpu>2</vcpu>
	I0819 18:03:20.303896  390826 main.go:141] libmachine: (ha-086149-m03)   <features>
	I0819 18:03:20.303906  390826 main.go:141] libmachine: (ha-086149-m03)     <acpi/>
	I0819 18:03:20.303915  390826 main.go:141] libmachine: (ha-086149-m03)     <apic/>
	I0819 18:03:20.303920  390826 main.go:141] libmachine: (ha-086149-m03)     <pae/>
	I0819 18:03:20.303927  390826 main.go:141] libmachine: (ha-086149-m03)     
	I0819 18:03:20.303933  390826 main.go:141] libmachine: (ha-086149-m03)   </features>
	I0819 18:03:20.303940  390826 main.go:141] libmachine: (ha-086149-m03)   <cpu mode='host-passthrough'>
	I0819 18:03:20.303945  390826 main.go:141] libmachine: (ha-086149-m03)   
	I0819 18:03:20.303950  390826 main.go:141] libmachine: (ha-086149-m03)   </cpu>
	I0819 18:03:20.303955  390826 main.go:141] libmachine: (ha-086149-m03)   <os>
	I0819 18:03:20.303962  390826 main.go:141] libmachine: (ha-086149-m03)     <type>hvm</type>
	I0819 18:03:20.303968  390826 main.go:141] libmachine: (ha-086149-m03)     <boot dev='cdrom'/>
	I0819 18:03:20.303973  390826 main.go:141] libmachine: (ha-086149-m03)     <boot dev='hd'/>
	I0819 18:03:20.303979  390826 main.go:141] libmachine: (ha-086149-m03)     <bootmenu enable='no'/>
	I0819 18:03:20.303983  390826 main.go:141] libmachine: (ha-086149-m03)   </os>
	I0819 18:03:20.303989  390826 main.go:141] libmachine: (ha-086149-m03)   <devices>
	I0819 18:03:20.303996  390826 main.go:141] libmachine: (ha-086149-m03)     <disk type='file' device='cdrom'>
	I0819 18:03:20.304004  390826 main.go:141] libmachine: (ha-086149-m03)       <source file='/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m03/boot2docker.iso'/>
	I0819 18:03:20.304015  390826 main.go:141] libmachine: (ha-086149-m03)       <target dev='hdc' bus='scsi'/>
	I0819 18:03:20.304023  390826 main.go:141] libmachine: (ha-086149-m03)       <readonly/>
	I0819 18:03:20.304027  390826 main.go:141] libmachine: (ha-086149-m03)     </disk>
	I0819 18:03:20.304034  390826 main.go:141] libmachine: (ha-086149-m03)     <disk type='file' device='disk'>
	I0819 18:03:20.304043  390826 main.go:141] libmachine: (ha-086149-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 18:03:20.304051  390826 main.go:141] libmachine: (ha-086149-m03)       <source file='/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m03/ha-086149-m03.rawdisk'/>
	I0819 18:03:20.304058  390826 main.go:141] libmachine: (ha-086149-m03)       <target dev='hda' bus='virtio'/>
	I0819 18:03:20.304063  390826 main.go:141] libmachine: (ha-086149-m03)     </disk>
	I0819 18:03:20.304071  390826 main.go:141] libmachine: (ha-086149-m03)     <interface type='network'>
	I0819 18:03:20.304076  390826 main.go:141] libmachine: (ha-086149-m03)       <source network='mk-ha-086149'/>
	I0819 18:03:20.304083  390826 main.go:141] libmachine: (ha-086149-m03)       <model type='virtio'/>
	I0819 18:03:20.304104  390826 main.go:141] libmachine: (ha-086149-m03)     </interface>
	I0819 18:03:20.304122  390826 main.go:141] libmachine: (ha-086149-m03)     <interface type='network'>
	I0819 18:03:20.304149  390826 main.go:141] libmachine: (ha-086149-m03)       <source network='default'/>
	I0819 18:03:20.304166  390826 main.go:141] libmachine: (ha-086149-m03)       <model type='virtio'/>
	I0819 18:03:20.304181  390826 main.go:141] libmachine: (ha-086149-m03)     </interface>
	I0819 18:03:20.304193  390826 main.go:141] libmachine: (ha-086149-m03)     <serial type='pty'>
	I0819 18:03:20.304203  390826 main.go:141] libmachine: (ha-086149-m03)       <target port='0'/>
	I0819 18:03:20.304214  390826 main.go:141] libmachine: (ha-086149-m03)     </serial>
	I0819 18:03:20.304231  390826 main.go:141] libmachine: (ha-086149-m03)     <console type='pty'>
	I0819 18:03:20.304249  390826 main.go:141] libmachine: (ha-086149-m03)       <target type='serial' port='0'/>
	I0819 18:03:20.304261  390826 main.go:141] libmachine: (ha-086149-m03)     </console>
	I0819 18:03:20.304271  390826 main.go:141] libmachine: (ha-086149-m03)     <rng model='virtio'>
	I0819 18:03:20.304286  390826 main.go:141] libmachine: (ha-086149-m03)       <backend model='random'>/dev/random</backend>
	I0819 18:03:20.304296  390826 main.go:141] libmachine: (ha-086149-m03)     </rng>
	I0819 18:03:20.304308  390826 main.go:141] libmachine: (ha-086149-m03)     
	I0819 18:03:20.304322  390826 main.go:141] libmachine: (ha-086149-m03)     
	I0819 18:03:20.304335  390826 main.go:141] libmachine: (ha-086149-m03)   </devices>
	I0819 18:03:20.304345  390826 main.go:141] libmachine: (ha-086149-m03) </domain>
	I0819 18:03:20.304359  390826 main.go:141] libmachine: (ha-086149-m03) 
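	The domain XML defined line by line above can be inspected after the fact with virsh; a minimal sketch using the qemu:///system URI from the machine config:

	    # Dump the libvirt definition minikube generated for the new node
	    virsh --connect qemu:///system dumpxml ha-086149-m03
	    # Show the DHCP-assigned address once the guest is up (cf. "Waiting to get IP" below)
	    virsh --connect qemu:///system domifaddr ha-086149-m03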
	I0819 18:03:20.311221  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:ae:a0:91 in network default
	I0819 18:03:20.311840  390826 main.go:141] libmachine: (ha-086149-m03) Ensuring networks are active...
	I0819 18:03:20.311863  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:20.312607  390826 main.go:141] libmachine: (ha-086149-m03) Ensuring network default is active
	I0819 18:03:20.312955  390826 main.go:141] libmachine: (ha-086149-m03) Ensuring network mk-ha-086149 is active
	I0819 18:03:20.313312  390826 main.go:141] libmachine: (ha-086149-m03) Getting domain xml...
	I0819 18:03:20.314122  390826 main.go:141] libmachine: (ha-086149-m03) Creating domain...
	I0819 18:03:21.562949  390826 main.go:141] libmachine: (ha-086149-m03) Waiting to get IP...
	I0819 18:03:21.563827  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:21.564282  390826 main.go:141] libmachine: (ha-086149-m03) DBG | unable to find current IP address of domain ha-086149-m03 in network mk-ha-086149
	I0819 18:03:21.564318  390826 main.go:141] libmachine: (ha-086149-m03) DBG | I0819 18:03:21.564272  391617 retry.go:31] will retry after 287.519385ms: waiting for machine to come up
	I0819 18:03:21.853642  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:21.854188  390826 main.go:141] libmachine: (ha-086149-m03) DBG | unable to find current IP address of domain ha-086149-m03 in network mk-ha-086149
	I0819 18:03:21.854218  390826 main.go:141] libmachine: (ha-086149-m03) DBG | I0819 18:03:21.854115  391617 retry.go:31] will retry after 380.562809ms: waiting for machine to come up
	I0819 18:03:22.236389  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:22.236849  390826 main.go:141] libmachine: (ha-086149-m03) DBG | unable to find current IP address of domain ha-086149-m03 in network mk-ha-086149
	I0819 18:03:22.236877  390826 main.go:141] libmachine: (ha-086149-m03) DBG | I0819 18:03:22.236812  391617 retry.go:31] will retry after 327.555766ms: waiting for machine to come up
	I0819 18:03:22.566254  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:22.566623  390826 main.go:141] libmachine: (ha-086149-m03) DBG | unable to find current IP address of domain ha-086149-m03 in network mk-ha-086149
	I0819 18:03:22.566648  390826 main.go:141] libmachine: (ha-086149-m03) DBG | I0819 18:03:22.566579  391617 retry.go:31] will retry after 411.488107ms: waiting for machine to come up
	I0819 18:03:22.979125  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:22.979687  390826 main.go:141] libmachine: (ha-086149-m03) DBG | unable to find current IP address of domain ha-086149-m03 in network mk-ha-086149
	I0819 18:03:22.979717  390826 main.go:141] libmachine: (ha-086149-m03) DBG | I0819 18:03:22.979605  391617 retry.go:31] will retry after 520.603963ms: waiting for machine to come up
	I0819 18:03:23.502110  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:23.502597  390826 main.go:141] libmachine: (ha-086149-m03) DBG | unable to find current IP address of domain ha-086149-m03 in network mk-ha-086149
	I0819 18:03:23.502620  390826 main.go:141] libmachine: (ha-086149-m03) DBG | I0819 18:03:23.502547  391617 retry.go:31] will retry after 785.663535ms: waiting for machine to come up
	I0819 18:03:24.289488  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:24.289969  390826 main.go:141] libmachine: (ha-086149-m03) DBG | unable to find current IP address of domain ha-086149-m03 in network mk-ha-086149
	I0819 18:03:24.289999  390826 main.go:141] libmachine: (ha-086149-m03) DBG | I0819 18:03:24.289903  391617 retry.go:31] will retry after 1.114679695s: waiting for machine to come up
	I0819 18:03:25.405954  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:25.406298  390826 main.go:141] libmachine: (ha-086149-m03) DBG | unable to find current IP address of domain ha-086149-m03 in network mk-ha-086149
	I0819 18:03:25.406320  390826 main.go:141] libmachine: (ha-086149-m03) DBG | I0819 18:03:25.406252  391617 retry.go:31] will retry after 1.122956034s: waiting for machine to come up
	I0819 18:03:26.530546  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:26.530920  390826 main.go:141] libmachine: (ha-086149-m03) DBG | unable to find current IP address of domain ha-086149-m03 in network mk-ha-086149
	I0819 18:03:26.530945  390826 main.go:141] libmachine: (ha-086149-m03) DBG | I0819 18:03:26.530869  391617 retry.go:31] will retry after 1.212325896s: waiting for machine to come up
	I0819 18:03:27.744699  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:27.745099  390826 main.go:141] libmachine: (ha-086149-m03) DBG | unable to find current IP address of domain ha-086149-m03 in network mk-ha-086149
	I0819 18:03:27.745134  390826 main.go:141] libmachine: (ha-086149-m03) DBG | I0819 18:03:27.745053  391617 retry.go:31] will retry after 1.909860275s: waiting for machine to come up
	I0819 18:03:29.657018  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:29.657535  390826 main.go:141] libmachine: (ha-086149-m03) DBG | unable to find current IP address of domain ha-086149-m03 in network mk-ha-086149
	I0819 18:03:29.657560  390826 main.go:141] libmachine: (ha-086149-m03) DBG | I0819 18:03:29.657483  391617 retry.go:31] will retry after 2.070750747s: waiting for machine to come up
	I0819 18:03:31.729452  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:31.729972  390826 main.go:141] libmachine: (ha-086149-m03) DBG | unable to find current IP address of domain ha-086149-m03 in network mk-ha-086149
	I0819 18:03:31.730001  390826 main.go:141] libmachine: (ha-086149-m03) DBG | I0819 18:03:31.729906  391617 retry.go:31] will retry after 2.499787973s: waiting for machine to come up
	I0819 18:03:34.231619  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:34.232035  390826 main.go:141] libmachine: (ha-086149-m03) DBG | unable to find current IP address of domain ha-086149-m03 in network mk-ha-086149
	I0819 18:03:34.232068  390826 main.go:141] libmachine: (ha-086149-m03) DBG | I0819 18:03:34.231974  391617 retry.go:31] will retry after 3.724609684s: waiting for machine to come up
	I0819 18:03:37.960873  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:37.961342  390826 main.go:141] libmachine: (ha-086149-m03) DBG | unable to find current IP address of domain ha-086149-m03 in network mk-ha-086149
	I0819 18:03:37.961377  390826 main.go:141] libmachine: (ha-086149-m03) DBG | I0819 18:03:37.961291  391617 retry.go:31] will retry after 4.221691155s: waiting for machine to come up
	I0819 18:03:42.184935  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:42.185477  390826 main.go:141] libmachine: (ha-086149-m03) Found IP for machine: 192.168.39.121
	I0819 18:03:42.185514  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has current primary IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:42.185524  390826 main.go:141] libmachine: (ha-086149-m03) Reserving static IP address...
	I0819 18:03:42.186031  390826 main.go:141] libmachine: (ha-086149-m03) DBG | unable to find host DHCP lease matching {name: "ha-086149-m03", mac: "52:54:00:dc:29:16", ip: "192.168.39.121"} in network mk-ha-086149
	I0819 18:03:42.261896  390826 main.go:141] libmachine: (ha-086149-m03) DBG | Getting to WaitForSSH function...
	I0819 18:03:42.261933  390826 main.go:141] libmachine: (ha-086149-m03) Reserved static IP address: 192.168.39.121
	I0819 18:03:42.261942  390826 main.go:141] libmachine: (ha-086149-m03) Waiting for SSH to be available...
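
For context, the "will retry after ...: waiting for machine to come up" lines above are an instance of jittered-backoff polling against the DHCP lease table. Below is a minimal Go sketch of that pattern; waitForIP, the backoff constants, and the fake lookup in main are illustrative assumptions, not minikube's actual retry helper.

// retry_sketch.go - jittered-backoff polling, as in the lease lookups above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address, sleeping an increasing,
// jittered duration between attempts (assumed constants, for illustration).
func waitForIP(lookup func() (string, error), maxAttempts int) (string, error) {
	backoff := 500 * time.Millisecond
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		// add up to 50% jitter so parallel machine creations don't poll in lockstep
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)/2))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		backoff = backoff * 3 / 2
	}
	return "", errors.New("machine never reported an IP address")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no DHCP lease yet")
		}
		return "192.168.39.121", nil // the address the log eventually finds
	}, 10)
	fmt.Println(ip, err)
}
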
	I0819 18:03:42.264703  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:42.265036  390826 main.go:141] libmachine: (ha-086149-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149
	I0819 18:03:42.265064  390826 main.go:141] libmachine: (ha-086149-m03) DBG | unable to find defined IP address of network mk-ha-086149 interface with MAC address 52:54:00:dc:29:16
	I0819 18:03:42.265234  390826 main.go:141] libmachine: (ha-086149-m03) DBG | Using SSH client type: external
	I0819 18:03:42.265265  390826 main.go:141] libmachine: (ha-086149-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m03/id_rsa (-rw-------)
	I0819 18:03:42.265301  390826 main.go:141] libmachine: (ha-086149-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 18:03:42.265318  390826 main.go:141] libmachine: (ha-086149-m03) DBG | About to run SSH command:
	I0819 18:03:42.265333  390826 main.go:141] libmachine: (ha-086149-m03) DBG | exit 0
	I0819 18:03:42.268920  390826 main.go:141] libmachine: (ha-086149-m03) DBG | SSH cmd err, output: exit status 255: 
	I0819 18:03:42.268942  390826 main.go:141] libmachine: (ha-086149-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0819 18:03:42.268951  390826 main.go:141] libmachine: (ha-086149-m03) DBG | command : exit 0
	I0819 18:03:42.268956  390826 main.go:141] libmachine: (ha-086149-m03) DBG | err     : exit status 255
	I0819 18:03:42.268988  390826 main.go:141] libmachine: (ha-086149-m03) DBG | output  : 
	I0819 18:03:45.269987  390826 main.go:141] libmachine: (ha-086149-m03) DBG | Getting to WaitForSSH function...
	I0819 18:03:45.272426  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:45.272810  390826 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:03:45.272864  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:45.272961  390826 main.go:141] libmachine: (ha-086149-m03) DBG | Using SSH client type: external
	I0819 18:03:45.272995  390826 main.go:141] libmachine: (ha-086149-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m03/id_rsa (-rw-------)
	I0819 18:03:45.273024  390826 main.go:141] libmachine: (ha-086149-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.121 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 18:03:45.273037  390826 main.go:141] libmachine: (ha-086149-m03) DBG | About to run SSH command:
	I0819 18:03:45.273054  390826 main.go:141] libmachine: (ha-086149-m03) DBG | exit 0
	I0819 18:03:45.399822  390826 main.go:141] libmachine: (ha-086149-m03) DBG | SSH cmd err, output: <nil>: 
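
The WaitForSSH phase above shells out to an external ssh binary with a fixed option list and probes the guest with "exit 0" (status 255 until sshd is reachable, then a clean exit). A minimal sketch of that probe follows; runOverSSH and the hard-coded paths in main are illustrative, and the option list is trimmed to the essentials rather than the full set logged above.

// external_ssh_sketch.go - probing a guest over an external ssh binary.
package main

import (
	"fmt"
	"os/exec"
)

func runOverSSH(ip, keyPath, command string) (string, error) {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		command,
	}
	out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
	return string(out), err
}

func main() {
	// "exit 0" is the same liveness probe the log uses: a reachable host with a
	// working account returns status 0, an unreachable one returns status 255.
	out, err := runOverSSH("192.168.39.121", "/path/to/id_rsa", "exit 0")
	fmt.Printf("output=%q err=%v\n", out, err)
}
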
	I0819 18:03:45.400094  390826 main.go:141] libmachine: (ha-086149-m03) KVM machine creation complete!
	I0819 18:03:45.400461  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetConfigRaw
	I0819 18:03:45.401263  390826 main.go:141] libmachine: (ha-086149-m03) Calling .DriverName
	I0819 18:03:45.401502  390826 main.go:141] libmachine: (ha-086149-m03) Calling .DriverName
	I0819 18:03:45.401684  390826 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 18:03:45.401702  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetState
	I0819 18:03:45.403056  390826 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 18:03:45.403074  390826 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 18:03:45.403087  390826 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 18:03:45.403100  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHHostname
	I0819 18:03:45.405437  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:45.405814  390826 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:03:45.405918  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHPort
	I0819 18:03:45.405924  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:45.406093  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHKeyPath
	I0819 18:03:45.406253  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHKeyPath
	I0819 18:03:45.406386  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHUsername
	I0819 18:03:45.406565  390826 main.go:141] libmachine: Using SSH client type: native
	I0819 18:03:45.406993  390826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0819 18:03:45.407014  390826 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 18:03:45.507039  390826 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 18:03:45.507061  390826 main.go:141] libmachine: Detecting the provisioner...
	I0819 18:03:45.507069  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHHostname
	I0819 18:03:45.509836  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:45.510167  390826 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:03:45.510202  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:45.510329  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHPort
	I0819 18:03:45.510518  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHKeyPath
	I0819 18:03:45.510702  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHKeyPath
	I0819 18:03:45.510843  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHUsername
	I0819 18:03:45.511049  390826 main.go:141] libmachine: Using SSH client type: native
	I0819 18:03:45.511259  390826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0819 18:03:45.511273  390826 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 18:03:45.612553  390826 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 18:03:45.612627  390826 main.go:141] libmachine: found compatible host: buildroot
	I0819 18:03:45.612636  390826 main.go:141] libmachine: Provisioning with buildroot...
	I0819 18:03:45.612648  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetMachineName
	I0819 18:03:45.612913  390826 buildroot.go:166] provisioning hostname "ha-086149-m03"
	I0819 18:03:45.612940  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetMachineName
	I0819 18:03:45.613126  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHHostname
	I0819 18:03:45.616510  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:45.616855  390826 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:03:45.616877  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:45.617041  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHPort
	I0819 18:03:45.617258  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHKeyPath
	I0819 18:03:45.617452  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHKeyPath
	I0819 18:03:45.617602  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHUsername
	I0819 18:03:45.617764  390826 main.go:141] libmachine: Using SSH client type: native
	I0819 18:03:45.617953  390826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0819 18:03:45.617968  390826 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-086149-m03 && echo "ha-086149-m03" | sudo tee /etc/hostname
	I0819 18:03:45.737142  390826 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-086149-m03
	
	I0819 18:03:45.737171  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHHostname
	I0819 18:03:45.739860  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:45.740210  390826 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:03:45.740238  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:45.740391  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHPort
	I0819 18:03:45.740585  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHKeyPath
	I0819 18:03:45.740744  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHKeyPath
	I0819 18:03:45.740913  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHUsername
	I0819 18:03:45.741112  390826 main.go:141] libmachine: Using SSH client type: native
	I0819 18:03:45.741291  390826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0819 18:03:45.741307  390826 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-086149-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-086149-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-086149-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 18:03:45.854110  390826 main.go:141] libmachine: SSH cmd err, output: <nil>: 
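
The hostname step above is two remote commands: set /etc/hostname, then rewrite or append the 127.0.1.1 entry in /etc/hosts. A minimal Go sketch that renders the same shell snippet for an arbitrary node name is below; generateHostsScript is an illustrative helper, not minikube's own function.

// set_hostname_sketch.go - rendering the hostname + /etc/hosts update script.
package main

import "fmt"

func generateHostsScript(name string) string {
	return fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name)
}

func main() {
	fmt.Println(generateHostsScript("ha-086149-m03"))
}
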
	I0819 18:03:45.854149  390826 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19468-372744/.minikube CaCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19468-372744/.minikube}
	I0819 18:03:45.854177  390826 buildroot.go:174] setting up certificates
	I0819 18:03:45.854191  390826 provision.go:84] configureAuth start
	I0819 18:03:45.854211  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetMachineName
	I0819 18:03:45.854510  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetIP
	I0819 18:03:45.857102  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:45.857533  390826 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:03:45.857565  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:45.857619  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHHostname
	I0819 18:03:45.859546  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:45.859906  390826 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:03:45.859935  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:45.860102  390826 provision.go:143] copyHostCerts
	I0819 18:03:45.860136  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 18:03:45.860178  390826 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem, removing ...
	I0819 18:03:45.860194  390826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 18:03:45.860304  390826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem (1082 bytes)
	I0819 18:03:45.860406  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 18:03:45.860434  390826 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem, removing ...
	I0819 18:03:45.860444  390826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 18:03:45.860484  390826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem (1123 bytes)
	I0819 18:03:45.860554  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 18:03:45.860577  390826 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem, removing ...
	I0819 18:03:45.860585  390826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 18:03:45.860612  390826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem (1675 bytes)
	I0819 18:03:45.860669  390826 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem org=jenkins.ha-086149-m03 san=[127.0.0.1 192.168.39.121 ha-086149-m03 localhost minikube]
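
The server cert above is issued with a SAN list covering the node's loopback address, its machine IP, and its hostname aliases. The sketch below generates a certificate with that SAN list using Go's crypto/x509; it is self-signed for brevity (the real flow signs with the cluster CA key/cert), and the three-year validity is an assumption.

// server_cert_sketch.go - a server certificate with the logged SAN list.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-086149-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0), // assumed validity window
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san=[...] list in the log line above
		DNSNames:    []string{"ha-086149-m03", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.121")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
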
	I0819 18:03:46.063456  390826 provision.go:177] copyRemoteCerts
	I0819 18:03:46.063521  390826 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 18:03:46.063548  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHHostname
	I0819 18:03:46.066201  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:46.066553  390826 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:03:46.066592  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:46.066809  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHPort
	I0819 18:03:46.067042  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHKeyPath
	I0819 18:03:46.067205  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHUsername
	I0819 18:03:46.067339  390826 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m03/id_rsa Username:docker}
	I0819 18:03:46.145768  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 18:03:46.145857  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 18:03:46.170315  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 18:03:46.170396  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 18:03:46.195880  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 18:03:46.195969  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 18:03:46.220716  390826 provision.go:87] duration metric: took 366.505975ms to configureAuth
	I0819 18:03:46.220747  390826 buildroot.go:189] setting minikube options for container-runtime
	I0819 18:03:46.221026  390826 config.go:182] Loaded profile config "ha-086149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:03:46.221124  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHHostname
	I0819 18:03:46.223980  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:46.224366  390826 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:03:46.224397  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:46.224551  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHPort
	I0819 18:03:46.224742  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHKeyPath
	I0819 18:03:46.224948  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHKeyPath
	I0819 18:03:46.225092  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHUsername
	I0819 18:03:46.225286  390826 main.go:141] libmachine: Using SSH client type: native
	I0819 18:03:46.225484  390826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0819 18:03:46.225500  390826 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 18:03:46.486402  390826 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 18:03:46.486443  390826 main.go:141] libmachine: Checking connection to Docker...
	I0819 18:03:46.486455  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetURL
	I0819 18:03:46.487831  390826 main.go:141] libmachine: (ha-086149-m03) DBG | Using libvirt version 6000000
	I0819 18:03:46.489937  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:46.490329  390826 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:03:46.490355  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:46.490545  390826 main.go:141] libmachine: Docker is up and running!
	I0819 18:03:46.490562  390826 main.go:141] libmachine: Reticulating splines...
	I0819 18:03:46.490570  390826 client.go:171] duration metric: took 26.727760379s to LocalClient.Create
	I0819 18:03:46.490594  390826 start.go:167] duration metric: took 26.727824625s to libmachine.API.Create "ha-086149"
	I0819 18:03:46.490604  390826 start.go:293] postStartSetup for "ha-086149-m03" (driver="kvm2")
	I0819 18:03:46.490614  390826 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 18:03:46.490640  390826 main.go:141] libmachine: (ha-086149-m03) Calling .DriverName
	I0819 18:03:46.490898  390826 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 18:03:46.490925  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHHostname
	I0819 18:03:46.493180  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:46.493483  390826 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:03:46.493513  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:46.493680  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHPort
	I0819 18:03:46.493889  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHKeyPath
	I0819 18:03:46.494032  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHUsername
	I0819 18:03:46.494164  390826 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m03/id_rsa Username:docker}
	I0819 18:03:46.574090  390826 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 18:03:46.578707  390826 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 18:03:46.578740  390826 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/addons for local assets ...
	I0819 18:03:46.578823  390826 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/files for local assets ...
	I0819 18:03:46.578922  390826 filesync.go:149] local asset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> 3800092.pem in /etc/ssl/certs
	I0819 18:03:46.578936  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> /etc/ssl/certs/3800092.pem
	I0819 18:03:46.579074  390826 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 18:03:46.588838  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 18:03:46.613093  390826 start.go:296] duration metric: took 122.46782ms for postStartSetup
	I0819 18:03:46.613152  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetConfigRaw
	I0819 18:03:46.613789  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetIP
	I0819 18:03:46.616297  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:46.616623  390826 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:03:46.616654  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:46.616956  390826 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/config.json ...
	I0819 18:03:46.617168  390826 start.go:128] duration metric: took 26.873605845s to createHost
	I0819 18:03:46.617195  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHHostname
	I0819 18:03:46.619322  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:46.619667  390826 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:03:46.619714  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:46.619818  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHPort
	I0819 18:03:46.619992  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHKeyPath
	I0819 18:03:46.620150  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHKeyPath
	I0819 18:03:46.620300  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHUsername
	I0819 18:03:46.620482  390826 main.go:141] libmachine: Using SSH client type: native
	I0819 18:03:46.620675  390826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0819 18:03:46.620696  390826 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 18:03:46.724518  390826 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724090626.701943273
	
	I0819 18:03:46.724543  390826 fix.go:216] guest clock: 1724090626.701943273
	I0819 18:03:46.724553  390826 fix.go:229] Guest: 2024-08-19 18:03:46.701943273 +0000 UTC Remote: 2024-08-19 18:03:46.61718268 +0000 UTC m=+152.411079094 (delta=84.760593ms)
	I0819 18:03:46.724574  390826 fix.go:200] guest clock delta is within tolerance: 84.760593ms
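
The guest-clock check above runs `date +%s.%N` in the VM and compares the result with the host clock (delta 84.760593ms here, inside tolerance). A minimal sketch of the same comparison is below; parseGuestClock and the 2-second tolerance are illustrative assumptions.

// clock_skew_sketch.go - parse `date +%s.%N` output and measure skew.
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

func parseGuestClock(output string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(output), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	nsec := int64(0)
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1724090626.701943273") // value from the log
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	fmt.Printf("guest clock delta: %s\n", delta)
	if math.Abs(delta.Seconds()) > 2 {
		fmt.Println("guest clock outside tolerance, would resync with host")
	}
}
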
	I0819 18:03:46.724581  390826 start.go:83] releasing machines lock for "ha-086149-m03", held for 26.981173416s
	I0819 18:03:46.724610  390826 main.go:141] libmachine: (ha-086149-m03) Calling .DriverName
	I0819 18:03:46.724908  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetIP
	I0819 18:03:46.727738  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:46.728140  390826 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:03:46.728175  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:46.730672  390826 out.go:177] * Found network options:
	I0819 18:03:46.731980  390826 out.go:177]   - NO_PROXY=192.168.39.249,192.168.39.167
	W0819 18:03:46.733217  390826 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 18:03:46.733257  390826 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 18:03:46.733282  390826 main.go:141] libmachine: (ha-086149-m03) Calling .DriverName
	I0819 18:03:46.733975  390826 main.go:141] libmachine: (ha-086149-m03) Calling .DriverName
	I0819 18:03:46.734206  390826 main.go:141] libmachine: (ha-086149-m03) Calling .DriverName
	I0819 18:03:46.734324  390826 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 18:03:46.734366  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHHostname
	W0819 18:03:46.734442  390826 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 18:03:46.734468  390826 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 18:03:46.734544  390826 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 18:03:46.734569  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHHostname
	I0819 18:03:46.737322  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:46.737704  390826 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:03:46.737739  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:46.737759  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:46.737855  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHPort
	I0819 18:03:46.738053  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHKeyPath
	I0819 18:03:46.738151  390826 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:03:46.738176  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:46.738260  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHUsername
	I0819 18:03:46.738362  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHPort
	I0819 18:03:46.738441  390826 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m03/id_rsa Username:docker}
	I0819 18:03:46.738514  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHKeyPath
	I0819 18:03:46.738666  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHUsername
	I0819 18:03:46.738792  390826 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m03/id_rsa Username:docker}
	I0819 18:03:46.968222  390826 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 18:03:46.975455  390826 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 18:03:46.975532  390826 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 18:03:46.994322  390826 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 18:03:46.994347  390826 start.go:495] detecting cgroup driver to use...
	I0819 18:03:46.994414  390826 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 18:03:47.011730  390826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 18:03:47.026577  390826 docker.go:217] disabling cri-docker service (if available) ...
	I0819 18:03:47.026633  390826 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 18:03:47.041533  390826 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 18:03:47.056162  390826 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 18:03:47.167389  390826 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 18:03:47.313793  390826 docker.go:233] disabling docker service ...
	I0819 18:03:47.313873  390826 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 18:03:47.328361  390826 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 18:03:47.342498  390826 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 18:03:47.493438  390826 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 18:03:47.610714  390826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 18:03:47.626461  390826 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 18:03:47.647036  390826 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 18:03:47.647094  390826 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:03:47.659477  390826 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 18:03:47.659549  390826 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:03:47.670849  390826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:03:47.681739  390826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:03:47.692596  390826 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 18:03:47.704404  390826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:03:47.715964  390826 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:03:47.734064  390826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:03:47.745411  390826 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 18:03:47.755479  390826 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 18:03:47.755547  390826 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 18:03:47.780800  390826 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 18:03:47.793377  390826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:03:47.933910  390826 ssh_runner.go:195] Run: sudo systemctl restart crio
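
The CRI-O setup above is a series of in-place sed edits to /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, sysctls) followed by a daemon-reload and restart. The sketch below reproduces the first two rewrites with Go regexps over an in-memory config; the sample config literal and configureCrio are illustrative only.

// crio_config_sketch.go - the pause_image / cgroup_manager rewrites above.
package main

import (
	"fmt"
	"regexp"
)

func configureCrio(conf, pauseImage, cgroupManager string) string {
	// same intent as: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	// same intent as: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "..."|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
	return conf
}

func main() {
	sample := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n" +
		"[crio.runtime]\ncgroup_manager = \"systemd\"\n"
	fmt.Print(configureCrio(sample, "registry.k8s.io/pause:3.10", "cgroupfs"))
}
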
	I0819 18:03:48.078348  390826 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 18:03:48.078455  390826 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 18:03:48.083459  390826 start.go:563] Will wait 60s for crictl version
	I0819 18:03:48.083519  390826 ssh_runner.go:195] Run: which crictl
	I0819 18:03:48.087505  390826 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 18:03:48.135923  390826 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 18:03:48.136006  390826 ssh_runner.go:195] Run: crio --version
	I0819 18:03:48.165703  390826 ssh_runner.go:195] Run: crio --version
	I0819 18:03:48.199600  390826 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 18:03:48.200917  390826 out.go:177]   - env NO_PROXY=192.168.39.249
	I0819 18:03:48.202367  390826 out.go:177]   - env NO_PROXY=192.168.39.249,192.168.39.167
	I0819 18:03:48.203631  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetIP
	I0819 18:03:48.206345  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:48.206716  390826 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:03:48.206749  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:48.206952  390826 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 18:03:48.211794  390826 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 18:03:48.224853  390826 mustload.go:65] Loading cluster: ha-086149
	I0819 18:03:48.225134  390826 config.go:182] Loaded profile config "ha-086149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:03:48.225493  390826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:03:48.225551  390826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:03:48.241022  390826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34715
	I0819 18:03:48.241501  390826 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:03:48.241979  390826 main.go:141] libmachine: Using API Version  1
	I0819 18:03:48.241998  390826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:03:48.242413  390826 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:03:48.242604  390826 main.go:141] libmachine: (ha-086149) Calling .GetState
	I0819 18:03:48.244144  390826 host.go:66] Checking if "ha-086149" exists ...
	I0819 18:03:48.244541  390826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:03:48.244585  390826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:03:48.259491  390826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46449
	I0819 18:03:48.260161  390826 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:03:48.260668  390826 main.go:141] libmachine: Using API Version  1
	I0819 18:03:48.260695  390826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:03:48.261068  390826 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:03:48.261282  390826 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:03:48.261480  390826 certs.go:68] Setting up /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149 for IP: 192.168.39.121
	I0819 18:03:48.261491  390826 certs.go:194] generating shared ca certs ...
	I0819 18:03:48.261509  390826 certs.go:226] acquiring lock for ca certs: {Name:mk639e03f593e0bccac045f6e9f5ba3b96cc81e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:03:48.261630  390826 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key
	I0819 18:03:48.261673  390826 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key
	I0819 18:03:48.261682  390826 certs.go:256] generating profile certs ...
	I0819 18:03:48.261752  390826 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/client.key
	I0819 18:03:48.261775  390826 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key.e2003681
	I0819 18:03:48.261790  390826 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt.e2003681 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.249 192.168.39.167 192.168.39.121 192.168.39.254]
	I0819 18:03:48.530583  390826 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt.e2003681 ...
	I0819 18:03:48.530617  390826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt.e2003681: {Name:mk6e3f1430e8073774c0e837d2d1e72b4e3b6cd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:03:48.530786  390826 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key.e2003681 ...
	I0819 18:03:48.530801  390826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key.e2003681: {Name:mk5c3eff97ebe025fa66882eab16f0ed1dc1cd31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:03:48.530873  390826 certs.go:381] copying /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt.e2003681 -> /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt
	I0819 18:03:48.531012  390826 certs.go:385] copying /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key.e2003681 -> /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key
	I0819 18:03:48.531151  390826 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/proxy-client.key
	I0819 18:03:48.531169  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 18:03:48.531183  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 18:03:48.531196  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 18:03:48.531209  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 18:03:48.531221  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 18:03:48.531234  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 18:03:48.531249  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 18:03:48.531263  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 18:03:48.531311  390826 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem (1338 bytes)
	W0819 18:03:48.531339  390826 certs.go:480] ignoring /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009_empty.pem, impossibly tiny 0 bytes
	I0819 18:03:48.531349  390826 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 18:03:48.531368  390826 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem (1082 bytes)
	I0819 18:03:48.531389  390826 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem (1123 bytes)
	I0819 18:03:48.531409  390826 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem (1675 bytes)
	I0819 18:03:48.531449  390826 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 18:03:48.531480  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> /usr/share/ca-certificates/3800092.pem
	I0819 18:03:48.531494  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:03:48.531512  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem -> /usr/share/ca-certificates/380009.pem
	I0819 18:03:48.531547  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:03:48.535035  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:03:48.535535  390826 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:03:48.535560  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:03:48.535798  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:03:48.536047  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:03:48.536234  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:03:48.536390  390826 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/id_rsa Username:docker}
	I0819 18:03:48.608114  390826 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0819 18:03:48.613669  390826 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0819 18:03:48.625657  390826 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0819 18:03:48.629793  390826 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0819 18:03:48.640760  390826 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0819 18:03:48.644960  390826 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0819 18:03:48.656070  390826 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0819 18:03:48.662684  390826 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0819 18:03:48.674212  390826 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0819 18:03:48.678812  390826 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0819 18:03:48.690281  390826 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0819 18:03:48.694691  390826 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0819 18:03:48.705848  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 18:03:48.734198  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 18:03:48.758895  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 18:03:48.785766  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 18:03:48.810763  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0819 18:03:48.835521  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 18:03:48.862239  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 18:03:48.887336  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 18:03:48.913014  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /usr/share/ca-certificates/3800092.pem (1708 bytes)
	I0819 18:03:48.939494  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 18:03:48.966050  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem --> /usr/share/ca-certificates/380009.pem (1338 bytes)
	I0819 18:03:48.992403  390826 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0819 18:03:49.010524  390826 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0819 18:03:49.028443  390826 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0819 18:03:49.046238  390826 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0819 18:03:49.064239  390826 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0819 18:03:49.083179  390826 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0819 18:03:49.100385  390826 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0819 18:03:49.118509  390826 ssh_runner.go:195] Run: openssl version
	I0819 18:03:49.124644  390826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3800092.pem && ln -fs /usr/share/ca-certificates/3800092.pem /etc/ssl/certs/3800092.pem"
	I0819 18:03:49.135796  390826 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3800092.pem
	I0819 18:03:49.140415  390826 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:56 /usr/share/ca-certificates/3800092.pem
	I0819 18:03:49.140488  390826 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3800092.pem
	I0819 18:03:49.146811  390826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3800092.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 18:03:49.159207  390826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 18:03:49.171214  390826 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:03:49.176781  390826 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:03:49.176860  390826 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:03:49.182907  390826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 18:03:49.194856  390826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/380009.pem && ln -fs /usr/share/ca-certificates/380009.pem /etc/ssl/certs/380009.pem"
	I0819 18:03:49.207429  390826 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/380009.pem
	I0819 18:03:49.212225  390826 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:56 /usr/share/ca-certificates/380009.pem
	I0819 18:03:49.212307  390826 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/380009.pem
	I0819 18:03:49.218334  390826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/380009.pem /etc/ssl/certs/51391683.0"
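
The openssl/ln pairs above install each CA under the subject-hash filename that OpenSSL-style clients scan for, so every trust anchor is reachable as /etc/ssl/certs/<hash>.0. A minimal local Go sketch of the same convention follows; the paths are examples only, and minikube itself runs these commands over SSH as the log shows rather than using this code.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert mirrors the openssl/ln pairs in the log: the certificate becomes
// reachable as <certsDir>/<subject-hash>.0, the filename OpenSSL clients look up.
func linkCACert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // emulate ln -f: replace an existing link if present
	return os.Symlink(certPath, link)
}

func main() {
	// Example invocation; writing to /etc/ssl/certs requires root.
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
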
	I0819 18:03:49.229746  390826 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 18:03:49.234052  390826 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 18:03:49.234124  390826 kubeadm.go:934] updating node {m03 192.168.39.121 8443 v1.31.0 crio true true} ...
	I0819 18:03:49.234234  390826 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-086149-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.121
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-086149 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
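
The generated unit above follows the standard systemd drop-in pattern: the bare ExecStart= clears the packaged command so the following ExecStart= line fully replaces it, pinning the kubelet binary version, hostname override and node IP for this machine. A hedged sketch of rendering such a drop-in from a template; the template text and field names are illustrative, not minikube's actual template.

package main

import (
	"os"
	"text/template"
)

// The empty ExecStart= line is the systemd convention for clearing the packaged
// command before overriding it in a drop-in; values below are examples.
const dropIn = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Node}} --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	_ = t.Execute(os.Stdout, map[string]string{
		"Runtime": "crio",
		"Version": "v1.31.0",
		"Node":    "ha-086149-m03",
		"IP":      "192.168.39.121",
	})
}
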
	I0819 18:03:49.234272  390826 kube-vip.go:115] generating kube-vip config ...
	I0819 18:03:49.234320  390826 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 18:03:49.252054  390826 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 18:03:49.252251  390826 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
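
The manifest above is the static pod that minikube copies to /etc/kubernetes/manifests on each control-plane node (see the kube-vip.yaml scp a few lines below): the kube-vip instances compete for the plndr-cp-lock lease, and the current leader answers ARP for the shared address 192.168.39.254 on eth0 and load-balances API traffic on port 8443. A small probe sketch, assuming it is run from a host that can reach the VIP; InsecureSkipVerify is used because the probe only checks that something is holding the VIP and terminating TLS, not the trust chain.

package main

import (
	"crypto/tls"
	"fmt"
	"net"
	"time"
)

func main() {
	// 192.168.39.254:8443 is the APIServerHAVIP and port from the config above.
	d := net.Dialer{Timeout: 3 * time.Second}
	conn, err := tls.DialWithDialer(&d, "tcp", "192.168.39.254:8443", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("VIP not answering:", err)
		return
	}
	defer conn.Close()
	fmt.Println("VIP held, served by:", conn.ConnectionState().PeerCertificates[0].Subject)
}
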
	I0819 18:03:49.252340  390826 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 18:03:49.265031  390826 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0819 18:03:49.265106  390826 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0819 18:03:49.276590  390826 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0819 18:03:49.276599  390826 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0819 18:03:49.276630  390826 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0819 18:03:49.276667  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 18:03:49.276680  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 18:03:49.276653  390826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:03:49.276757  390826 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 18:03:49.276758  390826 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 18:03:49.293522  390826 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0819 18:03:49.293549  390826 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0819 18:03:49.293557  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 18:03:49.293571  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0819 18:03:49.293576  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0819 18:03:49.293643  390826 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 18:03:49.322684  390826 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0819 18:03:49.322738  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
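
The binaries.go lines above fetch kubelet, kubectl and kubeadm from dl.k8s.io, verifying each against its published .sha256 file before placing it under /var/lib/minikube/binaries. A self-contained sketch of that download-and-verify step; the URLs and destination path are examples, not minikube's downloader.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetchVerified downloads url to dest and checks it against the hex SHA-256
// published at sumURL, the same "?checksum=file:<url>.sha256" idea logged above.
func fetchVerified(url, sumURL, dest string) error {
	sumResp, err := http.Get(sumURL)
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	sumBytes, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}
	want := strings.Fields(string(sumBytes))[0]

	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()

	// Hash while writing so the file is read only once.
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s want %s", got, want)
	}
	return nil
}

func main() {
	err := fetchVerified(
		"https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet",
		"https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256",
		"/tmp/kubelet",
	)
	fmt.Println(err)
}
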
	I0819 18:03:50.233307  390826 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0819 18:03:50.244938  390826 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0819 18:03:50.263266  390826 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 18:03:50.282149  390826 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0819 18:03:50.300705  390826 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0819 18:03:50.304802  390826 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 18:03:50.319084  390826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:03:50.459263  390826 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 18:03:50.478155  390826 host.go:66] Checking if "ha-086149" exists ...
	I0819 18:03:50.478705  390826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:03:50.478766  390826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:03:50.495371  390826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36785
	I0819 18:03:50.495861  390826 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:03:50.496418  390826 main.go:141] libmachine: Using API Version  1
	I0819 18:03:50.496447  390826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:03:50.496782  390826 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:03:50.497071  390826 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:03:50.497222  390826 start.go:317] joinCluster: &{Name:ha-086149 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-086149 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.167 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.121 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:03:50.497453  390826 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0819 18:03:50.497477  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:03:50.500909  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:03:50.501506  390826 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:03:50.501545  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:03:50.501750  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:03:50.501950  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:03:50.502104  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:03:50.502278  390826 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/id_rsa Username:docker}
	I0819 18:03:50.646804  390826 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.121 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 18:03:50.646866  390826 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 6x5lez.k6vwxltnwheu1hpl --discovery-token-ca-cert-hash sha256:3fcbd90565c5acbc36a47b2db682cb22dce9b172c9bf3af21e506ebb67608039 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-086149-m03 --control-plane --apiserver-advertise-address=192.168.39.121 --apiserver-bind-port=8443"
	I0819 18:04:12.687591  390826 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 6x5lez.k6vwxltnwheu1hpl --discovery-token-ca-cert-hash sha256:3fcbd90565c5acbc36a47b2db682cb22dce9b172c9bf3af21e506ebb67608039 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-086149-m03 --control-plane --apiserver-advertise-address=192.168.39.121 --apiserver-bind-port=8443": (22.040692153s)
	I0819 18:04:12.687633  390826 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0819 18:04:13.312539  390826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-086149-m03 minikube.k8s.io/updated_at=2024_08_19T18_04_13_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411 minikube.k8s.io/name=ha-086149 minikube.k8s.io/primary=false
	I0819 18:04:13.457097  390826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-086149-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0819 18:04:13.569807  390826 start.go:319] duration metric: took 23.072581927s to joinCluster
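
The kubeadm join above authenticates to the cluster with a short-lived bootstrap token and pins the cluster CA via --discovery-token-ca-cert-hash, which is the SHA-256 of the CA certificate's DER-encoded Subject Public Key Info; once the join completes, the new node is labelled and its control-plane NoSchedule taint is removed, as the two kubectl invocations show. A sketch that recomputes such a hash from a CA certificate file follows; the path is an example.

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path is an example; any copy of the cluster CA certificate works.
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "not a PEM certificate")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// kubeadm's discovery hash is the SHA-256 of the DER-encoded SubjectPublicKeyInfo.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	sum := sha256.Sum256(spki)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}
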
	I0819 18:04:13.569882  390826 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.121 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 18:04:13.570288  390826 config.go:182] Loaded profile config "ha-086149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:04:13.572111  390826 out.go:177] * Verifying Kubernetes components...
	I0819 18:04:13.573929  390826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:04:13.828073  390826 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 18:04:13.844916  390826 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 18:04:13.845293  390826 kapi.go:59] client config for ha-086149: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/client.crt", KeyFile:"/home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/client.key", CAFile:"/home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f18d20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0819 18:04:13.845381  390826 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.249:8443
	I0819 18:04:13.845704  390826 node_ready.go:35] waiting up to 6m0s for node "ha-086149-m03" to be "Ready" ...
	I0819 18:04:13.845814  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:13.845826  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:13.845838  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:13.845850  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:13.849539  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:14.346905  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:14.346934  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:14.346947  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:14.346952  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:14.350340  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:14.846036  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:14.846068  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:14.846079  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:14.846084  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:14.849971  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:15.346541  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:15.346565  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:15.346574  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:15.346578  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:15.350495  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:15.846729  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:15.846753  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:15.846762  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:15.846767  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:15.850076  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:15.850601  390826 node_ready.go:53] node "ha-086149-m03" has status "Ready":"False"
	I0819 18:04:16.346369  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:16.346397  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:16.346408  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:16.346414  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:16.349512  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:16.846583  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:16.846602  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:16.846611  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:16.846615  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:16.850015  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:17.346969  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:17.346998  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:17.347017  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:17.347026  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:17.350830  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:17.846710  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:17.846733  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:17.846742  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:17.846748  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:17.853356  390826 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0819 18:04:17.853948  390826 node_ready.go:53] node "ha-086149-m03" has status "Ready":"False"
	I0819 18:04:18.346914  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:18.346939  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:18.346948  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:18.346951  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:18.350629  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:18.846873  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:18.846901  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:18.846911  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:18.846916  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:18.850632  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:19.346761  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:19.346789  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:19.346803  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:19.346809  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:19.350515  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:19.846438  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:19.846462  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:19.846471  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:19.846475  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:19.850219  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:20.346791  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:20.346818  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:20.346827  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:20.346831  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:20.351232  390826 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 18:04:20.351910  390826 node_ready.go:53] node "ha-086149-m03" has status "Ready":"False"
	I0819 18:04:20.846718  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:20.846742  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:20.846751  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:20.846754  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:20.850708  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:21.346270  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:21.346306  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:21.346319  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:21.346325  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:21.350140  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:21.846014  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:21.846042  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:21.846055  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:21.846062  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:21.849821  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:22.346851  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:22.346874  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:22.346890  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:22.346896  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:22.350181  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:22.846211  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:22.846234  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:22.846244  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:22.846248  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:22.850284  390826 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 18:04:22.851170  390826 node_ready.go:53] node "ha-086149-m03" has status "Ready":"False"
	I0819 18:04:23.346555  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:23.346581  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:23.346591  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:23.346596  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:23.350737  390826 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 18:04:23.846961  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:23.846985  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:23.846993  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:23.846996  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:23.850329  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:24.346793  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:24.346823  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:24.346834  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:24.346840  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:24.350007  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:24.846921  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:24.846944  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:24.846952  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:24.846956  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:24.850374  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:25.346991  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:25.347016  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:25.347027  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:25.347034  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:25.350611  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:25.351307  390826 node_ready.go:53] node "ha-086149-m03" has status "Ready":"False"
	I0819 18:04:25.846064  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:25.846088  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:25.846096  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:25.846100  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:25.849560  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:26.346018  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:26.346042  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:26.346051  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:26.346056  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:26.349570  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:26.846165  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:26.846192  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:26.846201  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:26.846204  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:26.849609  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:27.346757  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:27.346784  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:27.346795  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:27.346801  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:27.350244  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:27.846196  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:27.846226  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:27.846238  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:27.846246  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:27.854340  390826 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0819 18:04:27.854975  390826 node_ready.go:53] node "ha-086149-m03" has status "Ready":"False"
	I0819 18:04:28.346127  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:28.346152  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:28.346161  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:28.346165  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:28.349629  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:28.846420  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:28.846448  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:28.846458  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:28.846464  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:28.849925  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:29.345989  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:29.346015  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:29.346023  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:29.346027  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:29.349422  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:29.846338  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:29.846361  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:29.846369  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:29.846373  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:29.849856  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:30.346240  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:30.346265  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:30.346276  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:30.346283  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:30.349830  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:30.350507  390826 node_ready.go:53] node "ha-086149-m03" has status "Ready":"False"
	I0819 18:04:30.846234  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:30.846260  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:30.846269  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:30.846274  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:30.849946  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:31.346467  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:31.346492  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:31.346501  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:31.346506  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:31.350232  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:31.846155  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:31.846181  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:31.846190  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:31.846195  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:31.849420  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:32.346432  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:32.346455  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:32.346464  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:32.346469  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:32.350283  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:32.351930  390826 node_ready.go:49] node "ha-086149-m03" has status "Ready":"True"
	I0819 18:04:32.351973  390826 node_ready.go:38] duration metric: took 18.506247273s for node "ha-086149-m03" to be "Ready" ...
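
The wait above is a plain poll: roughly every 500ms minikube GETs /api/v1/nodes/ha-086149-m03 and checks whether the NodeReady condition has turned True, giving up after 6m0s. An equivalent client-go sketch is below; the kubeconfig path is a placeholder, and minikube's own code uses its round-tripper logging rather than this exact loop.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, "ha-086149-m03", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for node readiness")
			return
		case <-time.After(500 * time.Millisecond):
		}
	}
}
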
	I0819 18:04:32.351987  390826 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 18:04:32.352088  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0819 18:04:32.352101  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:32.352112  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:32.352118  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:32.359889  390826 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0819 18:04:32.366650  390826 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-8fjpd" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:32.366736  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-8fjpd
	I0819 18:04:32.366744  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:32.366752  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:32.366755  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:32.369474  390826 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:04:32.369992  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149
	I0819 18:04:32.370007  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:32.370015  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:32.370018  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:32.372990  390826 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:04:32.373543  390826 pod_ready.go:93] pod "coredns-6f6b679f8f-8fjpd" in "kube-system" namespace has status "Ready":"True"
	I0819 18:04:32.373566  390826 pod_ready.go:82] duration metric: took 6.888361ms for pod "coredns-6f6b679f8f-8fjpd" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:32.373579  390826 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-p65cb" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:32.373647  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-p65cb
	I0819 18:04:32.373658  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:32.373667  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:32.373687  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:32.376325  390826 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:04:32.376857  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149
	I0819 18:04:32.376872  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:32.376880  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:32.376884  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:32.380110  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:32.381042  390826 pod_ready.go:93] pod "coredns-6f6b679f8f-p65cb" in "kube-system" namespace has status "Ready":"True"
	I0819 18:04:32.381060  390826 pod_ready.go:82] duration metric: took 7.473792ms for pod "coredns-6f6b679f8f-p65cb" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:32.381070  390826 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-086149" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:32.381114  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-086149
	I0819 18:04:32.381122  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:32.381140  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:32.381147  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:32.384359  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:32.385039  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149
	I0819 18:04:32.385054  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:32.385063  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:32.385070  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:32.387506  390826 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:04:32.388151  390826 pod_ready.go:93] pod "etcd-ha-086149" in "kube-system" namespace has status "Ready":"True"
	I0819 18:04:32.388168  390826 pod_ready.go:82] duration metric: took 7.092714ms for pod "etcd-ha-086149" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:32.388177  390826 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-086149-m02" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:32.388218  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-086149-m02
	I0819 18:04:32.388226  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:32.388233  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:32.388238  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:32.390613  390826 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:04:32.391213  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:04:32.391226  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:32.391232  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:32.391238  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:32.393364  390826 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:04:32.393931  390826 pod_ready.go:93] pod "etcd-ha-086149-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 18:04:32.393948  390826 pod_ready.go:82] duration metric: took 5.765365ms for pod "etcd-ha-086149-m02" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:32.393959  390826 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-086149-m03" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:32.546816  390826 request.go:632] Waited for 152.771522ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-086149-m03
	I0819 18:04:32.546893  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-086149-m03
	I0819 18:04:32.546903  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:32.546918  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:32.546928  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:32.550551  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:32.746678  390826 request.go:632] Waited for 195.290084ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:32.746738  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:32.746746  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:32.746764  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:32.746773  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:32.750195  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:32.750639  390826 pod_ready.go:93] pod "etcd-ha-086149-m03" in "kube-system" namespace has status "Ready":"True"
	I0819 18:04:32.750656  390826 pod_ready.go:82] duration metric: took 356.689273ms for pod "etcd-ha-086149-m03" in "kube-system" namespace to be "Ready" ...
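
The "Waited ... due to client-side throttling" messages in this block come from client-go's own token-bucket limiter: the rest.Config dumped earlier has QPS:0 and Burst:0, so the library defaults (about 5 requests per second with a burst of 10) apply, and the back-to-back pod and node GETs queue behind them. A minimal sketch of relaxing those limits when building a clientset, with the kubeconfig path again a placeholder:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	// Raise the client-side rate limits before constructing the clientset.
	cfg.QPS = 50
	cfg.Burst = 100
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("clientset ready: %T\n", cs)
}
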
	I0819 18:04:32.750674  390826 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-086149" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:32.946849  390826 request.go:632] Waited for 196.085092ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-086149
	I0819 18:04:32.946918  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-086149
	I0819 18:04:32.946924  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:32.946931  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:32.946936  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:32.950468  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:33.147488  390826 request.go:632] Waited for 196.367007ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-086149
	I0819 18:04:33.147562  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149
	I0819 18:04:33.147567  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:33.147575  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:33.147581  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:33.150962  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:33.151882  390826 pod_ready.go:93] pod "kube-apiserver-ha-086149" in "kube-system" namespace has status "Ready":"True"
	I0819 18:04:33.151905  390826 pod_ready.go:82] duration metric: took 401.22217ms for pod "kube-apiserver-ha-086149" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:33.151917  390826 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-086149-m02" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:33.346707  390826 request.go:632] Waited for 194.702075ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-086149-m02
	I0819 18:04:33.346796  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-086149-m02
	I0819 18:04:33.346808  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:33.346817  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:33.346825  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:33.350430  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:33.546596  390826 request.go:632] Waited for 195.286829ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:04:33.546683  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:04:33.546692  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:33.546700  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:33.546705  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:33.551049  390826 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 18:04:33.551746  390826 pod_ready.go:93] pod "kube-apiserver-ha-086149-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 18:04:33.551777  390826 pod_ready.go:82] duration metric: took 399.852789ms for pod "kube-apiserver-ha-086149-m02" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:33.551791  390826 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-086149-m03" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:33.746717  390826 request.go:632] Waited for 194.821367ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-086149-m03
	I0819 18:04:33.746777  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-086149-m03
	I0819 18:04:33.746782  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:33.746789  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:33.746796  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:33.750604  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:33.947221  390826 request.go:632] Waited for 195.38286ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:33.947304  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:33.947315  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:33.947329  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:33.947341  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:33.950842  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:33.951897  390826 pod_ready.go:93] pod "kube-apiserver-ha-086149-m03" in "kube-system" namespace has status "Ready":"True"
	I0819 18:04:33.951917  390826 pod_ready.go:82] duration metric: took 400.118494ms for pod "kube-apiserver-ha-086149-m03" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:33.951927  390826 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-086149" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:34.147020  390826 request.go:632] Waited for 194.980048ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-086149
	I0819 18:04:34.147083  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-086149
	I0819 18:04:34.147090  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:34.147098  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:34.147102  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:34.150960  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:34.347360  390826 request.go:632] Waited for 195.328364ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-086149
	I0819 18:04:34.347446  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149
	I0819 18:04:34.347457  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:34.347470  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:34.347480  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:34.351092  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:34.351857  390826 pod_ready.go:93] pod "kube-controller-manager-ha-086149" in "kube-system" namespace has status "Ready":"True"
	I0819 18:04:34.351887  390826 pod_ready.go:82] duration metric: took 399.95211ms for pod "kube-controller-manager-ha-086149" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:34.351903  390826 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-086149-m02" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:34.547332  390826 request.go:632] Waited for 195.247162ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-086149-m02
	I0819 18:04:34.547414  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-086149-m02
	I0819 18:04:34.547426  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:34.547440  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:34.547448  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:34.550597  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:34.746892  390826 request.go:632] Waited for 195.376173ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:04:34.746979  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:04:34.746988  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:34.746997  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:34.747006  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:34.750140  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:34.750824  390826 pod_ready.go:93] pod "kube-controller-manager-ha-086149-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 18:04:34.750843  390826 pod_ready.go:82] duration metric: took 398.929687ms for pod "kube-controller-manager-ha-086149-m02" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:34.750859  390826 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-086149-m03" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:34.947372  390826 request.go:632] Waited for 196.431945ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-086149-m03
	I0819 18:04:34.947437  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-086149-m03
	I0819 18:04:34.947442  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:34.947450  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:34.947455  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:34.951173  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:35.146575  390826 request.go:632] Waited for 194.306794ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:35.146642  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:35.146650  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:35.146660  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:35.146669  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:35.149906  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:35.150538  390826 pod_ready.go:93] pod "kube-controller-manager-ha-086149-m03" in "kube-system" namespace has status "Ready":"True"
	I0819 18:04:35.150557  390826 pod_ready.go:82] duration metric: took 399.692281ms for pod "kube-controller-manager-ha-086149-m03" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:35.150568  390826 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8snb5" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:35.347236  390826 request.go:632] Waited for 196.586465ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8snb5
	I0819 18:04:35.347302  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8snb5
	I0819 18:04:35.347307  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:35.347316  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:35.347319  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:35.350883  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:35.547128  390826 request.go:632] Waited for 195.353155ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:35.547188  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:35.547193  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:35.547201  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:35.547207  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:35.550473  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:35.551110  390826 pod_ready.go:93] pod "kube-proxy-8snb5" in "kube-system" namespace has status "Ready":"True"
	I0819 18:04:35.551129  390826 pod_ready.go:82] duration metric: took 400.555696ms for pod "kube-proxy-8snb5" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:35.551141  390826 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fwkf2" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:35.747312  390826 request.go:632] Waited for 196.091883ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fwkf2
	I0819 18:04:35.747404  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fwkf2
	I0819 18:04:35.747410  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:35.747418  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:35.747427  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:35.751161  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:35.946854  390826 request.go:632] Waited for 194.274206ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-086149
	I0819 18:04:35.946924  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149
	I0819 18:04:35.946930  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:35.946940  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:35.946950  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:35.949959  390826 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:04:35.950784  390826 pod_ready.go:93] pod "kube-proxy-fwkf2" in "kube-system" namespace has status "Ready":"True"
	I0819 18:04:35.950803  390826 pod_ready.go:82] duration metric: took 399.650676ms for pod "kube-proxy-fwkf2" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:35.950814  390826 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vx94r" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:36.146946  390826 request.go:632] Waited for 196.043967ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vx94r
	I0819 18:04:36.147019  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vx94r
	I0819 18:04:36.147025  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:36.147033  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:36.147038  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:36.150726  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:36.346845  390826 request.go:632] Waited for 195.38793ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:04:36.346912  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:04:36.346918  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:36.346926  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:36.346930  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:36.350328  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:36.351045  390826 pod_ready.go:93] pod "kube-proxy-vx94r" in "kube-system" namespace has status "Ready":"True"
	I0819 18:04:36.351071  390826 pod_ready.go:82] duration metric: took 400.249518ms for pod "kube-proxy-vx94r" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:36.351085  390826 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-086149" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:36.547228  390826 request.go:632] Waited for 196.042508ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-086149
	I0819 18:04:36.547298  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-086149
	I0819 18:04:36.547303  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:36.547316  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:36.547320  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:36.551158  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:36.747250  390826 request.go:632] Waited for 195.383213ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-086149
	I0819 18:04:36.747325  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149
	I0819 18:04:36.747333  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:36.747342  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:36.747371  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:36.750310  390826 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:04:36.750994  390826 pod_ready.go:93] pod "kube-scheduler-ha-086149" in "kube-system" namespace has status "Ready":"True"
	I0819 18:04:36.751023  390826 pod_ready.go:82] duration metric: took 399.92967ms for pod "kube-scheduler-ha-086149" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:36.751039  390826 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-086149-m02" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:36.946962  390826 request.go:632] Waited for 195.825478ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-086149-m02
	I0819 18:04:36.947043  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-086149-m02
	I0819 18:04:36.947048  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:36.947056  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:36.947061  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:36.950479  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:37.146460  390826 request.go:632] Waited for 195.287394ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:04:37.146546  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:04:37.146552  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:37.146559  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:37.146566  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:37.150208  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:37.151006  390826 pod_ready.go:93] pod "kube-scheduler-ha-086149-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 18:04:37.151027  390826 pod_ready.go:82] duration metric: took 399.979634ms for pod "kube-scheduler-ha-086149-m02" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:37.151037  390826 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-086149-m03" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:37.347103  390826 request.go:632] Waited for 195.969715ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-086149-m03
	I0819 18:04:37.347198  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-086149-m03
	I0819 18:04:37.347215  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:37.347228  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:37.347237  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:37.350608  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:37.547132  390826 request.go:632] Waited for 195.865595ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:37.547206  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:37.547215  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:37.547232  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:37.547241  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:37.551223  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:37.551989  390826 pod_ready.go:93] pod "kube-scheduler-ha-086149-m03" in "kube-system" namespace has status "Ready":"True"
	I0819 18:04:37.552010  390826 pod_ready.go:82] duration metric: took 400.966575ms for pod "kube-scheduler-ha-086149-m03" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:37.552022  390826 pod_ready.go:39] duration metric: took 5.200017437s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 18:04:37.552038  390826 api_server.go:52] waiting for apiserver process to appear ...
	I0819 18:04:37.552091  390826 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:04:37.573907  390826 api_server.go:72] duration metric: took 24.003963962s to wait for apiserver process to appear ...
	I0819 18:04:37.573952  390826 api_server.go:88] waiting for apiserver healthz status ...
	I0819 18:04:37.573979  390826 api_server.go:253] Checking apiserver healthz at https://192.168.39.249:8443/healthz ...
	I0819 18:04:37.578518  390826 api_server.go:279] https://192.168.39.249:8443/healthz returned 200:
	ok
	I0819 18:04:37.578596  390826 round_trippers.go:463] GET https://192.168.39.249:8443/version
	I0819 18:04:37.578605  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:37.578613  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:37.578619  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:37.579424  390826 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0819 18:04:37.579486  390826 api_server.go:141] control plane version: v1.31.0
	I0819 18:04:37.579499  390826 api_server.go:131] duration metric: took 5.540572ms to wait for apiserver health ...
	I0819 18:04:37.579507  390826 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 18:04:37.746950  390826 request.go:632] Waited for 167.353562ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0819 18:04:37.747044  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0819 18:04:37.747052  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:37.747064  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:37.747070  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:37.752732  390826 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 18:04:37.759988  390826 system_pods.go:59] 24 kube-system pods found
	I0819 18:04:37.760020  390826 system_pods.go:61] "coredns-6f6b679f8f-8fjpd" [4bedb900-107a-4f7e-aae7-391b18da4a26] Running
	I0819 18:04:37.760026  390826 system_pods.go:61] "coredns-6f6b679f8f-p65cb" [7f30449e-d4ea-4d6f-a63a-08551024bd04] Running
	I0819 18:04:37.760030  390826 system_pods.go:61] "etcd-ha-086149" [0dc3ab02-31e8-4110-accd-85d2e18db232] Running
	I0819 18:04:37.760033  390826 system_pods.go:61] "etcd-ha-086149-m02" [06fcadf6-a4b1-40c8-8ce8-bc1df1fad746] Running
	I0819 18:04:37.760036  390826 system_pods.go:61] "etcd-ha-086149-m03" [244fa866-cb01-4b01-b0a8-68081b70e0e7] Running
	I0819 18:04:37.760039  390826 system_pods.go:61] "kindnet-dgj9c" [142f260c-d74e-411f-ac87-f4398f573b94] Running
	I0819 18:04:37.760042  390826 system_pods.go:61] "kindnet-vb66s" [9322737a-5f8a-4d5a-a7d1-ba076bc8f2d8] Running
	I0819 18:04:37.760045  390826 system_pods.go:61] "kindnet-x87ch" [aa623766-8f51-4570-822c-c2efc1ce338c] Running
	I0819 18:04:37.760048  390826 system_pods.go:61] "kube-apiserver-ha-086149" [98466e03-c8b3-4d70-97b0-ba24afe776a9] Running
	I0819 18:04:37.760052  390826 system_pods.go:61] "kube-apiserver-ha-086149-m02" [afbc7c61-72ec-4571-9a5e-3d8afd08ae6b] Running
	I0819 18:04:37.760055  390826 system_pods.go:61] "kube-apiserver-ha-086149-m03" [1732b952-982b-4744-86a2-0b0bcad77b83] Running
	I0819 18:04:37.760058  390826 system_pods.go:61] "kube-controller-manager-ha-086149" [910295fd-3d2e-4390-b9cd-9e1169813375] Running
	I0819 18:04:37.760062  390826 system_pods.go:61] "kube-controller-manager-ha-086149-m02" [dad58fc3-85d8-444c-bfb8-3a74c5016f32] Running
	I0819 18:04:37.760065  390826 system_pods.go:61] "kube-controller-manager-ha-086149-m03" [3b251cc7-f532-47e4-9dd5-44d7bf8a51b6] Running
	I0819 18:04:37.760068  390826 system_pods.go:61] "kube-proxy-8snb5" [a79f5f3e-c2e0-4d5c-a603-623dab860fa5] Running
	I0819 18:04:37.760072  390826 system_pods.go:61] "kube-proxy-fwkf2" [001a3fe7-633c-44f8-9a8c-7401cec7af54] Running
	I0819 18:04:37.760075  390826 system_pods.go:61] "kube-proxy-vx94r" [8960702f-2f02-4e67-9d4f-02860491e5f2] Running
	I0819 18:04:37.760079  390826 system_pods.go:61] "kube-scheduler-ha-086149" [6d113319-d44e-4a5a-8e0a-f0a890e13e43] Running
	I0819 18:04:37.760083  390826 system_pods.go:61] "kube-scheduler-ha-086149-m02" [5d64ff86-a24d-4836-a7d7-ebb968bb39c8] Running
	I0819 18:04:37.760086  390826 system_pods.go:61] "kube-scheduler-ha-086149-m03" [fcd18473-942f-4ced-ae57-46ac80a0f60f] Running
	I0819 18:04:37.760088  390826 system_pods.go:61] "kube-vip-ha-086149" [25176ed4-e5b0-4e5e-9835-736c856d2643] Running
	I0819 18:04:37.760091  390826 system_pods.go:61] "kube-vip-ha-086149-m02" [8c6b400d-f73e-44b5-a31f-3607329360be] Running
	I0819 18:04:37.760094  390826 system_pods.go:61] "kube-vip-ha-086149-m03" [09c25237-cadd-43b1-95ab-212c2d47a20d] Running
	I0819 18:04:37.760097  390826 system_pods.go:61] "storage-provisioner" [c12159a8-5f84-4d19-aa54-7b56a9669f6c] Running
	I0819 18:04:37.760104  390826 system_pods.go:74] duration metric: took 180.589003ms to wait for pod list to return data ...
	I0819 18:04:37.760114  390826 default_sa.go:34] waiting for default service account to be created ...
	I0819 18:04:37.946508  390826 request.go:632] Waited for 186.293544ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/default/serviceaccounts
	I0819 18:04:37.946580  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/default/serviceaccounts
	I0819 18:04:37.946587  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:37.946598  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:37.946607  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:37.950571  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:37.950729  390826 default_sa.go:45] found service account: "default"
	I0819 18:04:37.950746  390826 default_sa.go:55] duration metric: took 190.624862ms for default service account to be created ...
	I0819 18:04:37.950760  390826 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 18:04:38.147151  390826 request.go:632] Waited for 196.282924ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0819 18:04:38.147247  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0819 18:04:38.147259  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:38.147271  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:38.147283  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:38.154184  390826 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0819 18:04:38.163017  390826 system_pods.go:86] 24 kube-system pods found
	I0819 18:04:38.163051  390826 system_pods.go:89] "coredns-6f6b679f8f-8fjpd" [4bedb900-107a-4f7e-aae7-391b18da4a26] Running
	I0819 18:04:38.163057  390826 system_pods.go:89] "coredns-6f6b679f8f-p65cb" [7f30449e-d4ea-4d6f-a63a-08551024bd04] Running
	I0819 18:04:38.163062  390826 system_pods.go:89] "etcd-ha-086149" [0dc3ab02-31e8-4110-accd-85d2e18db232] Running
	I0819 18:04:38.163066  390826 system_pods.go:89] "etcd-ha-086149-m02" [06fcadf6-a4b1-40c8-8ce8-bc1df1fad746] Running
	I0819 18:04:38.163071  390826 system_pods.go:89] "etcd-ha-086149-m03" [244fa866-cb01-4b01-b0a8-68081b70e0e7] Running
	I0819 18:04:38.163075  390826 system_pods.go:89] "kindnet-dgj9c" [142f260c-d74e-411f-ac87-f4398f573b94] Running
	I0819 18:04:38.163079  390826 system_pods.go:89] "kindnet-vb66s" [9322737a-5f8a-4d5a-a7d1-ba076bc8f2d8] Running
	I0819 18:04:38.163083  390826 system_pods.go:89] "kindnet-x87ch" [aa623766-8f51-4570-822c-c2efc1ce338c] Running
	I0819 18:04:38.163089  390826 system_pods.go:89] "kube-apiserver-ha-086149" [98466e03-c8b3-4d70-97b0-ba24afe776a9] Running
	I0819 18:04:38.163094  390826 system_pods.go:89] "kube-apiserver-ha-086149-m02" [afbc7c61-72ec-4571-9a5e-3d8afd08ae6b] Running
	I0819 18:04:38.163100  390826 system_pods.go:89] "kube-apiserver-ha-086149-m03" [1732b952-982b-4744-86a2-0b0bcad77b83] Running
	I0819 18:04:38.163105  390826 system_pods.go:89] "kube-controller-manager-ha-086149" [910295fd-3d2e-4390-b9cd-9e1169813375] Running
	I0819 18:04:38.163110  390826 system_pods.go:89] "kube-controller-manager-ha-086149-m02" [dad58fc3-85d8-444c-bfb8-3a74c5016f32] Running
	I0819 18:04:38.163116  390826 system_pods.go:89] "kube-controller-manager-ha-086149-m03" [3b251cc7-f532-47e4-9dd5-44d7bf8a51b6] Running
	I0819 18:04:38.163126  390826 system_pods.go:89] "kube-proxy-8snb5" [a79f5f3e-c2e0-4d5c-a603-623dab860fa5] Running
	I0819 18:04:38.163130  390826 system_pods.go:89] "kube-proxy-fwkf2" [001a3fe7-633c-44f8-9a8c-7401cec7af54] Running
	I0819 18:04:38.163134  390826 system_pods.go:89] "kube-proxy-vx94r" [8960702f-2f02-4e67-9d4f-02860491e5f2] Running
	I0819 18:04:38.163137  390826 system_pods.go:89] "kube-scheduler-ha-086149" [6d113319-d44e-4a5a-8e0a-f0a890e13e43] Running
	I0819 18:04:38.163141  390826 system_pods.go:89] "kube-scheduler-ha-086149-m02" [5d64ff86-a24d-4836-a7d7-ebb968bb39c8] Running
	I0819 18:04:38.163144  390826 system_pods.go:89] "kube-scheduler-ha-086149-m03" [fcd18473-942f-4ced-ae57-46ac80a0f60f] Running
	I0819 18:04:38.163151  390826 system_pods.go:89] "kube-vip-ha-086149" [25176ed4-e5b0-4e5e-9835-736c856d2643] Running
	I0819 18:04:38.163156  390826 system_pods.go:89] "kube-vip-ha-086149-m02" [8c6b400d-f73e-44b5-a31f-3607329360be] Running
	I0819 18:04:38.163161  390826 system_pods.go:89] "kube-vip-ha-086149-m03" [09c25237-cadd-43b1-95ab-212c2d47a20d] Running
	I0819 18:04:38.163166  390826 system_pods.go:89] "storage-provisioner" [c12159a8-5f84-4d19-aa54-7b56a9669f6c] Running
	I0819 18:04:38.163176  390826 system_pods.go:126] duration metric: took 212.405865ms to wait for k8s-apps to be running ...
	I0819 18:04:38.163189  390826 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 18:04:38.163249  390826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:04:38.179201  390826 system_svc.go:56] duration metric: took 15.999867ms WaitForService to wait for kubelet
	I0819 18:04:38.179238  390826 kubeadm.go:582] duration metric: took 24.609302326s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 18:04:38.179260  390826 node_conditions.go:102] verifying NodePressure condition ...
	I0819 18:04:38.347453  390826 request.go:632] Waited for 168.074628ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes
	I0819 18:04:38.347523  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes
	I0819 18:04:38.347528  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:38.347536  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:38.347542  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:38.351853  390826 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 18:04:38.353202  390826 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 18:04:38.353234  390826 node_conditions.go:123] node cpu capacity is 2
	I0819 18:04:38.353250  390826 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 18:04:38.353255  390826 node_conditions.go:123] node cpu capacity is 2
	I0819 18:04:38.353261  390826 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 18:04:38.353265  390826 node_conditions.go:123] node cpu capacity is 2
	I0819 18:04:38.353271  390826 node_conditions.go:105] duration metric: took 174.004921ms to run NodePressure ...
	I0819 18:04:38.353284  390826 start.go:241] waiting for startup goroutines ...
	I0819 18:04:38.353313  390826 start.go:255] writing updated cluster config ...
	I0819 18:04:38.353807  390826 ssh_runner.go:195] Run: rm -f paused
	I0819 18:04:38.407159  390826 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 18:04:38.409063  390826 out.go:177] * Done! kubectl is now configured to use "ha-086149" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 19 18:08:17 ha-086149 crio[686]: time="2024-08-19 18:08:17.522992279Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090897522972055,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c2da7bc0-fe68-4db5-8873-1fa75b2c0c09 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:08:17 ha-086149 crio[686]: time="2024-08-19 18:08:17.523550897Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=efa669df-a932-4184-ac54-83fe1448bc22 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:08:17 ha-086149 crio[686]: time="2024-08-19 18:08:17.523599866Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=efa669df-a932-4184-ac54-83fe1448bc22 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:08:17 ha-086149 crio[686]: time="2024-08-19 18:08:17.523823522Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ef0b28473496e4ab21e3f86bc64eb662e5c22e59e4a56f80f7bdad009460c73d,PodSandboxId:0f784aeccda9e0bff51a30b97a310813be1e271fdaae54f30006645ed5ae31b1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724090682352148658,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fd2dw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f5e2f831-487f-4edb-b6c1-b391906a6d5b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4208b72f7684106eeabb79597e9a16912d86fddf552d810668e52ee86e4cacf,PodSandboxId:5b83e59b0dd3110115fa51715b6d8f6d29e006636ab031766095bcb6200ff245,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724090536333628873,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-p65cb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f30449e-d4ea-4d6f-a63a-08551024bd04,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86aec3b9357709107938f07e57e09bef332ea9baea288a18bb10389d5108084b,PodSandboxId:86507aaa25957ebc7ff023a8f042b236a729503785cd3163a2a44e79daf28a80,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724090536330177075,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8fjpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
4bedb900-107a-4f7e-aae7-391b18da4a26,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de3b095c19e3f3ff1bf0fb76700cc09513b591cda7c219c31dee7842602944b4,PodSandboxId:537bb09282b606b44a00c1c617ce2ce8f82082247274da7d8632728cdecd594d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1724090536201546220,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c12159a8-5f84-4d19-aa54-7b56a9669f6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66fd9c9b32e5e0294c89ebc2ee3c443fda85c40c3ad5b05d42357b4968e8d305,PodSandboxId:3c6e833618ab7965e295c1f82164c28a64e619a82a0a8a90542c16f004e32954,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724090524117954693,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vb66s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9322737a-5f8a-4d5a-a7d1-ba076bc8f2d8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb8cccc1568bbb207d2c7c285f3897a7a425cba60f4dfcf3e8daa8082fc38ef0,PodSandboxId:dc27fd8c8c4a6cec062f5420b6ed3489f5b075fb1eb4e02074e5505c76d238e5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172409052
0283716084,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwkf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 001a3fe7-633c-44f8-9a8c-7401cec7af54,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cbf110391a2708b365a6d117cd1facf1a5820add049c9338b5eaa12f02254e4,PodSandboxId:14b36b352300967c929247cec1ddcb31ac17615e8281918ab214b49a770c21a1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172409051219
3183830,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8cce7fff82cf979e3ad7d68f6f416e8,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5e746178ed6a3645979a5bd617a6d9f408bb3e6af232f31409c7e79a0c4f6b2,PodSandboxId:9b826611f7fb43dc5f6fb5c26f55533ebe177f1d584f77bd7a2a32978c1478e5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724090509187324635,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab6b0fe91f166a5c05b58933ead885f6,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426a12b48132d73e1b93e6a7fb5b3420868e384eb280274c6ee81ae6f6bcea12,PodSandboxId:4cd25796bc67e8c9b4a666188feb3addfa806bf372a40c47a0ed8a3e3576c9a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724090509150960473,Labels:map[string]string{io.k
ubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf0b1666b512c678d4309e6a2bd2773,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f729929f59edc9bd3c0ec7e99f4b984f94d6b6ec06edf83cf6dc3efba7a1fe5,PodSandboxId:d0637e1ac222cb0d4d6abc71c2af0485d7935e0b55308bdb6a1af649031fef39,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724090509066737802,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,
io.kubernetes.pod.name: kube-apiserver-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9269a2cf31966e0bbf30b6554fa311ee,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0e66231bf791048a9932068b5f28d8479613545885bea8e42cf9c79913ffccd,PodSandboxId:1f46f8e2ba79c3a9b9a7f9729c154fc9c495e280d0a9fac6dc4fdf837a2e0b73,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724090509024741947,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 465e756b61a05a6f1c4dfeba2adbdeeb,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=efa669df-a932-4184-ac54-83fe1448bc22 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:08:17 ha-086149 crio[686]: time="2024-08-19 18:08:17.572049578Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6db70b12-f149-4098-a47f-8ab6d1e9a42a name=/runtime.v1.RuntimeService/Version
	Aug 19 18:08:17 ha-086149 crio[686]: time="2024-08-19 18:08:17.572192816Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6db70b12-f149-4098-a47f-8ab6d1e9a42a name=/runtime.v1.RuntimeService/Version
	Aug 19 18:08:17 ha-086149 crio[686]: time="2024-08-19 18:08:17.573493153Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=45242bcc-83fa-48ac-9b35-8c7f7dd27e8c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:08:17 ha-086149 crio[686]: time="2024-08-19 18:08:17.573933389Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090897573911335,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=45242bcc-83fa-48ac-9b35-8c7f7dd27e8c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:08:17 ha-086149 crio[686]: time="2024-08-19 18:08:17.574891685Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cef18c65-7cdf-436f-baa1-536789843cc4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:08:17 ha-086149 crio[686]: time="2024-08-19 18:08:17.574946034Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cef18c65-7cdf-436f-baa1-536789843cc4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:08:17 ha-086149 crio[686]: time="2024-08-19 18:08:17.575235402Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ef0b28473496e4ab21e3f86bc64eb662e5c22e59e4a56f80f7bdad009460c73d,PodSandboxId:0f784aeccda9e0bff51a30b97a310813be1e271fdaae54f30006645ed5ae31b1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724090682352148658,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fd2dw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f5e2f831-487f-4edb-b6c1-b391906a6d5b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4208b72f7684106eeabb79597e9a16912d86fddf552d810668e52ee86e4cacf,PodSandboxId:5b83e59b0dd3110115fa51715b6d8f6d29e006636ab031766095bcb6200ff245,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724090536333628873,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-p65cb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f30449e-d4ea-4d6f-a63a-08551024bd04,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86aec3b9357709107938f07e57e09bef332ea9baea288a18bb10389d5108084b,PodSandboxId:86507aaa25957ebc7ff023a8f042b236a729503785cd3163a2a44e79daf28a80,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724090536330177075,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8fjpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
4bedb900-107a-4f7e-aae7-391b18da4a26,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de3b095c19e3f3ff1bf0fb76700cc09513b591cda7c219c31dee7842602944b4,PodSandboxId:537bb09282b606b44a00c1c617ce2ce8f82082247274da7d8632728cdecd594d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1724090536201546220,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c12159a8-5f84-4d19-aa54-7b56a9669f6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66fd9c9b32e5e0294c89ebc2ee3c443fda85c40c3ad5b05d42357b4968e8d305,PodSandboxId:3c6e833618ab7965e295c1f82164c28a64e619a82a0a8a90542c16f004e32954,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724090524117954693,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vb66s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9322737a-5f8a-4d5a-a7d1-ba076bc8f2d8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb8cccc1568bbb207d2c7c285f3897a7a425cba60f4dfcf3e8daa8082fc38ef0,PodSandboxId:dc27fd8c8c4a6cec062f5420b6ed3489f5b075fb1eb4e02074e5505c76d238e5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172409052
0283716084,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwkf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 001a3fe7-633c-44f8-9a8c-7401cec7af54,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cbf110391a2708b365a6d117cd1facf1a5820add049c9338b5eaa12f02254e4,PodSandboxId:14b36b352300967c929247cec1ddcb31ac17615e8281918ab214b49a770c21a1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172409051219
3183830,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8cce7fff82cf979e3ad7d68f6f416e8,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5e746178ed6a3645979a5bd617a6d9f408bb3e6af232f31409c7e79a0c4f6b2,PodSandboxId:9b826611f7fb43dc5f6fb5c26f55533ebe177f1d584f77bd7a2a32978c1478e5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724090509187324635,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab6b0fe91f166a5c05b58933ead885f6,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426a12b48132d73e1b93e6a7fb5b3420868e384eb280274c6ee81ae6f6bcea12,PodSandboxId:4cd25796bc67e8c9b4a666188feb3addfa806bf372a40c47a0ed8a3e3576c9a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724090509150960473,Labels:map[string]string{io.k
ubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf0b1666b512c678d4309e6a2bd2773,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f729929f59edc9bd3c0ec7e99f4b984f94d6b6ec06edf83cf6dc3efba7a1fe5,PodSandboxId:d0637e1ac222cb0d4d6abc71c2af0485d7935e0b55308bdb6a1af649031fef39,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724090509066737802,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,
io.kubernetes.pod.name: kube-apiserver-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9269a2cf31966e0bbf30b6554fa311ee,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0e66231bf791048a9932068b5f28d8479613545885bea8e42cf9c79913ffccd,PodSandboxId:1f46f8e2ba79c3a9b9a7f9729c154fc9c495e280d0a9fac6dc4fdf837a2e0b73,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724090509024741947,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 465e756b61a05a6f1c4dfeba2adbdeeb,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cef18c65-7cdf-436f-baa1-536789843cc4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:08:17 ha-086149 crio[686]: time="2024-08-19 18:08:17.621808806Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b28660e1-18cb-4eea-a9ae-417776c577f2 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:08:17 ha-086149 crio[686]: time="2024-08-19 18:08:17.621900646Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b28660e1-18cb-4eea-a9ae-417776c577f2 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:08:17 ha-086149 crio[686]: time="2024-08-19 18:08:17.623059263Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b254ebe9-8e8d-45a4-bebc-e4ff3074ad1d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:08:17 ha-086149 crio[686]: time="2024-08-19 18:08:17.623560148Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090897623528831,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b254ebe9-8e8d-45a4-bebc-e4ff3074ad1d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:08:17 ha-086149 crio[686]: time="2024-08-19 18:08:17.624186378Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=642dc2bb-7564-4a79-8a77-4c9cb4d8e89a name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:08:17 ha-086149 crio[686]: time="2024-08-19 18:08:17.624259899Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=642dc2bb-7564-4a79-8a77-4c9cb4d8e89a name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:08:17 ha-086149 crio[686]: time="2024-08-19 18:08:17.624572549Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ef0b28473496e4ab21e3f86bc64eb662e5c22e59e4a56f80f7bdad009460c73d,PodSandboxId:0f784aeccda9e0bff51a30b97a310813be1e271fdaae54f30006645ed5ae31b1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724090682352148658,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fd2dw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f5e2f831-487f-4edb-b6c1-b391906a6d5b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4208b72f7684106eeabb79597e9a16912d86fddf552d810668e52ee86e4cacf,PodSandboxId:5b83e59b0dd3110115fa51715b6d8f6d29e006636ab031766095bcb6200ff245,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724090536333628873,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-p65cb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f30449e-d4ea-4d6f-a63a-08551024bd04,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86aec3b9357709107938f07e57e09bef332ea9baea288a18bb10389d5108084b,PodSandboxId:86507aaa25957ebc7ff023a8f042b236a729503785cd3163a2a44e79daf28a80,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724090536330177075,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8fjpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
4bedb900-107a-4f7e-aae7-391b18da4a26,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de3b095c19e3f3ff1bf0fb76700cc09513b591cda7c219c31dee7842602944b4,PodSandboxId:537bb09282b606b44a00c1c617ce2ce8f82082247274da7d8632728cdecd594d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1724090536201546220,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c12159a8-5f84-4d19-aa54-7b56a9669f6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66fd9c9b32e5e0294c89ebc2ee3c443fda85c40c3ad5b05d42357b4968e8d305,PodSandboxId:3c6e833618ab7965e295c1f82164c28a64e619a82a0a8a90542c16f004e32954,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724090524117954693,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vb66s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9322737a-5f8a-4d5a-a7d1-ba076bc8f2d8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb8cccc1568bbb207d2c7c285f3897a7a425cba60f4dfcf3e8daa8082fc38ef0,PodSandboxId:dc27fd8c8c4a6cec062f5420b6ed3489f5b075fb1eb4e02074e5505c76d238e5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172409052
0283716084,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwkf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 001a3fe7-633c-44f8-9a8c-7401cec7af54,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cbf110391a2708b365a6d117cd1facf1a5820add049c9338b5eaa12f02254e4,PodSandboxId:14b36b352300967c929247cec1ddcb31ac17615e8281918ab214b49a770c21a1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172409051219
3183830,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8cce7fff82cf979e3ad7d68f6f416e8,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5e746178ed6a3645979a5bd617a6d9f408bb3e6af232f31409c7e79a0c4f6b2,PodSandboxId:9b826611f7fb43dc5f6fb5c26f55533ebe177f1d584f77bd7a2a32978c1478e5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724090509187324635,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab6b0fe91f166a5c05b58933ead885f6,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426a12b48132d73e1b93e6a7fb5b3420868e384eb280274c6ee81ae6f6bcea12,PodSandboxId:4cd25796bc67e8c9b4a666188feb3addfa806bf372a40c47a0ed8a3e3576c9a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724090509150960473,Labels:map[string]string{io.k
ubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf0b1666b512c678d4309e6a2bd2773,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f729929f59edc9bd3c0ec7e99f4b984f94d6b6ec06edf83cf6dc3efba7a1fe5,PodSandboxId:d0637e1ac222cb0d4d6abc71c2af0485d7935e0b55308bdb6a1af649031fef39,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724090509066737802,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,
io.kubernetes.pod.name: kube-apiserver-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9269a2cf31966e0bbf30b6554fa311ee,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0e66231bf791048a9932068b5f28d8479613545885bea8e42cf9c79913ffccd,PodSandboxId:1f46f8e2ba79c3a9b9a7f9729c154fc9c495e280d0a9fac6dc4fdf837a2e0b73,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724090509024741947,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 465e756b61a05a6f1c4dfeba2adbdeeb,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=642dc2bb-7564-4a79-8a77-4c9cb4d8e89a name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:08:17 ha-086149 crio[686]: time="2024-08-19 18:08:17.672396945Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c3f89f28-ce51-4dc8-9551-05e135d0babc name=/runtime.v1.RuntimeService/Version
	Aug 19 18:08:17 ha-086149 crio[686]: time="2024-08-19 18:08:17.672481126Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c3f89f28-ce51-4dc8-9551-05e135d0babc name=/runtime.v1.RuntimeService/Version
	Aug 19 18:08:17 ha-086149 crio[686]: time="2024-08-19 18:08:17.674305601Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ce2f3418-af15-4801-b0fd-b59ee0a446bb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:08:17 ha-086149 crio[686]: time="2024-08-19 18:08:17.674992849Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090897674964023,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ce2f3418-af15-4801-b0fd-b59ee0a446bb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:08:17 ha-086149 crio[686]: time="2024-08-19 18:08:17.675655537Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fd48751d-47d8-4d33-baf0-2ab1eefff941 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:08:17 ha-086149 crio[686]: time="2024-08-19 18:08:17.675707726Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fd48751d-47d8-4d33-baf0-2ab1eefff941 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:08:17 ha-086149 crio[686]: time="2024-08-19 18:08:17.675936994Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ef0b28473496e4ab21e3f86bc64eb662e5c22e59e4a56f80f7bdad009460c73d,PodSandboxId:0f784aeccda9e0bff51a30b97a310813be1e271fdaae54f30006645ed5ae31b1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724090682352148658,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fd2dw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f5e2f831-487f-4edb-b6c1-b391906a6d5b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4208b72f7684106eeabb79597e9a16912d86fddf552d810668e52ee86e4cacf,PodSandboxId:5b83e59b0dd3110115fa51715b6d8f6d29e006636ab031766095bcb6200ff245,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724090536333628873,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-p65cb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f30449e-d4ea-4d6f-a63a-08551024bd04,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86aec3b9357709107938f07e57e09bef332ea9baea288a18bb10389d5108084b,PodSandboxId:86507aaa25957ebc7ff023a8f042b236a729503785cd3163a2a44e79daf28a80,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724090536330177075,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8fjpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
4bedb900-107a-4f7e-aae7-391b18da4a26,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de3b095c19e3f3ff1bf0fb76700cc09513b591cda7c219c31dee7842602944b4,PodSandboxId:537bb09282b606b44a00c1c617ce2ce8f82082247274da7d8632728cdecd594d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1724090536201546220,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c12159a8-5f84-4d19-aa54-7b56a9669f6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66fd9c9b32e5e0294c89ebc2ee3c443fda85c40c3ad5b05d42357b4968e8d305,PodSandboxId:3c6e833618ab7965e295c1f82164c28a64e619a82a0a8a90542c16f004e32954,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724090524117954693,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vb66s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9322737a-5f8a-4d5a-a7d1-ba076bc8f2d8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb8cccc1568bbb207d2c7c285f3897a7a425cba60f4dfcf3e8daa8082fc38ef0,PodSandboxId:dc27fd8c8c4a6cec062f5420b6ed3489f5b075fb1eb4e02074e5505c76d238e5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172409052
0283716084,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwkf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 001a3fe7-633c-44f8-9a8c-7401cec7af54,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cbf110391a2708b365a6d117cd1facf1a5820add049c9338b5eaa12f02254e4,PodSandboxId:14b36b352300967c929247cec1ddcb31ac17615e8281918ab214b49a770c21a1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172409051219
3183830,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8cce7fff82cf979e3ad7d68f6f416e8,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5e746178ed6a3645979a5bd617a6d9f408bb3e6af232f31409c7e79a0c4f6b2,PodSandboxId:9b826611f7fb43dc5f6fb5c26f55533ebe177f1d584f77bd7a2a32978c1478e5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724090509187324635,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab6b0fe91f166a5c05b58933ead885f6,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426a12b48132d73e1b93e6a7fb5b3420868e384eb280274c6ee81ae6f6bcea12,PodSandboxId:4cd25796bc67e8c9b4a666188feb3addfa806bf372a40c47a0ed8a3e3576c9a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724090509150960473,Labels:map[string]string{io.k
ubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf0b1666b512c678d4309e6a2bd2773,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f729929f59edc9bd3c0ec7e99f4b984f94d6b6ec06edf83cf6dc3efba7a1fe5,PodSandboxId:d0637e1ac222cb0d4d6abc71c2af0485d7935e0b55308bdb6a1af649031fef39,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724090509066737802,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,
io.kubernetes.pod.name: kube-apiserver-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9269a2cf31966e0bbf30b6554fa311ee,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0e66231bf791048a9932068b5f28d8479613545885bea8e42cf9c79913ffccd,PodSandboxId:1f46f8e2ba79c3a9b9a7f9729c154fc9c495e280d0a9fac6dc4fdf837a2e0b73,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724090509024741947,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 465e756b61a05a6f1c4dfeba2adbdeeb,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fd48751d-47d8-4d33-baf0-2ab1eefff941 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ef0b28473496e       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   0f784aeccda9e       busybox-7dff88458-fd2dw
	d4208b72f7684       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   5b83e59b0dd31       coredns-6f6b679f8f-p65cb
	86aec3b935770       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   86507aaa25957       coredns-6f6b679f8f-8fjpd
	de3b095c19e3f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   537bb09282b60       storage-provisioner
	66fd9c9b32e5e       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    6 minutes ago       Running             kindnet-cni               0                   3c6e833618ab7       kindnet-vb66s
	eb8cccc1568bb       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      6 minutes ago       Running             kube-proxy                0                   dc27fd8c8c4a6       kube-proxy-fwkf2
	0cbf110391a27       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   14b36b3523009       kube-vip-ha-086149
	f5e746178ed6a       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      6 minutes ago       Running             kube-controller-manager   0                   9b826611f7fb4       kube-controller-manager-ha-086149
	426a12b48132d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   4cd25796bc67e       etcd-ha-086149
	2f729929f59ed       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      6 minutes ago       Running             kube-apiserver            0                   d0637e1ac222c       kube-apiserver-ha-086149
	d0e66231bf791       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      6 minutes ago       Running             kube-scheduler            0                   1f46f8e2ba79c       kube-scheduler-ha-086149
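	
	Note: the table above is the CRI-level container listing captured by the log collector. As a rough sketch only, assuming shell access to the primary node (for example via out/minikube-linux-amd64 -p ha-086149 ssh), an equivalent listing can normally be reproduced with crictl against the CRI-O socket recorded in the node annotations below; the command itself is illustrative and not taken from this run:
	
	  # list every container CRI-O knows about, via the CRI API
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a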
	
	
	==> coredns [86aec3b9357709107938f07e57e09bef332ea9baea288a18bb10389d5108084b] <==
	[INFO] 10.244.2.2:36864 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000187071s
	[INFO] 10.244.2.2:48106 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000150405s
	[INFO] 10.244.2.2:53329 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000136079s
	[INFO] 10.244.0.4:48191 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014988s
	[INFO] 10.244.0.4:47708 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000096718s
	[INFO] 10.244.0.4:42128 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000149115s
	[INFO] 10.244.0.4:49211 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000058729s
	[INFO] 10.244.0.4:41169 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000147844s
	[INFO] 10.244.1.2:55021 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105902s
	[INFO] 10.244.1.2:39523 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000197158s
	[INFO] 10.244.1.2:39402 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000068589s
	[INFO] 10.244.1.2:46940 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086232s
	[INFO] 10.244.2.2:59049 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177439s
	[INFO] 10.244.2.2:48370 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000103075s
	[INFO] 10.244.2.2:36161 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110997s
	[INFO] 10.244.2.2:44839 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079394s
	[INFO] 10.244.1.2:53636 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153191s
	[INFO] 10.244.1.2:46986 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00014037s
	[INFO] 10.244.1.2:39517 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000205565s
	[INFO] 10.244.2.2:34630 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000217644s
	[INFO] 10.244.2.2:48208 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000175515s
	[INFO] 10.244.2.2:42420 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000305788s
	[INFO] 10.244.0.4:49746 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000082325s
	[INFO] 10.244.0.4:48461 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000222115s
	[INFO] 10.244.1.2:58589 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000263104s
	
	
	==> coredns [d4208b72f7684106eeabb79597e9a16912d86fddf552d810668e52ee86e4cacf] <==
	[INFO] 10.244.1.2:46929 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000103504s
	[INFO] 10.244.1.2:59220 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000514964s
	[INFO] 10.244.1.2:46564 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001814543s
	[INFO] 10.244.2.2:59912 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139193s
	[INFO] 10.244.2.2:51495 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004077714s
	[INFO] 10.244.2.2:60503 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002804151s
	[INFO] 10.244.2.2:49027 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000124508s
	[INFO] 10.244.0.4:59229 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001769172s
	[INFO] 10.244.0.4:34487 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001315875s
	[INFO] 10.244.0.4:34657 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124575s
	[INFO] 10.244.1.2:49809 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001830693s
	[INFO] 10.244.1.2:60513 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001456039s
	[INFO] 10.244.1.2:58099 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000201903s
	[INFO] 10.244.1.2:36863 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108279s
	[INFO] 10.244.0.4:48767 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119232s
	[INFO] 10.244.0.4:35383 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00018722s
	[INFO] 10.244.0.4:58993 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000063721s
	[INFO] 10.244.0.4:55887 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000059646s
	[INFO] 10.244.1.2:45536 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124964s
	[INFO] 10.244.2.2:45976 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000160498s
	[INFO] 10.244.0.4:38315 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146686s
	[INFO] 10.244.0.4:36553 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000130807s
	[INFO] 10.244.1.2:46657 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00022076s
	[INFO] 10.244.1.2:44650 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000123411s
	[INFO] 10.244.1.2:46585 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000089999s
	
	
	==> describe nodes <==
	Name:               ha-086149
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-086149
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411
	                    minikube.k8s.io/name=ha-086149
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T18_01_56_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 18:01:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-086149
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 18:08:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 18:04:58 +0000   Mon, 19 Aug 2024 18:01:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 18:04:58 +0000   Mon, 19 Aug 2024 18:01:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 18:04:58 +0000   Mon, 19 Aug 2024 18:01:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 18:04:58 +0000   Mon, 19 Aug 2024 18:02:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.249
	  Hostname:    ha-086149
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f2adf13588c04842be48ba7ffa571365
	  System UUID:                f2adf135-88c0-4842-be48-ba7ffa571365
	  Boot ID:                    affd916c-f074-4dc0-bd43-4c71cd2f0b12
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-fd2dw              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 coredns-6f6b679f8f-8fjpd             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m19s
	  kube-system                 coredns-6f6b679f8f-p65cb             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m19s
	  kube-system                 etcd-ha-086149                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m23s
	  kube-system                 kindnet-vb66s                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m19s
	  kube-system                 kube-apiserver-ha-086149             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 kube-controller-manager-ha-086149    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m23s
	  kube-system                 kube-proxy-fwkf2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-scheduler-ha-086149             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m23s
	  kube-system                 kube-vip-ha-086149                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m17s  kube-proxy       
	  Normal  Starting                 6m23s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m23s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m23s  kubelet          Node ha-086149 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m23s  kubelet          Node ha-086149 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m23s  kubelet          Node ha-086149 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m19s  node-controller  Node ha-086149 event: Registered Node ha-086149 in Controller
	  Normal  NodeReady                6m3s   kubelet          Node ha-086149 status is now: NodeReady
	  Normal  RegisteredNode           5m16s  node-controller  Node ha-086149 event: Registered Node ha-086149 in Controller
	  Normal  RegisteredNode           4m     node-controller  Node ha-086149 event: Registered Node ha-086149 in Controller
	
	
	Name:               ha-086149-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-086149-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411
	                    minikube.k8s.io/name=ha-086149
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T18_02_56_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 18:02:53 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-086149-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 18:05:47 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 19 Aug 2024 18:04:56 +0000   Mon, 19 Aug 2024 18:06:28 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 19 Aug 2024 18:04:56 +0000   Mon, 19 Aug 2024 18:06:28 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 19 Aug 2024 18:04:56 +0000   Mon, 19 Aug 2024 18:06:28 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 19 Aug 2024 18:04:56 +0000   Mon, 19 Aug 2024 18:06:28 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.167
	  Hostname:    ha-086149-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 db74a62099694214b3e6abfad40c4b33
	  System UUID:                db74a620-9969-4214-b3e6-abfad40c4b33
	  Boot ID:                    717bec9d-0b44-49c0-8d52-7d87d4c1f6a1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-vgcdh                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 etcd-ha-086149-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m23s
	  kube-system                 kindnet-dgj9c                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m25s
	  kube-system                 kube-apiserver-ha-086149-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 kube-controller-manager-ha-086149-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m22s
	  kube-system                 kube-proxy-vx94r                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-scheduler-ha-086149-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-vip-ha-086149-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m19s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m25s (x8 over 5m25s)  kubelet          Node ha-086149-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m25s (x8 over 5m25s)  kubelet          Node ha-086149-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m25s (x7 over 5m25s)  kubelet          Node ha-086149-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m24s                  node-controller  Node ha-086149-m02 event: Registered Node ha-086149-m02 in Controller
	  Normal  RegisteredNode           5m16s                  node-controller  Node ha-086149-m02 event: Registered Node ha-086149-m02 in Controller
	  Normal  RegisteredNode           4m                     node-controller  Node ha-086149-m02 event: Registered Node ha-086149-m02 in Controller
	  Normal  NodeNotReady             110s                   node-controller  Node ha-086149-m02 status is now: NodeNotReady
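	
	Note: the ha-086149-m02 conditions above show the kubelet stopped posting status and the node was marked NodeNotReady, consistent with the secondary-node stop exercised by the failing MultiControlPlane tests. As a hedged triage sketch (the kubectl context name is assumed to match the profile name, as minikube normally sets it), node readiness can be summarized with:
	
	  # compact per-node status, including the Ready condition and node IPs
	  kubectl --context ha-086149 get nodes -o wide
	  # full condition history for the affected node
	  kubectl --context ha-086149 describe node ha-086149-m02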
	
	
	Name:               ha-086149-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-086149-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411
	                    minikube.k8s.io/name=ha-086149
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T18_04_13_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 18:04:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-086149-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 18:08:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 18:05:11 +0000   Mon, 19 Aug 2024 18:04:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 18:05:11 +0000   Mon, 19 Aug 2024 18:04:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 18:05:11 +0000   Mon, 19 Aug 2024 18:04:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 18:05:11 +0000   Mon, 19 Aug 2024 18:04:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.121
	  Hostname:    ha-086149-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8eb7138e4a844547bcac8ac690757488
	  System UUID:                8eb7138e-4a84-4547-bcac-8ac690757488
	  Boot ID:                    3282c69f-1237-46cf-afad-b3a07c2459cf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7t5wq                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 etcd-ha-086149-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m7s
	  kube-system                 kindnet-x87ch                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m8s
	  kube-system                 kube-apiserver-ha-086149-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-controller-manager-ha-086149-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-proxy-8snb5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-scheduler-ha-086149-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-vip-ha-086149-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m3s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  4m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m8s (x8 over 4m9s)  kubelet          Node ha-086149-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m8s (x8 over 4m9s)  kubelet          Node ha-086149-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m8s (x7 over 4m9s)  kubelet          Node ha-086149-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m6s                 node-controller  Node ha-086149-m03 event: Registered Node ha-086149-m03 in Controller
	  Normal  RegisteredNode           4m4s                 node-controller  Node ha-086149-m03 event: Registered Node ha-086149-m03 in Controller
	  Normal  RegisteredNode           4m                   node-controller  Node ha-086149-m03 event: Registered Node ha-086149-m03 in Controller
	
	
	Name:               ha-086149-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-086149-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411
	                    minikube.k8s.io/name=ha-086149
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T18_05_16_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 18:05:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-086149-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 18:08:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 18:05:46 +0000   Mon, 19 Aug 2024 18:05:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 18:05:46 +0000   Mon, 19 Aug 2024 18:05:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 18:05:46 +0000   Mon, 19 Aug 2024 18:05:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 18:05:46 +0000   Mon, 19 Aug 2024 18:05:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.173
	  Hostname:    ha-086149-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e1e9d0d713474980a7c895cb88752846
	  System UUID:                e1e9d0d7-1347-4980-a7c8-95cb88752846
	  Boot ID:                    5dee1daa-7e00-4357-ab41-d48951f73e60
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-gvr65       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m3s
	  kube-system                 kube-proxy-9t8vw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m3s (x2 over 3m3s)  kubelet          Node ha-086149-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m3s (x2 over 3m3s)  kubelet          Node ha-086149-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m3s (x2 over 3m3s)  kubelet          Node ha-086149-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m1s                 node-controller  Node ha-086149-m04 event: Registered Node ha-086149-m04 in Controller
	  Normal  RegisteredNode           3m                   node-controller  Node ha-086149-m04 event: Registered Node ha-086149-m04 in Controller
	  Normal  RegisteredNode           2m59s                node-controller  Node ha-086149-m04 event: Registered Node ha-086149-m04 in Controller
	  Normal  NodeReady                2m42s                kubelet          Node ha-086149-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug19 18:01] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050961] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040140] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.785825] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.527631] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.633566] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.178691] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.057166] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065842] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.172283] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.148890] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.254962] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +4.015563] systemd-fstab-generator[771]: Ignoring "noauto" option for root device
	[  +4.054508] systemd-fstab-generator[906]: Ignoring "noauto" option for root device
	[  +0.063854] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.951467] systemd-fstab-generator[1326]: Ignoring "noauto" option for root device
	[  +0.096986] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.046961] kauditd_printk_skb: 21 callbacks suppressed
	[Aug19 18:02] kauditd_printk_skb: 37 callbacks suppressed
	[ +54.874778] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [426a12b48132d73e1b93e6a7fb5b3420868e384eb280274c6ee81ae6f6bcea12] <==
	{"level":"warn","ts":"2024-08-19T18:08:17.946687Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T18:08:17.957605Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T18:08:17.965773Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T18:08:17.970548Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T18:08:17.983766Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T18:08:17.993050Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T18:08:18.001240Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T18:08:18.005062Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T18:08:18.009296Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T18:08:18.015717Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T18:08:18.023263Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T18:08:18.030657Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T18:08:18.030878Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T18:08:18.035000Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T18:08:18.038308Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T18:08:18.043803Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T18:08:18.048288Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T18:08:18.051468Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T18:08:18.058488Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T18:08:18.062019Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T18:08:18.065321Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T18:08:18.069199Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T18:08:18.076400Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T18:08:18.083724Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T18:08:18.147160Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:08:18 up 7 min,  0 users,  load average: 0.12, 0.21, 0.10
	Linux ha-086149 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [66fd9c9b32e5e0294c89ebc2ee3c443fda85c40c3ad5b05d42357b4968e8d305] <==
	I0819 18:07:45.261728       1 main.go:322] Node ha-086149-m04 has CIDR [10.244.3.0/24] 
	I0819 18:07:55.262827       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0819 18:07:55.263073       1 main.go:299] handling current node
	I0819 18:07:55.263256       1 main.go:295] Handling node with IPs: map[192.168.39.167:{}]
	I0819 18:07:55.263292       1 main.go:322] Node ha-086149-m02 has CIDR [10.244.1.0/24] 
	I0819 18:07:55.263563       1 main.go:295] Handling node with IPs: map[192.168.39.121:{}]
	I0819 18:07:55.263631       1 main.go:322] Node ha-086149-m03 has CIDR [10.244.2.0/24] 
	I0819 18:07:55.263737       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0819 18:07:55.263757       1 main.go:322] Node ha-086149-m04 has CIDR [10.244.3.0/24] 
	I0819 18:08:05.253896       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0819 18:08:05.254037       1 main.go:322] Node ha-086149-m04 has CIDR [10.244.3.0/24] 
	I0819 18:08:05.254381       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0819 18:08:05.254480       1 main.go:299] handling current node
	I0819 18:08:05.254512       1 main.go:295] Handling node with IPs: map[192.168.39.167:{}]
	I0819 18:08:05.254581       1 main.go:322] Node ha-086149-m02 has CIDR [10.244.1.0/24] 
	I0819 18:08:05.254722       1 main.go:295] Handling node with IPs: map[192.168.39.121:{}]
	I0819 18:08:05.254798       1 main.go:322] Node ha-086149-m03 has CIDR [10.244.2.0/24] 
	I0819 18:08:15.262828       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0819 18:08:15.262855       1 main.go:299] handling current node
	I0819 18:08:15.262869       1 main.go:295] Handling node with IPs: map[192.168.39.167:{}]
	I0819 18:08:15.262873       1 main.go:322] Node ha-086149-m02 has CIDR [10.244.1.0/24] 
	I0819 18:08:15.262996       1 main.go:295] Handling node with IPs: map[192.168.39.121:{}]
	I0819 18:08:15.263001       1 main.go:322] Node ha-086149-m03 has CIDR [10.244.2.0/24] 
	I0819 18:08:15.263054       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0819 18:08:15.263059       1 main.go:322] Node ha-086149-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [2f729929f59edc9bd3c0ec7e99f4b984f94d6b6ec06edf83cf6dc3efba7a1fe5] <==
	I0819 18:01:54.114421       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0819 18:01:55.357825       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0819 18:01:55.375126       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0819 18:01:55.391682       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0819 18:01:59.566453       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0819 18:01:59.635965       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0819 18:02:54.657315       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 7.711µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0819 18:02:54.657529       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="8.713µs" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E0819 18:02:54.657674       1 wrap.go:53] "Timeout or abort while handling" logger="UnhandledError" method="POST" URI="/api/v1/namespaces/kube-system/events" auditID="ec643271-d886-4350-b64a-766e1fc4aac6"
	E0819 18:04:43.672292       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33690: use of closed network connection
	E0819 18:04:43.862868       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33702: use of closed network connection
	E0819 18:04:44.053642       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33720: use of closed network connection
	E0819 18:04:44.256373       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33748: use of closed network connection
	E0819 18:04:44.435625       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33778: use of closed network connection
	E0819 18:04:44.622757       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33798: use of closed network connection
	E0819 18:04:44.807275       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33818: use of closed network connection
	E0819 18:04:44.990252       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33842: use of closed network connection
	E0819 18:04:45.184405       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33870: use of closed network connection
	E0819 18:04:45.500574       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33894: use of closed network connection
	E0819 18:04:45.689892       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33914: use of closed network connection
	E0819 18:04:45.870462       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33918: use of closed network connection
	E0819 18:04:46.059994       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33928: use of closed network connection
	E0819 18:04:46.259491       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33946: use of closed network connection
	E0819 18:04:46.442269       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33962: use of closed network connection
	W0819 18:06:03.904029       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.121 192.168.39.249]
	
	
	==> kube-controller-manager [f5e746178ed6a3645979a5bd617a6d9f408bb3e6af232f31409c7e79a0c4f6b2] <==
	I0819 18:05:15.883996       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-086149-m04" podCIDRs=["10.244.3.0/24"]
	I0819 18:05:15.884215       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m04"
	I0819 18:05:15.884406       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m04"
	I0819 18:05:15.895063       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m04"
	I0819 18:05:16.210617       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m04"
	I0819 18:05:16.612403       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m04"
	I0819 18:05:17.233376       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m04"
	I0819 18:05:18.647419       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m04"
	I0819 18:05:18.700575       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m04"
	I0819 18:05:19.116237       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-086149-m04"
	I0819 18:05:19.117622       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m04"
	I0819 18:05:19.198317       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m04"
	I0819 18:05:26.055961       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m04"
	I0819 18:05:36.586361       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-086149-m04"
	I0819 18:05:36.586548       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m04"
	I0819 18:05:36.601512       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m04"
	I0819 18:05:37.235419       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m04"
	I0819 18:05:46.781675       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m04"
	I0819 18:06:28.676790       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m02"
	I0819 18:06:28.676967       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-086149-m04"
	I0819 18:06:28.701434       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m02"
	I0819 18:06:28.792920       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.179919ms"
	I0819 18:06:28.793219       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="48.676µs"
	I0819 18:06:29.186490       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m02"
	I0819 18:06:33.901862       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m02"
	
	
	==> kube-proxy [eb8cccc1568bbb207d2c7c285f3897a7a425cba60f4dfcf3e8daa8082fc38ef0] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 18:02:00.704338       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 18:02:00.716483       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.249"]
	E0819 18:02:00.716614       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 18:02:00.779410       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 18:02:00.779529       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 18:02:00.779616       1 server_linux.go:169] "Using iptables Proxier"
	I0819 18:02:00.785947       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 18:02:00.786306       1 server.go:483] "Version info" version="v1.31.0"
	I0819 18:02:00.786337       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 18:02:00.787880       1 config.go:197] "Starting service config controller"
	I0819 18:02:00.787929       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 18:02:00.787952       1 config.go:104] "Starting endpoint slice config controller"
	I0819 18:02:00.787959       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 18:02:00.792516       1 config.go:326] "Starting node config controller"
	I0819 18:02:00.792546       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 18:02:00.888032       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 18:02:00.888046       1 shared_informer.go:320] Caches are synced for service config
	I0819 18:02:00.892575       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d0e66231bf791048a9932068b5f28d8479613545885bea8e42cf9c79913ffccd] <==
	E0819 18:01:53.080178       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0819 18:01:53.109516       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 18:01:53.109646       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0819 18:01:53.176564       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 18:01:53.177436       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:01:53.432036       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 18:01:53.432205       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:01:53.436293       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0819 18:01:53.436338       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:01:53.438806       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 18:01:53.438845       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 18:01:53.498849       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 18:01:53.498955       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0819 18:01:55.179206       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 18:04:39.265941       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="6d2582b5-3fba-47da-8195-8e19e60aa593" pod="default/busybox-7dff88458-7t5wq" assumedNode="ha-086149-m03" currentNode="ha-086149-m02"
	E0819 18:04:39.285591       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-7t5wq\": pod busybox-7dff88458-7t5wq is already assigned to node \"ha-086149-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-7t5wq" node="ha-086149-m02"
	E0819 18:04:39.285704       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 6d2582b5-3fba-47da-8195-8e19e60aa593(default/busybox-7dff88458-7t5wq) was assumed on ha-086149-m02 but assigned to ha-086149-m03" pod="default/busybox-7dff88458-7t5wq"
	E0819 18:04:39.285739       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-7t5wq\": pod busybox-7dff88458-7t5wq is already assigned to node \"ha-086149-m03\"" pod="default/busybox-7dff88458-7t5wq"
	I0819 18:04:39.285788       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-7t5wq" node="ha-086149-m03"
	E0819 18:04:39.322665       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-fd2dw\": pod busybox-7dff88458-fd2dw is already assigned to node \"ha-086149\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-fd2dw" node="ha-086149"
	E0819 18:04:39.322837       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod f5e2f831-487f-4edb-b6c1-b391906a6d5b(default/busybox-7dff88458-fd2dw) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-fd2dw"
	E0819 18:04:39.322857       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-fd2dw\": pod busybox-7dff88458-fd2dw is already assigned to node \"ha-086149\"" pod="default/busybox-7dff88458-fd2dw"
	I0819 18:04:39.322879       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-fd2dw" node="ha-086149"
	E0819 18:04:39.328354       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-vgcdh\": pod busybox-7dff88458-vgcdh is already assigned to node \"ha-086149-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-vgcdh" node="ha-086149-m02"
	E0819 18:04:39.328444       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-vgcdh\": pod busybox-7dff88458-vgcdh is already assigned to node \"ha-086149-m02\"" pod="default/busybox-7dff88458-vgcdh"
	
	
	==> kubelet <==
	Aug 19 18:06:55 ha-086149 kubelet[1333]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 18:06:55 ha-086149 kubelet[1333]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 18:06:55 ha-086149 kubelet[1333]: E0819 18:06:55.420000    1333 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090815419618394,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:06:55 ha-086149 kubelet[1333]: E0819 18:06:55.420039    1333 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090815419618394,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:07:05 ha-086149 kubelet[1333]: E0819 18:07:05.422653    1333 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090825422346336,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:07:05 ha-086149 kubelet[1333]: E0819 18:07:05.422688    1333 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090825422346336,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:07:15 ha-086149 kubelet[1333]: E0819 18:07:15.424597    1333 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090835424307486,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:07:15 ha-086149 kubelet[1333]: E0819 18:07:15.424622    1333 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090835424307486,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:07:25 ha-086149 kubelet[1333]: E0819 18:07:25.426832    1333 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090845426421386,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:07:25 ha-086149 kubelet[1333]: E0819 18:07:25.427156    1333 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090845426421386,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:07:35 ha-086149 kubelet[1333]: E0819 18:07:35.429729    1333 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090855429342925,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:07:35 ha-086149 kubelet[1333]: E0819 18:07:35.429841    1333 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090855429342925,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:07:45 ha-086149 kubelet[1333]: E0819 18:07:45.431579    1333 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090865431219084,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:07:45 ha-086149 kubelet[1333]: E0819 18:07:45.431874    1333 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090865431219084,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:07:55 ha-086149 kubelet[1333]: E0819 18:07:55.296433    1333 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 18:07:55 ha-086149 kubelet[1333]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 18:07:55 ha-086149 kubelet[1333]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 18:07:55 ha-086149 kubelet[1333]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 18:07:55 ha-086149 kubelet[1333]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 18:07:55 ha-086149 kubelet[1333]: E0819 18:07:55.433545    1333 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090875433297701,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:07:55 ha-086149 kubelet[1333]: E0819 18:07:55.433588    1333 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090875433297701,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:08:05 ha-086149 kubelet[1333]: E0819 18:08:05.436021    1333 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090885435512201,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:08:05 ha-086149 kubelet[1333]: E0819 18:08:05.436420    1333 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090885435512201,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:08:15 ha-086149 kubelet[1333]: E0819 18:08:15.438228    1333 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090895437746904,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:08:15 ha-086149 kubelet[1333]: E0819 18:08:15.438622    1333 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090895437746904,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-086149 -n ha-086149
helpers_test.go:261: (dbg) Run:  kubectl --context ha-086149 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (142.10s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (61.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-086149 status -v=7 --alsologtostderr: exit status 3 (3.209531722s)

                                                
                                                
-- stdout --
	ha-086149
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-086149-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-086149-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-086149-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 18:08:22.705451  395602 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:08:22.705564  395602 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:08:22.705574  395602 out.go:358] Setting ErrFile to fd 2...
	I0819 18:08:22.705580  395602 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:08:22.705799  395602 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-372744/.minikube/bin
	I0819 18:08:22.706004  395602 out.go:352] Setting JSON to false
	I0819 18:08:22.706035  395602 mustload.go:65] Loading cluster: ha-086149
	I0819 18:08:22.706082  395602 notify.go:220] Checking for updates...
	I0819 18:08:22.706539  395602 config.go:182] Loaded profile config "ha-086149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:08:22.706558  395602 status.go:255] checking status of ha-086149 ...
	I0819 18:08:22.707042  395602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:22.707091  395602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:22.728558  395602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36913
	I0819 18:08:22.729047  395602 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:22.729698  395602 main.go:141] libmachine: Using API Version  1
	I0819 18:08:22.729719  395602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:22.730175  395602 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:22.730414  395602 main.go:141] libmachine: (ha-086149) Calling .GetState
	I0819 18:08:22.731962  395602 status.go:330] ha-086149 host status = "Running" (err=<nil>)
	I0819 18:08:22.731987  395602 host.go:66] Checking if "ha-086149" exists ...
	I0819 18:08:22.732277  395602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:22.732319  395602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:22.747766  395602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33657
	I0819 18:08:22.748310  395602 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:22.748937  395602 main.go:141] libmachine: Using API Version  1
	I0819 18:08:22.748959  395602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:22.749359  395602 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:22.749602  395602 main.go:141] libmachine: (ha-086149) Calling .GetIP
	I0819 18:08:22.752468  395602 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:08:22.752823  395602 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:08:22.752845  395602 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:08:22.753036  395602 host.go:66] Checking if "ha-086149" exists ...
	I0819 18:08:22.753522  395602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:22.753580  395602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:22.769068  395602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46833
	I0819 18:08:22.769492  395602 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:22.769943  395602 main.go:141] libmachine: Using API Version  1
	I0819 18:08:22.769967  395602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:22.770244  395602 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:22.770402  395602 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:08:22.770642  395602 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:08:22.770685  395602 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:08:22.773249  395602 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:08:22.773626  395602 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:08:22.773661  395602 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:08:22.773804  395602 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:08:22.774001  395602 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:08:22.774176  395602 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:08:22.774330  395602 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/id_rsa Username:docker}
	I0819 18:08:22.855389  395602 ssh_runner.go:195] Run: systemctl --version
	I0819 18:08:22.861888  395602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:08:22.877818  395602 kubeconfig.go:125] found "ha-086149" server: "https://192.168.39.254:8443"
	I0819 18:08:22.877855  395602 api_server.go:166] Checking apiserver status ...
	I0819 18:08:22.877900  395602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:08:22.892700  395602 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1121/cgroup
	W0819 18:08:22.903989  395602 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1121/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 18:08:22.904046  395602 ssh_runner.go:195] Run: ls
	I0819 18:08:22.909481  395602 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 18:08:22.913813  395602 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 18:08:22.913840  395602 status.go:422] ha-086149 apiserver status = Running (err=<nil>)
	I0819 18:08:22.913850  395602 status.go:257] ha-086149 status: &{Name:ha-086149 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 18:08:22.913867  395602 status.go:255] checking status of ha-086149-m02 ...
	I0819 18:08:22.914190  395602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:22.914235  395602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:22.932405  395602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45257
	I0819 18:08:22.933002  395602 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:22.933475  395602 main.go:141] libmachine: Using API Version  1
	I0819 18:08:22.933497  395602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:22.933857  395602 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:22.934083  395602 main.go:141] libmachine: (ha-086149-m02) Calling .GetState
	I0819 18:08:22.935871  395602 status.go:330] ha-086149-m02 host status = "Running" (err=<nil>)
	I0819 18:08:22.935903  395602 host.go:66] Checking if "ha-086149-m02" exists ...
	I0819 18:08:22.936324  395602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:22.936372  395602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:22.951698  395602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40999
	I0819 18:08:22.952141  395602 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:22.952630  395602 main.go:141] libmachine: Using API Version  1
	I0819 18:08:22.952651  395602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:22.952984  395602 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:22.953249  395602 main.go:141] libmachine: (ha-086149-m02) Calling .GetIP
	I0819 18:08:22.956244  395602 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:08:22.956709  395602 main.go:141] libmachine: (ha-086149-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:15 +0000 UTC Type:0 Mac:52:54:00:b9:44:0e Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-086149-m02 Clientid:01:52:54:00:b9:44:0e}
	I0819 18:08:22.956736  395602 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:08:22.956909  395602 host.go:66] Checking if "ha-086149-m02" exists ...
	I0819 18:08:22.957387  395602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:22.957436  395602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:22.972624  395602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43035
	I0819 18:08:22.973155  395602 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:22.973653  395602 main.go:141] libmachine: Using API Version  1
	I0819 18:08:22.973676  395602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:22.973981  395602 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:22.974215  395602 main.go:141] libmachine: (ha-086149-m02) Calling .DriverName
	I0819 18:08:22.974394  395602 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:08:22.974419  395602 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHHostname
	I0819 18:08:22.977725  395602 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:08:22.978254  395602 main.go:141] libmachine: (ha-086149-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:15 +0000 UTC Type:0 Mac:52:54:00:b9:44:0e Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-086149-m02 Clientid:01:52:54:00:b9:44:0e}
	I0819 18:08:22.978285  395602 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:08:22.978430  395602 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHPort
	I0819 18:08:22.978592  395602 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHKeyPath
	I0819 18:08:22.978712  395602 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHUsername
	I0819 18:08:22.978897  395602 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m02/id_rsa Username:docker}
	W0819 18:08:25.508013  395602 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.167:22: connect: no route to host
	W0819 18:08:25.508123  395602 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.167:22: connect: no route to host
	E0819 18:08:25.508160  395602 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.167:22: connect: no route to host
	I0819 18:08:25.508169  395602 status.go:257] ha-086149-m02 status: &{Name:ha-086149-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0819 18:08:25.508192  395602 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.167:22: connect: no route to host
	I0819 18:08:25.508217  395602 status.go:255] checking status of ha-086149-m03 ...
	I0819 18:08:25.508627  395602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:25.508710  395602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:25.523883  395602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37327
	I0819 18:08:25.524381  395602 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:25.524881  395602 main.go:141] libmachine: Using API Version  1
	I0819 18:08:25.524904  395602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:25.525219  395602 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:25.525388  395602 main.go:141] libmachine: (ha-086149-m03) Calling .GetState
	I0819 18:08:25.527040  395602 status.go:330] ha-086149-m03 host status = "Running" (err=<nil>)
	I0819 18:08:25.527058  395602 host.go:66] Checking if "ha-086149-m03" exists ...
	I0819 18:08:25.527354  395602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:25.527395  395602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:25.543992  395602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35035
	I0819 18:08:25.544523  395602 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:25.545122  395602 main.go:141] libmachine: Using API Version  1
	I0819 18:08:25.545151  395602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:25.545489  395602 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:25.545674  395602 main.go:141] libmachine: (ha-086149-m03) Calling .GetIP
	I0819 18:08:25.548237  395602 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:08:25.548713  395602 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:08:25.548740  395602 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:08:25.548877  395602 host.go:66] Checking if "ha-086149-m03" exists ...
	I0819 18:08:25.549225  395602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:25.549268  395602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:25.565287  395602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37883
	I0819 18:08:25.565664  395602 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:25.566129  395602 main.go:141] libmachine: Using API Version  1
	I0819 18:08:25.566148  395602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:25.566462  395602 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:25.566714  395602 main.go:141] libmachine: (ha-086149-m03) Calling .DriverName
	I0819 18:08:25.566897  395602 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:08:25.566921  395602 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHHostname
	I0819 18:08:25.569620  395602 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:08:25.569998  395602 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:08:25.570030  395602 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:08:25.570156  395602 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHPort
	I0819 18:08:25.570353  395602 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHKeyPath
	I0819 18:08:25.570516  395602 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHUsername
	I0819 18:08:25.570655  395602 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m03/id_rsa Username:docker}
	I0819 18:08:25.651920  395602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:08:25.667427  395602 kubeconfig.go:125] found "ha-086149" server: "https://192.168.39.254:8443"
	I0819 18:08:25.667461  395602 api_server.go:166] Checking apiserver status ...
	I0819 18:08:25.667500  395602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:08:25.681517  395602 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1440/cgroup
	W0819 18:08:25.691102  395602 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1440/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 18:08:25.691162  395602 ssh_runner.go:195] Run: ls
	I0819 18:08:25.695866  395602 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 18:08:25.701286  395602 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 18:08:25.701320  395602 status.go:422] ha-086149-m03 apiserver status = Running (err=<nil>)
	I0819 18:08:25.701329  395602 status.go:257] ha-086149-m03 status: &{Name:ha-086149-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 18:08:25.701353  395602 status.go:255] checking status of ha-086149-m04 ...
	I0819 18:08:25.701626  395602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:25.701661  395602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:25.717514  395602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44653
	I0819 18:08:25.718016  395602 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:25.718554  395602 main.go:141] libmachine: Using API Version  1
	I0819 18:08:25.718574  395602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:25.718854  395602 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:25.719053  395602 main.go:141] libmachine: (ha-086149-m04) Calling .GetState
	I0819 18:08:25.720546  395602 status.go:330] ha-086149-m04 host status = "Running" (err=<nil>)
	I0819 18:08:25.720566  395602 host.go:66] Checking if "ha-086149-m04" exists ...
	I0819 18:08:25.720977  395602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:25.721020  395602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:25.737065  395602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43387
	I0819 18:08:25.737553  395602 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:25.738047  395602 main.go:141] libmachine: Using API Version  1
	I0819 18:08:25.738078  395602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:25.738357  395602 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:25.738534  395602 main.go:141] libmachine: (ha-086149-m04) Calling .GetIP
	I0819 18:08:25.741505  395602 main.go:141] libmachine: (ha-086149-m04) DBG | domain ha-086149-m04 has defined MAC address 52:54:00:03:a4:7a in network mk-ha-086149
	I0819 18:08:25.741998  395602 main.go:141] libmachine: (ha-086149-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a4:7a", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:05:01 +0000 UTC Type:0 Mac:52:54:00:03:a4:7a Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-086149-m04 Clientid:01:52:54:00:03:a4:7a}
	I0819 18:08:25.742058  395602 main.go:141] libmachine: (ha-086149-m04) DBG | domain ha-086149-m04 has defined IP address 192.168.39.173 and MAC address 52:54:00:03:a4:7a in network mk-ha-086149
	I0819 18:08:25.742218  395602 host.go:66] Checking if "ha-086149-m04" exists ...
	I0819 18:08:25.742541  395602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:25.742586  395602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:25.759773  395602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35969
	I0819 18:08:25.760247  395602 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:25.760789  395602 main.go:141] libmachine: Using API Version  1
	I0819 18:08:25.760812  395602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:25.761212  395602 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:25.761419  395602 main.go:141] libmachine: (ha-086149-m04) Calling .DriverName
	I0819 18:08:25.761637  395602 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:08:25.761662  395602 main.go:141] libmachine: (ha-086149-m04) Calling .GetSSHHostname
	I0819 18:08:25.764773  395602 main.go:141] libmachine: (ha-086149-m04) DBG | domain ha-086149-m04 has defined MAC address 52:54:00:03:a4:7a in network mk-ha-086149
	I0819 18:08:25.765181  395602 main.go:141] libmachine: (ha-086149-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a4:7a", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:05:01 +0000 UTC Type:0 Mac:52:54:00:03:a4:7a Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-086149-m04 Clientid:01:52:54:00:03:a4:7a}
	I0819 18:08:25.765200  395602 main.go:141] libmachine: (ha-086149-m04) DBG | domain ha-086149-m04 has defined IP address 192.168.39.173 and MAC address 52:54:00:03:a4:7a in network mk-ha-086149
	I0819 18:08:25.765367  395602 main.go:141] libmachine: (ha-086149-m04) Calling .GetSSHPort
	I0819 18:08:25.765518  395602 main.go:141] libmachine: (ha-086149-m04) Calling .GetSSHKeyPath
	I0819 18:08:25.765647  395602 main.go:141] libmachine: (ha-086149-m04) Calling .GetSSHUsername
	I0819 18:08:25.765827  395602 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m04/id_rsa Username:docker}
	I0819 18:08:25.855376  395602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:08:25.870274  395602 status.go:257] ha-086149-m04 status: &{Name:ha-086149-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
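The stderr above shows how the status probe decides the apiserver on ha-086149-m03 is healthy: the freezer-cgroup lookup for the kube-apiserver PID exits with status 1 (typical on cgroup v2 hosts, where /proc/<pid>/cgroup has no per-controller freezer line), so the check falls back to querying https://192.168.39.254:8443/healthz, which answers 200 "ok". The Go sketch below reproduces only that healthz probe; it is illustrative, and the InsecureSkipVerify transport is a stand-in for the real client-certificate handling, which the log does not show.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHealthz issues a GET against a kube-apiserver /healthz endpoint and
// reports whether it returned HTTP 200, mirroring the "returned 200: ok"
// lines in the log. InsecureSkipVerify is only so the sketch stays
// self-contained; real callers would trust the cluster CA instead.
func probeHealthz(url string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
	return resp.StatusCode == http.StatusOK, nil
}

func main() {
	ok, err := probeHealthz("https://192.168.39.254:8443/healthz")
	fmt.Println(ok, err)
}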
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-086149 status -v=7 --alsologtostderr: exit status 3 (5.165495639s)

                                                
                                                
-- stdout --
	ha-086149
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-086149-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-086149-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-086149-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 18:08:26.894531  395702 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:08:26.894659  395702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:08:26.894667  395702 out.go:358] Setting ErrFile to fd 2...
	I0819 18:08:26.894672  395702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:08:26.894856  395702 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-372744/.minikube/bin
	I0819 18:08:26.895093  395702 out.go:352] Setting JSON to false
	I0819 18:08:26.895127  395702 mustload.go:65] Loading cluster: ha-086149
	I0819 18:08:26.895172  395702 notify.go:220] Checking for updates...
	I0819 18:08:26.895510  395702 config.go:182] Loaded profile config "ha-086149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:08:26.895525  395702 status.go:255] checking status of ha-086149 ...
	I0819 18:08:26.895965  395702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:26.896021  395702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:26.911572  395702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34863
	I0819 18:08:26.912017  395702 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:26.912629  395702 main.go:141] libmachine: Using API Version  1
	I0819 18:08:26.912650  395702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:26.913054  395702 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:26.913317  395702 main.go:141] libmachine: (ha-086149) Calling .GetState
	I0819 18:08:26.915072  395702 status.go:330] ha-086149 host status = "Running" (err=<nil>)
	I0819 18:08:26.915093  395702 host.go:66] Checking if "ha-086149" exists ...
	I0819 18:08:26.915393  395702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:26.915442  395702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:26.930646  395702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42751
	I0819 18:08:26.931020  395702 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:26.931473  395702 main.go:141] libmachine: Using API Version  1
	I0819 18:08:26.931506  395702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:26.931860  395702 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:26.932058  395702 main.go:141] libmachine: (ha-086149) Calling .GetIP
	I0819 18:08:26.934555  395702 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:08:26.934912  395702 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:08:26.934945  395702 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:08:26.935052  395702 host.go:66] Checking if "ha-086149" exists ...
	I0819 18:08:26.935340  395702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:26.935377  395702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:26.950776  395702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33271
	I0819 18:08:26.951202  395702 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:26.951755  395702 main.go:141] libmachine: Using API Version  1
	I0819 18:08:26.951809  395702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:26.952155  395702 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:26.952373  395702 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:08:26.952620  395702 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:08:26.952658  395702 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:08:26.955512  395702 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:08:26.956042  395702 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:08:26.956081  395702 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:08:26.956198  395702 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:08:26.956372  395702 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:08:26.956543  395702 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:08:26.956663  395702 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/id_rsa Username:docker}
	I0819 18:08:27.043621  395702 ssh_runner.go:195] Run: systemctl --version
	I0819 18:08:27.050584  395702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:08:27.068394  395702 kubeconfig.go:125] found "ha-086149" server: "https://192.168.39.254:8443"
	I0819 18:08:27.068433  395702 api_server.go:166] Checking apiserver status ...
	I0819 18:08:27.068478  395702 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:08:27.084125  395702 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1121/cgroup
	W0819 18:08:27.094171  395702 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1121/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 18:08:27.094243  395702 ssh_runner.go:195] Run: ls
	I0819 18:08:27.098874  395702 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 18:08:27.105986  395702 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 18:08:27.106011  395702 status.go:422] ha-086149 apiserver status = Running (err=<nil>)
	I0819 18:08:27.106021  395702 status.go:257] ha-086149 status: &{Name:ha-086149 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 18:08:27.106044  395702 status.go:255] checking status of ha-086149-m02 ...
	I0819 18:08:27.106404  395702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:27.106449  395702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:27.121736  395702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33355
	I0819 18:08:27.122309  395702 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:27.122818  395702 main.go:141] libmachine: Using API Version  1
	I0819 18:08:27.122847  395702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:27.123223  395702 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:27.123424  395702 main.go:141] libmachine: (ha-086149-m02) Calling .GetState
	I0819 18:08:27.124961  395702 status.go:330] ha-086149-m02 host status = "Running" (err=<nil>)
	I0819 18:08:27.124981  395702 host.go:66] Checking if "ha-086149-m02" exists ...
	I0819 18:08:27.125263  395702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:27.125297  395702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:27.140231  395702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35899
	I0819 18:08:27.140605  395702 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:27.141076  395702 main.go:141] libmachine: Using API Version  1
	I0819 18:08:27.141097  395702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:27.141387  395702 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:27.141611  395702 main.go:141] libmachine: (ha-086149-m02) Calling .GetIP
	I0819 18:08:27.144241  395702 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:08:27.144664  395702 main.go:141] libmachine: (ha-086149-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:15 +0000 UTC Type:0 Mac:52:54:00:b9:44:0e Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-086149-m02 Clientid:01:52:54:00:b9:44:0e}
	I0819 18:08:27.144688  395702 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:08:27.144821  395702 host.go:66] Checking if "ha-086149-m02" exists ...
	I0819 18:08:27.145155  395702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:27.145202  395702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:27.160188  395702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34677
	I0819 18:08:27.160669  395702 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:27.161125  395702 main.go:141] libmachine: Using API Version  1
	I0819 18:08:27.161155  395702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:27.161476  395702 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:27.161663  395702 main.go:141] libmachine: (ha-086149-m02) Calling .DriverName
	I0819 18:08:27.161847  395702 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:08:27.161881  395702 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHHostname
	I0819 18:08:27.164579  395702 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:08:27.165070  395702 main.go:141] libmachine: (ha-086149-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:15 +0000 UTC Type:0 Mac:52:54:00:b9:44:0e Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-086149-m02 Clientid:01:52:54:00:b9:44:0e}
	I0819 18:08:27.165099  395702 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:08:27.165239  395702 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHPort
	I0819 18:08:27.165387  395702 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHKeyPath
	I0819 18:08:27.165515  395702 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHUsername
	I0819 18:08:27.165643  395702 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m02/id_rsa Username:docker}
	W0819 18:08:28.583972  395702 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.167:22: connect: no route to host
	I0819 18:08:28.584037  395702 retry.go:31] will retry after 357.773926ms: dial tcp 192.168.39.167:22: connect: no route to host
	W0819 18:08:31.652036  395702 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.167:22: connect: no route to host
	W0819 18:08:31.652152  395702 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.167:22: connect: no route to host
	E0819 18:08:31.652169  395702 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.167:22: connect: no route to host
	I0819 18:08:31.652177  395702 status.go:257] ha-086149-m02 status: &{Name:ha-086149-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0819 18:08:31.652210  395702 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.167:22: connect: no route to host
	I0819 18:08:31.652219  395702 status.go:255] checking status of ha-086149-m03 ...
	I0819 18:08:31.652532  395702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:31.652575  395702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:31.667831  395702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35475
	I0819 18:08:31.668300  395702 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:31.668770  395702 main.go:141] libmachine: Using API Version  1
	I0819 18:08:31.668793  395702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:31.669154  395702 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:31.669336  395702 main.go:141] libmachine: (ha-086149-m03) Calling .GetState
	I0819 18:08:31.671009  395702 status.go:330] ha-086149-m03 host status = "Running" (err=<nil>)
	I0819 18:08:31.671026  395702 host.go:66] Checking if "ha-086149-m03" exists ...
	I0819 18:08:31.671335  395702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:31.671375  395702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:31.686756  395702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34727
	I0819 18:08:31.687147  395702 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:31.687624  395702 main.go:141] libmachine: Using API Version  1
	I0819 18:08:31.687646  395702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:31.688027  395702 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:31.688248  395702 main.go:141] libmachine: (ha-086149-m03) Calling .GetIP
	I0819 18:08:31.691341  395702 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:08:31.691790  395702 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:08:31.691813  395702 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:08:31.692040  395702 host.go:66] Checking if "ha-086149-m03" exists ...
	I0819 18:08:31.692346  395702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:31.692400  395702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:31.707574  395702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38613
	I0819 18:08:31.708188  395702 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:31.708676  395702 main.go:141] libmachine: Using API Version  1
	I0819 18:08:31.708707  395702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:31.709044  395702 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:31.709270  395702 main.go:141] libmachine: (ha-086149-m03) Calling .DriverName
	I0819 18:08:31.709451  395702 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:08:31.709472  395702 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHHostname
	I0819 18:08:31.712456  395702 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:08:31.712901  395702 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:08:31.712944  395702 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:08:31.713184  395702 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHPort
	I0819 18:08:31.713390  395702 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHKeyPath
	I0819 18:08:31.713532  395702 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHUsername
	I0819 18:08:31.713685  395702 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m03/id_rsa Username:docker}
	I0819 18:08:31.795097  395702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:08:31.811108  395702 kubeconfig.go:125] found "ha-086149" server: "https://192.168.39.254:8443"
	I0819 18:08:31.811140  395702 api_server.go:166] Checking apiserver status ...
	I0819 18:08:31.811181  395702 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:08:31.825880  395702 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1440/cgroup
	W0819 18:08:31.837071  395702 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1440/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 18:08:31.837139  395702 ssh_runner.go:195] Run: ls
	I0819 18:08:31.842022  395702 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 18:08:31.846570  395702 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 18:08:31.846602  395702 status.go:422] ha-086149-m03 apiserver status = Running (err=<nil>)
	I0819 18:08:31.846612  395702 status.go:257] ha-086149-m03 status: &{Name:ha-086149-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 18:08:31.846629  395702 status.go:255] checking status of ha-086149-m04 ...
	I0819 18:08:31.847030  395702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:31.847073  395702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:31.862229  395702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40087
	I0819 18:08:31.862707  395702 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:31.863290  395702 main.go:141] libmachine: Using API Version  1
	I0819 18:08:31.863317  395702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:31.863651  395702 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:31.863877  395702 main.go:141] libmachine: (ha-086149-m04) Calling .GetState
	I0819 18:08:31.865312  395702 status.go:330] ha-086149-m04 host status = "Running" (err=<nil>)
	I0819 18:08:31.865328  395702 host.go:66] Checking if "ha-086149-m04" exists ...
	I0819 18:08:31.865612  395702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:31.865653  395702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:31.881560  395702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44553
	I0819 18:08:31.881957  395702 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:31.882449  395702 main.go:141] libmachine: Using API Version  1
	I0819 18:08:31.882476  395702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:31.882784  395702 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:31.883002  395702 main.go:141] libmachine: (ha-086149-m04) Calling .GetIP
	I0819 18:08:31.885507  395702 main.go:141] libmachine: (ha-086149-m04) DBG | domain ha-086149-m04 has defined MAC address 52:54:00:03:a4:7a in network mk-ha-086149
	I0819 18:08:31.885914  395702 main.go:141] libmachine: (ha-086149-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a4:7a", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:05:01 +0000 UTC Type:0 Mac:52:54:00:03:a4:7a Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-086149-m04 Clientid:01:52:54:00:03:a4:7a}
	I0819 18:08:31.885950  395702 main.go:141] libmachine: (ha-086149-m04) DBG | domain ha-086149-m04 has defined IP address 192.168.39.173 and MAC address 52:54:00:03:a4:7a in network mk-ha-086149
	I0819 18:08:31.886089  395702 host.go:66] Checking if "ha-086149-m04" exists ...
	I0819 18:08:31.886403  395702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:31.886439  395702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:31.901785  395702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41013
	I0819 18:08:31.902270  395702 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:31.902732  395702 main.go:141] libmachine: Using API Version  1
	I0819 18:08:31.902751  395702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:31.903149  395702 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:31.903374  395702 main.go:141] libmachine: (ha-086149-m04) Calling .DriverName
	I0819 18:08:31.903627  395702 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:08:31.903650  395702 main.go:141] libmachine: (ha-086149-m04) Calling .GetSSHHostname
	I0819 18:08:31.906863  395702 main.go:141] libmachine: (ha-086149-m04) DBG | domain ha-086149-m04 has defined MAC address 52:54:00:03:a4:7a in network mk-ha-086149
	I0819 18:08:31.907321  395702 main.go:141] libmachine: (ha-086149-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a4:7a", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:05:01 +0000 UTC Type:0 Mac:52:54:00:03:a4:7a Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-086149-m04 Clientid:01:52:54:00:03:a4:7a}
	I0819 18:08:31.907360  395702 main.go:141] libmachine: (ha-086149-m04) DBG | domain ha-086149-m04 has defined IP address 192.168.39.173 and MAC address 52:54:00:03:a4:7a in network mk-ha-086149
	I0819 18:08:31.907482  395702 main.go:141] libmachine: (ha-086149-m04) Calling .GetSSHPort
	I0819 18:08:31.907698  395702 main.go:141] libmachine: (ha-086149-m04) Calling .GetSSHKeyPath
	I0819 18:08:31.907859  395702 main.go:141] libmachine: (ha-086149-m04) Calling .GetSSHUsername
	I0819 18:08:31.908023  395702 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m04/id_rsa Username:docker}
	I0819 18:08:31.995088  395702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:08:32.009571  395702 status.go:257] ha-086149-m04 status: &{Name:ha-086149-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
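The second run fails the same way as the first: every SSH dial to ha-086149-m02 at 192.168.39.167:22 returns "connect: no route to host", the status code retries briefly, and then the node is reported as Host:Error with kubelet and apiserver Nonexistent. A minimal sketch of that dial-and-retry pattern follows; the attempt count and backoff are illustrative values, not the ones minikube actually uses.

package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry attempts a TCP connection to addr, retrying with a short
// backoff, similar to the "dial failure (will retry)" lines in the log.
func dialWithRetry(addr string, attempts int, backoff time.Duration) (net.Conn, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			return conn, nil
		}
		lastErr = err
		fmt.Printf("dial failure (will retry): %v\n", err)
		time.Sleep(backoff)
	}
	return nil, fmt.Errorf("giving up after %d attempts: %w", attempts, lastErr)
}

func main() {
	if _, err := dialWithRetry("192.168.39.167:22", 3, 500*time.Millisecond); err != nil {
		// This is the condition the status command surfaces as Host:Error /
		// Kubelet:Nonexistent / APIServer:Nonexistent for ha-086149-m02.
		fmt.Println(err)
	}
}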
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-086149 status -v=7 --alsologtostderr: exit status 3 (4.859283947s)

                                                
                                                
-- stdout --
	ha-086149
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-086149-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-086149-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-086149-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 18:08:33.334191  395804 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:08:33.334382  395804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:08:33.334396  395804 out.go:358] Setting ErrFile to fd 2...
	I0819 18:08:33.334401  395804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:08:33.334607  395804 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-372744/.minikube/bin
	I0819 18:08:33.334843  395804 out.go:352] Setting JSON to false
	I0819 18:08:33.334875  395804 mustload.go:65] Loading cluster: ha-086149
	I0819 18:08:33.334981  395804 notify.go:220] Checking for updates...
	I0819 18:08:33.335370  395804 config.go:182] Loaded profile config "ha-086149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:08:33.335391  395804 status.go:255] checking status of ha-086149 ...
	I0819 18:08:33.335926  395804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:33.336020  395804 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:33.354206  395804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42563
	I0819 18:08:33.354705  395804 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:33.355265  395804 main.go:141] libmachine: Using API Version  1
	I0819 18:08:33.355287  395804 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:33.355846  395804 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:33.356083  395804 main.go:141] libmachine: (ha-086149) Calling .GetState
	I0819 18:08:33.357775  395804 status.go:330] ha-086149 host status = "Running" (err=<nil>)
	I0819 18:08:33.357808  395804 host.go:66] Checking if "ha-086149" exists ...
	I0819 18:08:33.358176  395804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:33.358229  395804 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:33.374128  395804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46015
	I0819 18:08:33.374664  395804 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:33.375202  395804 main.go:141] libmachine: Using API Version  1
	I0819 18:08:33.375222  395804 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:33.375557  395804 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:33.375754  395804 main.go:141] libmachine: (ha-086149) Calling .GetIP
	I0819 18:08:33.378298  395804 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:08:33.378765  395804 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:08:33.378802  395804 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:08:33.378934  395804 host.go:66] Checking if "ha-086149" exists ...
	I0819 18:08:33.379273  395804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:33.379317  395804 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:33.394310  395804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37001
	I0819 18:08:33.394707  395804 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:33.395234  395804 main.go:141] libmachine: Using API Version  1
	I0819 18:08:33.395259  395804 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:33.395576  395804 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:33.395790  395804 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:08:33.395970  395804 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:08:33.396006  395804 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:08:33.398860  395804 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:08:33.399303  395804 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:08:33.399337  395804 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:08:33.399486  395804 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:08:33.399665  395804 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:08:33.399881  395804 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:08:33.400086  395804 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/id_rsa Username:docker}
	I0819 18:08:33.479886  395804 ssh_runner.go:195] Run: systemctl --version
	I0819 18:08:33.486197  395804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:08:33.500309  395804 kubeconfig.go:125] found "ha-086149" server: "https://192.168.39.254:8443"
	I0819 18:08:33.500345  395804 api_server.go:166] Checking apiserver status ...
	I0819 18:08:33.500385  395804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:08:33.513977  395804 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1121/cgroup
	W0819 18:08:33.525118  395804 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1121/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 18:08:33.525185  395804 ssh_runner.go:195] Run: ls
	I0819 18:08:33.530297  395804 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 18:08:33.535293  395804 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 18:08:33.535323  395804 status.go:422] ha-086149 apiserver status = Running (err=<nil>)
	I0819 18:08:33.535335  395804 status.go:257] ha-086149 status: &{Name:ha-086149 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 18:08:33.535356  395804 status.go:255] checking status of ha-086149-m02 ...
	I0819 18:08:33.535742  395804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:33.535792  395804 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:33.551343  395804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44049
	I0819 18:08:33.551979  395804 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:33.552520  395804 main.go:141] libmachine: Using API Version  1
	I0819 18:08:33.552544  395804 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:33.552924  395804 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:33.553118  395804 main.go:141] libmachine: (ha-086149-m02) Calling .GetState
	I0819 18:08:33.554734  395804 status.go:330] ha-086149-m02 host status = "Running" (err=<nil>)
	I0819 18:08:33.554765  395804 host.go:66] Checking if "ha-086149-m02" exists ...
	I0819 18:08:33.555064  395804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:33.555097  395804 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:33.571946  395804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44567
	I0819 18:08:33.572419  395804 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:33.572898  395804 main.go:141] libmachine: Using API Version  1
	I0819 18:08:33.572943  395804 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:33.573270  395804 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:33.573479  395804 main.go:141] libmachine: (ha-086149-m02) Calling .GetIP
	I0819 18:08:33.576583  395804 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:08:33.577109  395804 main.go:141] libmachine: (ha-086149-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:15 +0000 UTC Type:0 Mac:52:54:00:b9:44:0e Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-086149-m02 Clientid:01:52:54:00:b9:44:0e}
	I0819 18:08:33.577139  395804 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:08:33.577284  395804 host.go:66] Checking if "ha-086149-m02" exists ...
	I0819 18:08:33.577583  395804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:33.577626  395804 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:33.593837  395804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43635
	I0819 18:08:33.594374  395804 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:33.594872  395804 main.go:141] libmachine: Using API Version  1
	I0819 18:08:33.594907  395804 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:33.595256  395804 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:33.595519  395804 main.go:141] libmachine: (ha-086149-m02) Calling .DriverName
	I0819 18:08:33.595760  395804 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:08:33.595785  395804 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHHostname
	I0819 18:08:33.598651  395804 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:08:33.599037  395804 main.go:141] libmachine: (ha-086149-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:15 +0000 UTC Type:0 Mac:52:54:00:b9:44:0e Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-086149-m02 Clientid:01:52:54:00:b9:44:0e}
	I0819 18:08:33.599068  395804 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:08:33.599211  395804 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHPort
	I0819 18:08:33.599383  395804 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHKeyPath
	I0819 18:08:33.599536  395804 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHUsername
	I0819 18:08:33.599661  395804 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m02/id_rsa Username:docker}
	W0819 18:08:34.724087  395804 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.167:22: connect: no route to host
	I0819 18:08:34.724165  395804 retry.go:31] will retry after 170.566678ms: dial tcp 192.168.39.167:22: connect: no route to host
	W0819 18:08:37.796053  395804 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.167:22: connect: no route to host
	W0819 18:08:37.796151  395804 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.167:22: connect: no route to host
	E0819 18:08:37.796173  395804 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.167:22: connect: no route to host
	I0819 18:08:37.796188  395804 status.go:257] ha-086149-m02 status: &{Name:ha-086149-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0819 18:08:37.796246  395804 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.167:22: connect: no route to host
	I0819 18:08:37.796258  395804 status.go:255] checking status of ha-086149-m03 ...
	I0819 18:08:37.796741  395804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:37.796826  395804 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:37.812098  395804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44859
	I0819 18:08:37.812591  395804 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:37.813062  395804 main.go:141] libmachine: Using API Version  1
	I0819 18:08:37.813098  395804 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:37.813414  395804 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:37.813605  395804 main.go:141] libmachine: (ha-086149-m03) Calling .GetState
	I0819 18:08:37.815362  395804 status.go:330] ha-086149-m03 host status = "Running" (err=<nil>)
	I0819 18:08:37.815384  395804 host.go:66] Checking if "ha-086149-m03" exists ...
	I0819 18:08:37.815695  395804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:37.815740  395804 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:37.830278  395804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40557
	I0819 18:08:37.830703  395804 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:37.831165  395804 main.go:141] libmachine: Using API Version  1
	I0819 18:08:37.831191  395804 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:37.831472  395804 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:37.831664  395804 main.go:141] libmachine: (ha-086149-m03) Calling .GetIP
	I0819 18:08:37.834329  395804 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:08:37.834703  395804 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:08:37.834739  395804 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:08:37.834917  395804 host.go:66] Checking if "ha-086149-m03" exists ...
	I0819 18:08:37.835217  395804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:37.835253  395804 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:37.851133  395804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41305
	I0819 18:08:37.851588  395804 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:37.852033  395804 main.go:141] libmachine: Using API Version  1
	I0819 18:08:37.852057  395804 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:37.852420  395804 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:37.852707  395804 main.go:141] libmachine: (ha-086149-m03) Calling .DriverName
	I0819 18:08:37.852896  395804 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:08:37.852938  395804 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHHostname
	I0819 18:08:37.855547  395804 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:08:37.855985  395804 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:08:37.856018  395804 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:08:37.856155  395804 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHPort
	I0819 18:08:37.856325  395804 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHKeyPath
	I0819 18:08:37.856463  395804 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHUsername
	I0819 18:08:37.856628  395804 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m03/id_rsa Username:docker}
	I0819 18:08:37.935472  395804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:08:37.951309  395804 kubeconfig.go:125] found "ha-086149" server: "https://192.168.39.254:8443"
	I0819 18:08:37.951341  395804 api_server.go:166] Checking apiserver status ...
	I0819 18:08:37.951411  395804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:08:37.966723  395804 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1440/cgroup
	W0819 18:08:37.979026  395804 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1440/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 18:08:37.979090  395804 ssh_runner.go:195] Run: ls
	I0819 18:08:37.983733  395804 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 18:08:37.988303  395804 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 18:08:37.988330  395804 status.go:422] ha-086149-m03 apiserver status = Running (err=<nil>)
	I0819 18:08:37.988342  395804 status.go:257] ha-086149-m03 status: &{Name:ha-086149-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 18:08:37.988368  395804 status.go:255] checking status of ha-086149-m04 ...
	I0819 18:08:37.988689  395804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:37.988728  395804 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:38.004033  395804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46745
	I0819 18:08:38.004528  395804 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:38.005000  395804 main.go:141] libmachine: Using API Version  1
	I0819 18:08:38.005024  395804 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:38.005335  395804 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:38.005517  395804 main.go:141] libmachine: (ha-086149-m04) Calling .GetState
	I0819 18:08:38.006963  395804 status.go:330] ha-086149-m04 host status = "Running" (err=<nil>)
	I0819 18:08:38.006980  395804 host.go:66] Checking if "ha-086149-m04" exists ...
	I0819 18:08:38.007264  395804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:38.007296  395804 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:38.022383  395804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35799
	I0819 18:08:38.022824  395804 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:38.023296  395804 main.go:141] libmachine: Using API Version  1
	I0819 18:08:38.023319  395804 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:38.023707  395804 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:38.023914  395804 main.go:141] libmachine: (ha-086149-m04) Calling .GetIP
	I0819 18:08:38.026994  395804 main.go:141] libmachine: (ha-086149-m04) DBG | domain ha-086149-m04 has defined MAC address 52:54:00:03:a4:7a in network mk-ha-086149
	I0819 18:08:38.027389  395804 main.go:141] libmachine: (ha-086149-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a4:7a", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:05:01 +0000 UTC Type:0 Mac:52:54:00:03:a4:7a Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-086149-m04 Clientid:01:52:54:00:03:a4:7a}
	I0819 18:08:38.027412  395804 main.go:141] libmachine: (ha-086149-m04) DBG | domain ha-086149-m04 has defined IP address 192.168.39.173 and MAC address 52:54:00:03:a4:7a in network mk-ha-086149
	I0819 18:08:38.027570  395804 host.go:66] Checking if "ha-086149-m04" exists ...
	I0819 18:08:38.027924  395804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:38.027962  395804 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:38.044145  395804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42515
	I0819 18:08:38.044542  395804 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:38.045061  395804 main.go:141] libmachine: Using API Version  1
	I0819 18:08:38.045082  395804 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:38.045392  395804 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:38.045589  395804 main.go:141] libmachine: (ha-086149-m04) Calling .DriverName
	I0819 18:08:38.045759  395804 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:08:38.045783  395804 main.go:141] libmachine: (ha-086149-m04) Calling .GetSSHHostname
	I0819 18:08:38.048449  395804 main.go:141] libmachine: (ha-086149-m04) DBG | domain ha-086149-m04 has defined MAC address 52:54:00:03:a4:7a in network mk-ha-086149
	I0819 18:08:38.048941  395804 main.go:141] libmachine: (ha-086149-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a4:7a", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:05:01 +0000 UTC Type:0 Mac:52:54:00:03:a4:7a Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-086149-m04 Clientid:01:52:54:00:03:a4:7a}
	I0819 18:08:38.048988  395804 main.go:141] libmachine: (ha-086149-m04) DBG | domain ha-086149-m04 has defined IP address 192.168.39.173 and MAC address 52:54:00:03:a4:7a in network mk-ha-086149
	I0819 18:08:38.049093  395804 main.go:141] libmachine: (ha-086149-m04) Calling .GetSSHPort
	I0819 18:08:38.049285  395804 main.go:141] libmachine: (ha-086149-m04) Calling .GetSSHKeyPath
	I0819 18:08:38.049455  395804 main.go:141] libmachine: (ha-086149-m04) Calling .GetSSHUsername
	I0819 18:08:38.049584  395804 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m04/id_rsa Username:docker}
	I0819 18:08:38.134914  395804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:08:38.148089  395804 status.go:257] ha-086149-m04 status: &{Name:ha-086149-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
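Note: the stderr above shows why the command exits with status 3. The SSH dial to ha-086149-m02 at 192.168.39.167:22 fails with "connect: no route to host", so status.go cannot run the df check on /var and records that node as Host:Error. Below is a minimal, self-contained probe of the same reachability check; it is an illustration only (the address is taken from the log, nothing else comes from the minikube sources).

	// Illustration only: probe TCP reachability of the node that fails above.
	// The host:port is copied from the log; the timeout is an assumption.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		addr := "192.168.39.167:22"
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err != nil {
			// On this run the dial fails with "connect: no route to host",
			// which is what status.go reports for ha-086149-m02.
			fmt.Printf("dial %s: %v\n", addr, err)
			return
		}
		conn.Close()
		fmt.Printf("dial %s: ok\n", addr)
	}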
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-086149 status -v=7 --alsologtostderr: exit status 3 (3.74726713s)

                                                
                                                
-- stdout --
	ha-086149
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-086149-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-086149-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-086149-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 18:08:41.007816  395920 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:08:41.008338  395920 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:08:41.008358  395920 out.go:358] Setting ErrFile to fd 2...
	I0819 18:08:41.008365  395920 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:08:41.008811  395920 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-372744/.minikube/bin
	I0819 18:08:41.009374  395920 out.go:352] Setting JSON to false
	I0819 18:08:41.009418  395920 mustload.go:65] Loading cluster: ha-086149
	I0819 18:08:41.009503  395920 notify.go:220] Checking for updates...
	I0819 18:08:41.009840  395920 config.go:182] Loaded profile config "ha-086149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:08:41.009858  395920 status.go:255] checking status of ha-086149 ...
	I0819 18:08:41.010377  395920 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:41.010421  395920 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:41.025760  395920 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38831
	I0819 18:08:41.026195  395920 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:41.026752  395920 main.go:141] libmachine: Using API Version  1
	I0819 18:08:41.026787  395920 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:41.027155  395920 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:41.027454  395920 main.go:141] libmachine: (ha-086149) Calling .GetState
	I0819 18:08:41.028937  395920 status.go:330] ha-086149 host status = "Running" (err=<nil>)
	I0819 18:08:41.028991  395920 host.go:66] Checking if "ha-086149" exists ...
	I0819 18:08:41.029335  395920 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:41.029376  395920 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:41.045091  395920 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36477
	I0819 18:08:41.045538  395920 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:41.046015  395920 main.go:141] libmachine: Using API Version  1
	I0819 18:08:41.046039  395920 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:41.046359  395920 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:41.046575  395920 main.go:141] libmachine: (ha-086149) Calling .GetIP
	I0819 18:08:41.049235  395920 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:08:41.049624  395920 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:08:41.049657  395920 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:08:41.049784  395920 host.go:66] Checking if "ha-086149" exists ...
	I0819 18:08:41.050186  395920 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:41.050251  395920 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:41.065221  395920 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34639
	I0819 18:08:41.065605  395920 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:41.066159  395920 main.go:141] libmachine: Using API Version  1
	I0819 18:08:41.066185  395920 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:41.066518  395920 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:41.066734  395920 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:08:41.066939  395920 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:08:41.066974  395920 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:08:41.070019  395920 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:08:41.070437  395920 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:08:41.070474  395920 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:08:41.070597  395920 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:08:41.070785  395920 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:08:41.070999  395920 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:08:41.071186  395920 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/id_rsa Username:docker}
	I0819 18:08:41.156009  395920 ssh_runner.go:195] Run: systemctl --version
	I0819 18:08:41.162444  395920 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:08:41.178724  395920 kubeconfig.go:125] found "ha-086149" server: "https://192.168.39.254:8443"
	I0819 18:08:41.178762  395920 api_server.go:166] Checking apiserver status ...
	I0819 18:08:41.178796  395920 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:08:41.194962  395920 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1121/cgroup
	W0819 18:08:41.206367  395920 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1121/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 18:08:41.206452  395920 ssh_runner.go:195] Run: ls
	I0819 18:08:41.212467  395920 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 18:08:41.216689  395920 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 18:08:41.216712  395920 status.go:422] ha-086149 apiserver status = Running (err=<nil>)
	I0819 18:08:41.216722  395920 status.go:257] ha-086149 status: &{Name:ha-086149 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 18:08:41.216740  395920 status.go:255] checking status of ha-086149-m02 ...
	I0819 18:08:41.217067  395920 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:41.217121  395920 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:41.233472  395920 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35791
	I0819 18:08:41.233934  395920 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:41.234459  395920 main.go:141] libmachine: Using API Version  1
	I0819 18:08:41.234485  395920 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:41.234923  395920 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:41.235191  395920 main.go:141] libmachine: (ha-086149-m02) Calling .GetState
	I0819 18:08:41.236996  395920 status.go:330] ha-086149-m02 host status = "Running" (err=<nil>)
	I0819 18:08:41.237013  395920 host.go:66] Checking if "ha-086149-m02" exists ...
	I0819 18:08:41.237339  395920 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:41.237389  395920 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:41.253337  395920 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37299
	I0819 18:08:41.253905  395920 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:41.254447  395920 main.go:141] libmachine: Using API Version  1
	I0819 18:08:41.254475  395920 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:41.254806  395920 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:41.255024  395920 main.go:141] libmachine: (ha-086149-m02) Calling .GetIP
	I0819 18:08:41.258241  395920 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:08:41.258775  395920 main.go:141] libmachine: (ha-086149-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:15 +0000 UTC Type:0 Mac:52:54:00:b9:44:0e Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-086149-m02 Clientid:01:52:54:00:b9:44:0e}
	I0819 18:08:41.258801  395920 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:08:41.258987  395920 host.go:66] Checking if "ha-086149-m02" exists ...
	I0819 18:08:41.259332  395920 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:41.259373  395920 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:41.274751  395920 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45619
	I0819 18:08:41.275298  395920 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:41.275846  395920 main.go:141] libmachine: Using API Version  1
	I0819 18:08:41.275870  395920 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:41.276189  395920 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:41.276392  395920 main.go:141] libmachine: (ha-086149-m02) Calling .DriverName
	I0819 18:08:41.276600  395920 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:08:41.276623  395920 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHHostname
	I0819 18:08:41.279879  395920 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:08:41.280347  395920 main.go:141] libmachine: (ha-086149-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:15 +0000 UTC Type:0 Mac:52:54:00:b9:44:0e Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-086149-m02 Clientid:01:52:54:00:b9:44:0e}
	I0819 18:08:41.280373  395920 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:08:41.280503  395920 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHPort
	I0819 18:08:41.280673  395920 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHKeyPath
	I0819 18:08:41.280847  395920 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHUsername
	I0819 18:08:41.281008  395920 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m02/id_rsa Username:docker}
	W0819 18:08:44.355966  395920 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.167:22: connect: no route to host
	W0819 18:08:44.356058  395920 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.167:22: connect: no route to host
	E0819 18:08:44.356073  395920 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.167:22: connect: no route to host
	I0819 18:08:44.356082  395920 status.go:257] ha-086149-m02 status: &{Name:ha-086149-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0819 18:08:44.356100  395920 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.167:22: connect: no route to host
	I0819 18:08:44.356118  395920 status.go:255] checking status of ha-086149-m03 ...
	I0819 18:08:44.356427  395920 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:44.356494  395920 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:44.371947  395920 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36725
	I0819 18:08:44.372399  395920 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:44.372932  395920 main.go:141] libmachine: Using API Version  1
	I0819 18:08:44.372953  395920 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:44.373282  395920 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:44.373487  395920 main.go:141] libmachine: (ha-086149-m03) Calling .GetState
	I0819 18:08:44.375235  395920 status.go:330] ha-086149-m03 host status = "Running" (err=<nil>)
	I0819 18:08:44.375256  395920 host.go:66] Checking if "ha-086149-m03" exists ...
	I0819 18:08:44.375558  395920 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:44.375593  395920 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:44.390531  395920 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39061
	I0819 18:08:44.391021  395920 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:44.391465  395920 main.go:141] libmachine: Using API Version  1
	I0819 18:08:44.391483  395920 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:44.391855  395920 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:44.392091  395920 main.go:141] libmachine: (ha-086149-m03) Calling .GetIP
	I0819 18:08:44.394636  395920 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:08:44.395104  395920 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:08:44.395132  395920 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:08:44.395294  395920 host.go:66] Checking if "ha-086149-m03" exists ...
	I0819 18:08:44.395711  395920 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:44.395760  395920 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:44.412109  395920 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36029
	I0819 18:08:44.412532  395920 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:44.413128  395920 main.go:141] libmachine: Using API Version  1
	I0819 18:08:44.413159  395920 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:44.413511  395920 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:44.413734  395920 main.go:141] libmachine: (ha-086149-m03) Calling .DriverName
	I0819 18:08:44.413941  395920 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:08:44.413981  395920 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHHostname
	I0819 18:08:44.416659  395920 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:08:44.417135  395920 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:08:44.417173  395920 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:08:44.417259  395920 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHPort
	I0819 18:08:44.417414  395920 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHKeyPath
	I0819 18:08:44.417607  395920 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHUsername
	I0819 18:08:44.417776  395920 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m03/id_rsa Username:docker}
	I0819 18:08:44.497284  395920 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:08:44.512869  395920 kubeconfig.go:125] found "ha-086149" server: "https://192.168.39.254:8443"
	I0819 18:08:44.512898  395920 api_server.go:166] Checking apiserver status ...
	I0819 18:08:44.512937  395920 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:08:44.527051  395920 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1440/cgroup
	W0819 18:08:44.537380  395920 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1440/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 18:08:44.537435  395920 ssh_runner.go:195] Run: ls
	I0819 18:08:44.541826  395920 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 18:08:44.546808  395920 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 18:08:44.546836  395920 status.go:422] ha-086149-m03 apiserver status = Running (err=<nil>)
	I0819 18:08:44.546847  395920 status.go:257] ha-086149-m03 status: &{Name:ha-086149-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 18:08:44.546865  395920 status.go:255] checking status of ha-086149-m04 ...
	I0819 18:08:44.547178  395920 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:44.547219  395920 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:44.562530  395920 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44399
	I0819 18:08:44.562953  395920 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:44.563428  395920 main.go:141] libmachine: Using API Version  1
	I0819 18:08:44.563450  395920 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:44.563812  395920 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:44.564052  395920 main.go:141] libmachine: (ha-086149-m04) Calling .GetState
	I0819 18:08:44.565745  395920 status.go:330] ha-086149-m04 host status = "Running" (err=<nil>)
	I0819 18:08:44.565762  395920 host.go:66] Checking if "ha-086149-m04" exists ...
	I0819 18:08:44.566095  395920 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:44.566138  395920 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:44.581173  395920 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42255
	I0819 18:08:44.581586  395920 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:44.582160  395920 main.go:141] libmachine: Using API Version  1
	I0819 18:08:44.582188  395920 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:44.582543  395920 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:44.582785  395920 main.go:141] libmachine: (ha-086149-m04) Calling .GetIP
	I0819 18:08:44.585635  395920 main.go:141] libmachine: (ha-086149-m04) DBG | domain ha-086149-m04 has defined MAC address 52:54:00:03:a4:7a in network mk-ha-086149
	I0819 18:08:44.586040  395920 main.go:141] libmachine: (ha-086149-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a4:7a", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:05:01 +0000 UTC Type:0 Mac:52:54:00:03:a4:7a Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-086149-m04 Clientid:01:52:54:00:03:a4:7a}
	I0819 18:08:44.586077  395920 main.go:141] libmachine: (ha-086149-m04) DBG | domain ha-086149-m04 has defined IP address 192.168.39.173 and MAC address 52:54:00:03:a4:7a in network mk-ha-086149
	I0819 18:08:44.586223  395920 host.go:66] Checking if "ha-086149-m04" exists ...
	I0819 18:08:44.586583  395920 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:44.586620  395920 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:44.602096  395920 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43837
	I0819 18:08:44.602502  395920 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:44.602990  395920 main.go:141] libmachine: Using API Version  1
	I0819 18:08:44.603011  395920 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:44.603332  395920 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:44.603526  395920 main.go:141] libmachine: (ha-086149-m04) Calling .DriverName
	I0819 18:08:44.603734  395920 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:08:44.603762  395920 main.go:141] libmachine: (ha-086149-m04) Calling .GetSSHHostname
	I0819 18:08:44.606552  395920 main.go:141] libmachine: (ha-086149-m04) DBG | domain ha-086149-m04 has defined MAC address 52:54:00:03:a4:7a in network mk-ha-086149
	I0819 18:08:44.607002  395920 main.go:141] libmachine: (ha-086149-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a4:7a", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:05:01 +0000 UTC Type:0 Mac:52:54:00:03:a4:7a Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-086149-m04 Clientid:01:52:54:00:03:a4:7a}
	I0819 18:08:44.607029  395920 main.go:141] libmachine: (ha-086149-m04) DBG | domain ha-086149-m04 has defined IP address 192.168.39.173 and MAC address 52:54:00:03:a4:7a in network mk-ha-086149
	I0819 18:08:44.607138  395920 main.go:141] libmachine: (ha-086149-m04) Calling .GetSSHPort
	I0819 18:08:44.607304  395920 main.go:141] libmachine: (ha-086149-m04) Calling .GetSSHKeyPath
	I0819 18:08:44.607439  395920 main.go:141] libmachine: (ha-086149-m04) Calling .GetSSHUsername
	I0819 18:08:44.607583  395920 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m04/id_rsa Username:docker}
	I0819 18:08:44.695791  395920 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:08:44.710038  395920 status.go:257] ha-086149-m04 status: &{Name:ha-086149-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
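Note: the stdout block above is the direct result of that dial failure: the unreachable control-plane node is reported with host: Error and kubelet/apiserver: Nonexistent, while the other nodes stay Running. The sketch below reconstructs that mapping with a simplified struct; the field names follow the printed status values and are an assumption, not the real minikube API.

	// Illustration only: a simplified view of how an SSH dial error is
	// reflected in the per-node status printed by the command.
	package main

	import "fmt"

	type Status struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
	}

	// statusForSSHError mirrors the mapping visible in the log: a failed dial
	// turns Host into "Error" and marks kubelet/apiserver as "Nonexistent".
	func statusForSSHError(name string, dialErr error) Status {
		s := Status{Name: name, Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
		if dialErr != nil {
			s.Host = "Error"
			s.Kubelet = "Nonexistent"
			s.APIServer = "Nonexistent"
		}
		return s
	}

	func main() {
		err := fmt.Errorf("dial tcp 192.168.39.167:22: connect: no route to host")
		fmt.Printf("%+v\n", statusForSSHError("ha-086149-m02", err))
	}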
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-086149 status -v=7 --alsologtostderr: exit status 3 (3.750530721s)

                                                
                                                
-- stdout --
	ha-086149
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-086149-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-086149-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-086149-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 18:08:47.447718  396020 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:08:47.447998  396020 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:08:47.448010  396020 out.go:358] Setting ErrFile to fd 2...
	I0819 18:08:47.448016  396020 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:08:47.448224  396020 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-372744/.minikube/bin
	I0819 18:08:47.448419  396020 out.go:352] Setting JSON to false
	I0819 18:08:47.448454  396020 mustload.go:65] Loading cluster: ha-086149
	I0819 18:08:47.448496  396020 notify.go:220] Checking for updates...
	I0819 18:08:47.448895  396020 config.go:182] Loaded profile config "ha-086149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:08:47.448918  396020 status.go:255] checking status of ha-086149 ...
	I0819 18:08:47.449304  396020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:47.449376  396020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:47.465797  396020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45861
	I0819 18:08:47.466321  396020 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:47.467022  396020 main.go:141] libmachine: Using API Version  1
	I0819 18:08:47.467050  396020 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:47.467522  396020 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:47.467852  396020 main.go:141] libmachine: (ha-086149) Calling .GetState
	I0819 18:08:47.469743  396020 status.go:330] ha-086149 host status = "Running" (err=<nil>)
	I0819 18:08:47.469766  396020 host.go:66] Checking if "ha-086149" exists ...
	I0819 18:08:47.470071  396020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:47.470119  396020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:47.485372  396020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43189
	I0819 18:08:47.485878  396020 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:47.486374  396020 main.go:141] libmachine: Using API Version  1
	I0819 18:08:47.486397  396020 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:47.486733  396020 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:47.486927  396020 main.go:141] libmachine: (ha-086149) Calling .GetIP
	I0819 18:08:47.489852  396020 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:08:47.490312  396020 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:08:47.490344  396020 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:08:47.490488  396020 host.go:66] Checking if "ha-086149" exists ...
	I0819 18:08:47.490812  396020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:47.490857  396020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:47.505831  396020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45245
	I0819 18:08:47.506242  396020 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:47.506730  396020 main.go:141] libmachine: Using API Version  1
	I0819 18:08:47.506762  396020 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:47.507139  396020 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:47.507413  396020 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:08:47.507635  396020 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:08:47.507699  396020 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:08:47.510381  396020 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:08:47.510776  396020 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:08:47.510803  396020 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:08:47.510971  396020 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:08:47.511155  396020 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:08:47.511304  396020 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:08:47.511427  396020 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/id_rsa Username:docker}
	I0819 18:08:47.591429  396020 ssh_runner.go:195] Run: systemctl --version
	I0819 18:08:47.599069  396020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:08:47.614747  396020 kubeconfig.go:125] found "ha-086149" server: "https://192.168.39.254:8443"
	I0819 18:08:47.614781  396020 api_server.go:166] Checking apiserver status ...
	I0819 18:08:47.614815  396020 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:08:47.629689  396020 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1121/cgroup
	W0819 18:08:47.640450  396020 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1121/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 18:08:47.640501  396020 ssh_runner.go:195] Run: ls
	I0819 18:08:47.645527  396020 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 18:08:47.651355  396020 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 18:08:47.651380  396020 status.go:422] ha-086149 apiserver status = Running (err=<nil>)
	I0819 18:08:47.651391  396020 status.go:257] ha-086149 status: &{Name:ha-086149 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 18:08:47.651410  396020 status.go:255] checking status of ha-086149-m02 ...
	I0819 18:08:47.651720  396020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:47.651765  396020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:47.667163  396020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32861
	I0819 18:08:47.667570  396020 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:47.668106  396020 main.go:141] libmachine: Using API Version  1
	I0819 18:08:47.668129  396020 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:47.668469  396020 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:47.668700  396020 main.go:141] libmachine: (ha-086149-m02) Calling .GetState
	I0819 18:08:47.670205  396020 status.go:330] ha-086149-m02 host status = "Running" (err=<nil>)
	I0819 18:08:47.670224  396020 host.go:66] Checking if "ha-086149-m02" exists ...
	I0819 18:08:47.670641  396020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:47.670715  396020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:47.686263  396020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39249
	I0819 18:08:47.686625  396020 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:47.687084  396020 main.go:141] libmachine: Using API Version  1
	I0819 18:08:47.687104  396020 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:47.687459  396020 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:47.687700  396020 main.go:141] libmachine: (ha-086149-m02) Calling .GetIP
	I0819 18:08:47.690585  396020 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:08:47.691071  396020 main.go:141] libmachine: (ha-086149-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:15 +0000 UTC Type:0 Mac:52:54:00:b9:44:0e Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-086149-m02 Clientid:01:52:54:00:b9:44:0e}
	I0819 18:08:47.691099  396020 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:08:47.691262  396020 host.go:66] Checking if "ha-086149-m02" exists ...
	I0819 18:08:47.691659  396020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:47.691733  396020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:47.706407  396020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35943
	I0819 18:08:47.706852  396020 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:47.707334  396020 main.go:141] libmachine: Using API Version  1
	I0819 18:08:47.707355  396020 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:47.707647  396020 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:47.707829  396020 main.go:141] libmachine: (ha-086149-m02) Calling .DriverName
	I0819 18:08:47.708031  396020 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:08:47.708052  396020 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHHostname
	I0819 18:08:47.711264  396020 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:08:47.711757  396020 main.go:141] libmachine: (ha-086149-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:15 +0000 UTC Type:0 Mac:52:54:00:b9:44:0e Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-086149-m02 Clientid:01:52:54:00:b9:44:0e}
	I0819 18:08:47.711785  396020 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:08:47.711955  396020 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHPort
	I0819 18:08:47.712145  396020 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHKeyPath
	I0819 18:08:47.712328  396020 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHUsername
	I0819 18:08:47.712483  396020 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m02/id_rsa Username:docker}
	W0819 18:08:50.787938  396020 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.167:22: connect: no route to host
	W0819 18:08:50.788035  396020 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.167:22: connect: no route to host
	E0819 18:08:50.788049  396020 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.167:22: connect: no route to host
	I0819 18:08:50.788057  396020 status.go:257] ha-086149-m02 status: &{Name:ha-086149-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0819 18:08:50.788073  396020 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.167:22: connect: no route to host
	I0819 18:08:50.788080  396020 status.go:255] checking status of ha-086149-m03 ...
	I0819 18:08:50.788408  396020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:50.788457  396020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:50.803867  396020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46111
	I0819 18:08:50.804319  396020 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:50.804780  396020 main.go:141] libmachine: Using API Version  1
	I0819 18:08:50.804810  396020 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:50.805154  396020 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:50.805347  396020 main.go:141] libmachine: (ha-086149-m03) Calling .GetState
	I0819 18:08:50.806893  396020 status.go:330] ha-086149-m03 host status = "Running" (err=<nil>)
	I0819 18:08:50.806912  396020 host.go:66] Checking if "ha-086149-m03" exists ...
	I0819 18:08:50.807212  396020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:50.807251  396020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:50.822021  396020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43985
	I0819 18:08:50.822505  396020 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:50.822990  396020 main.go:141] libmachine: Using API Version  1
	I0819 18:08:50.823010  396020 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:50.823347  396020 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:50.823558  396020 main.go:141] libmachine: (ha-086149-m03) Calling .GetIP
	I0819 18:08:50.826379  396020 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:08:50.826797  396020 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:08:50.826827  396020 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:08:50.826996  396020 host.go:66] Checking if "ha-086149-m03" exists ...
	I0819 18:08:50.827302  396020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:50.827337  396020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:50.842196  396020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40607
	I0819 18:08:50.842627  396020 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:50.843104  396020 main.go:141] libmachine: Using API Version  1
	I0819 18:08:50.843126  396020 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:50.843480  396020 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:50.843733  396020 main.go:141] libmachine: (ha-086149-m03) Calling .DriverName
	I0819 18:08:50.843977  396020 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:08:50.844002  396020 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHHostname
	I0819 18:08:50.846741  396020 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:08:50.847254  396020 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:08:50.847281  396020 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:08:50.847466  396020 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHPort
	I0819 18:08:50.847658  396020 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHKeyPath
	I0819 18:08:50.847813  396020 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHUsername
	I0819 18:08:50.847951  396020 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m03/id_rsa Username:docker}
	I0819 18:08:50.927401  396020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:08:50.944320  396020 kubeconfig.go:125] found "ha-086149" server: "https://192.168.39.254:8443"
	I0819 18:08:50.944354  396020 api_server.go:166] Checking apiserver status ...
	I0819 18:08:50.944393  396020 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:08:50.963758  396020 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1440/cgroup
	W0819 18:08:50.973756  396020 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1440/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 18:08:50.973827  396020 ssh_runner.go:195] Run: ls
	I0819 18:08:50.978809  396020 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 18:08:50.983412  396020 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 18:08:50.983439  396020 status.go:422] ha-086149-m03 apiserver status = Running (err=<nil>)
	I0819 18:08:50.983451  396020 status.go:257] ha-086149-m03 status: &{Name:ha-086149-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 18:08:50.983472  396020 status.go:255] checking status of ha-086149-m04 ...
	I0819 18:08:50.983824  396020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:50.983873  396020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:51.000594  396020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35503
	I0819 18:08:51.001033  396020 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:51.001529  396020 main.go:141] libmachine: Using API Version  1
	I0819 18:08:51.001546  396020 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:51.001905  396020 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:51.002163  396020 main.go:141] libmachine: (ha-086149-m04) Calling .GetState
	I0819 18:08:51.003820  396020 status.go:330] ha-086149-m04 host status = "Running" (err=<nil>)
	I0819 18:08:51.003836  396020 host.go:66] Checking if "ha-086149-m04" exists ...
	I0819 18:08:51.004182  396020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:51.004227  396020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:51.019224  396020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44379
	I0819 18:08:51.019749  396020 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:51.020357  396020 main.go:141] libmachine: Using API Version  1
	I0819 18:08:51.020396  396020 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:51.020777  396020 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:51.021054  396020 main.go:141] libmachine: (ha-086149-m04) Calling .GetIP
	I0819 18:08:51.023772  396020 main.go:141] libmachine: (ha-086149-m04) DBG | domain ha-086149-m04 has defined MAC address 52:54:00:03:a4:7a in network mk-ha-086149
	I0819 18:08:51.024175  396020 main.go:141] libmachine: (ha-086149-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a4:7a", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:05:01 +0000 UTC Type:0 Mac:52:54:00:03:a4:7a Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-086149-m04 Clientid:01:52:54:00:03:a4:7a}
	I0819 18:08:51.024199  396020 main.go:141] libmachine: (ha-086149-m04) DBG | domain ha-086149-m04 has defined IP address 192.168.39.173 and MAC address 52:54:00:03:a4:7a in network mk-ha-086149
	I0819 18:08:51.024326  396020 host.go:66] Checking if "ha-086149-m04" exists ...
	I0819 18:08:51.024637  396020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:51.024680  396020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:51.040432  396020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36021
	I0819 18:08:51.040919  396020 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:51.041419  396020 main.go:141] libmachine: Using API Version  1
	I0819 18:08:51.041446  396020 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:51.041791  396020 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:51.041963  396020 main.go:141] libmachine: (ha-086149-m04) Calling .DriverName
	I0819 18:08:51.042103  396020 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:08:51.042126  396020 main.go:141] libmachine: (ha-086149-m04) Calling .GetSSHHostname
	I0819 18:08:51.044889  396020 main.go:141] libmachine: (ha-086149-m04) DBG | domain ha-086149-m04 has defined MAC address 52:54:00:03:a4:7a in network mk-ha-086149
	I0819 18:08:51.045399  396020 main.go:141] libmachine: (ha-086149-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a4:7a", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:05:01 +0000 UTC Type:0 Mac:52:54:00:03:a4:7a Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-086149-m04 Clientid:01:52:54:00:03:a4:7a}
	I0819 18:08:51.045435  396020 main.go:141] libmachine: (ha-086149-m04) DBG | domain ha-086149-m04 has defined IP address 192.168.39.173 and MAC address 52:54:00:03:a4:7a in network mk-ha-086149
	I0819 18:08:51.045683  396020 main.go:141] libmachine: (ha-086149-m04) Calling .GetSSHPort
	I0819 18:08:51.045884  396020 main.go:141] libmachine: (ha-086149-m04) Calling .GetSSHKeyPath
	I0819 18:08:51.046072  396020 main.go:141] libmachine: (ha-086149-m04) Calling .GetSSHUsername
	I0819 18:08:51.046290  396020 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m04/id_rsa Username:docker}
	I0819 18:08:51.135757  396020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:08:51.150260  396020 status.go:257] ha-086149-m04 status: &{Name:ha-086149-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
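Note: ha_test.go keeps re-running the status command and keeps getting exit status 3, because ha-086149-m02 never becomes reachable within the window. A hypothetical poll loop with the same shape is sketched below; the binary path and flags are copied from the log, while the timeout and interval are assumptions and the helper is not part of ha_test.go.

	// Illustration only: poll the status command until it exits 0 or a
	// deadline passes, mirroring the repeated non-zero runs recorded above.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-086149", "status", "-v=7", "--alsologtostderr")
			if err := cmd.Run(); err == nil {
				fmt.Println("all nodes healthy")
				return
			} else {
				fmt.Printf("status not healthy yet: %v\n", err)
			}
			time.Sleep(5 * time.Second)
		}
		fmt.Println("gave up waiting for ha-086149-m02 to recover")
	}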
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-086149 status -v=7 --alsologtostderr: exit status 3 (3.73224017s)

                                                
                                                
-- stdout --
	ha-086149
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-086149-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-086149-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-086149-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 18:08:57.119096  396136 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:08:57.119376  396136 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:08:57.119389  396136 out.go:358] Setting ErrFile to fd 2...
	I0819 18:08:57.119396  396136 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:08:57.119700  396136 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-372744/.minikube/bin
	I0819 18:08:57.119891  396136 out.go:352] Setting JSON to false
	I0819 18:08:57.119922  396136 mustload.go:65] Loading cluster: ha-086149
	I0819 18:08:57.120044  396136 notify.go:220] Checking for updates...
	I0819 18:08:57.120379  396136 config.go:182] Loaded profile config "ha-086149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:08:57.120396  396136 status.go:255] checking status of ha-086149 ...
	I0819 18:08:57.120806  396136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:57.120870  396136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:57.136088  396136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40591
	I0819 18:08:57.136568  396136 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:57.137233  396136 main.go:141] libmachine: Using API Version  1
	I0819 18:08:57.137274  396136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:57.137625  396136 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:57.137817  396136 main.go:141] libmachine: (ha-086149) Calling .GetState
	I0819 18:08:57.139737  396136 status.go:330] ha-086149 host status = "Running" (err=<nil>)
	I0819 18:08:57.139755  396136 host.go:66] Checking if "ha-086149" exists ...
	I0819 18:08:57.140054  396136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:57.140095  396136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:57.155402  396136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45433
	I0819 18:08:57.155804  396136 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:57.156287  396136 main.go:141] libmachine: Using API Version  1
	I0819 18:08:57.156316  396136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:57.156672  396136 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:57.156918  396136 main.go:141] libmachine: (ha-086149) Calling .GetIP
	I0819 18:08:57.159835  396136 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:08:57.160341  396136 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:08:57.160374  396136 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:08:57.160553  396136 host.go:66] Checking if "ha-086149" exists ...
	I0819 18:08:57.160839  396136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:57.160889  396136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:57.175840  396136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40713
	I0819 18:08:57.176289  396136 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:57.176760  396136 main.go:141] libmachine: Using API Version  1
	I0819 18:08:57.176780  396136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:57.177082  396136 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:57.177321  396136 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:08:57.177498  396136 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:08:57.177537  396136 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:08:57.180532  396136 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:08:57.180968  396136 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:08:57.181002  396136 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:08:57.181067  396136 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:08:57.181237  396136 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:08:57.181380  396136 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:08:57.181552  396136 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/id_rsa Username:docker}
	I0819 18:08:57.259986  396136 ssh_runner.go:195] Run: systemctl --version
	I0819 18:08:57.266894  396136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:08:57.283088  396136 kubeconfig.go:125] found "ha-086149" server: "https://192.168.39.254:8443"
	I0819 18:08:57.283137  396136 api_server.go:166] Checking apiserver status ...
	I0819 18:08:57.283183  396136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:08:57.297823  396136 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1121/cgroup
	W0819 18:08:57.308013  396136 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1121/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 18:08:57.308089  396136 ssh_runner.go:195] Run: ls
	I0819 18:08:57.316394  396136 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 18:08:57.322068  396136 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 18:08:57.322098  396136 status.go:422] ha-086149 apiserver status = Running (err=<nil>)
	I0819 18:08:57.322112  396136 status.go:257] ha-086149 status: &{Name:ha-086149 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 18:08:57.322134  396136 status.go:255] checking status of ha-086149-m02 ...
	I0819 18:08:57.322576  396136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:57.322632  396136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:57.338616  396136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42481
	I0819 18:08:57.339052  396136 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:57.339617  396136 main.go:141] libmachine: Using API Version  1
	I0819 18:08:57.339655  396136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:57.340006  396136 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:57.340237  396136 main.go:141] libmachine: (ha-086149-m02) Calling .GetState
	I0819 18:08:57.341776  396136 status.go:330] ha-086149-m02 host status = "Running" (err=<nil>)
	I0819 18:08:57.341795  396136 host.go:66] Checking if "ha-086149-m02" exists ...
	I0819 18:08:57.342106  396136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:57.342171  396136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:57.357233  396136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44583
	I0819 18:08:57.357646  396136 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:57.358111  396136 main.go:141] libmachine: Using API Version  1
	I0819 18:08:57.358132  396136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:57.358449  396136 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:57.358699  396136 main.go:141] libmachine: (ha-086149-m02) Calling .GetIP
	I0819 18:08:57.361597  396136 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:08:57.362070  396136 main.go:141] libmachine: (ha-086149-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:15 +0000 UTC Type:0 Mac:52:54:00:b9:44:0e Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-086149-m02 Clientid:01:52:54:00:b9:44:0e}
	I0819 18:08:57.362100  396136 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:08:57.362222  396136 host.go:66] Checking if "ha-086149-m02" exists ...
	I0819 18:08:57.362619  396136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:57.362665  396136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:57.378177  396136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44153
	I0819 18:08:57.378606  396136 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:57.379141  396136 main.go:141] libmachine: Using API Version  1
	I0819 18:08:57.379167  396136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:57.379503  396136 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:57.379719  396136 main.go:141] libmachine: (ha-086149-m02) Calling .DriverName
	I0819 18:08:57.379925  396136 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:08:57.379945  396136 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHHostname
	I0819 18:08:57.382993  396136 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:08:57.383531  396136 main.go:141] libmachine: (ha-086149-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:15 +0000 UTC Type:0 Mac:52:54:00:b9:44:0e Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-086149-m02 Clientid:01:52:54:00:b9:44:0e}
	I0819 18:08:57.383564  396136 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:08:57.383748  396136 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHPort
	I0819 18:08:57.383952  396136 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHKeyPath
	I0819 18:08:57.384128  396136 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHUsername
	I0819 18:08:57.384295  396136 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m02/id_rsa Username:docker}
	W0819 18:09:00.451976  396136 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.167:22: connect: no route to host
	W0819 18:09:00.452066  396136 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.167:22: connect: no route to host
	E0819 18:09:00.452081  396136 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.167:22: connect: no route to host
	I0819 18:09:00.452088  396136 status.go:257] ha-086149-m02 status: &{Name:ha-086149-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0819 18:09:00.452109  396136 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.167:22: connect: no route to host
	I0819 18:09:00.452117  396136 status.go:255] checking status of ha-086149-m03 ...
	I0819 18:09:00.452449  396136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:09:00.452492  396136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:09:00.468420  396136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46191
	I0819 18:09:00.468864  396136 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:09:00.469354  396136 main.go:141] libmachine: Using API Version  1
	I0819 18:09:00.469378  396136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:09:00.469721  396136 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:09:00.469937  396136 main.go:141] libmachine: (ha-086149-m03) Calling .GetState
	I0819 18:09:00.471868  396136 status.go:330] ha-086149-m03 host status = "Running" (err=<nil>)
	I0819 18:09:00.471888  396136 host.go:66] Checking if "ha-086149-m03" exists ...
	I0819 18:09:00.472239  396136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:09:00.472290  396136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:09:00.488470  396136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43861
	I0819 18:09:00.488956  396136 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:09:00.489503  396136 main.go:141] libmachine: Using API Version  1
	I0819 18:09:00.489528  396136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:09:00.489900  396136 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:09:00.490115  396136 main.go:141] libmachine: (ha-086149-m03) Calling .GetIP
	I0819 18:09:00.492998  396136 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:09:00.493359  396136 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:09:00.493398  396136 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:09:00.493539  396136 host.go:66] Checking if "ha-086149-m03" exists ...
	I0819 18:09:00.493908  396136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:09:00.493946  396136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:09:00.509151  396136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35617
	I0819 18:09:00.509591  396136 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:09:00.510061  396136 main.go:141] libmachine: Using API Version  1
	I0819 18:09:00.510094  396136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:09:00.510455  396136 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:09:00.510653  396136 main.go:141] libmachine: (ha-086149-m03) Calling .DriverName
	I0819 18:09:00.510831  396136 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:09:00.510854  396136 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHHostname
	I0819 18:09:00.513490  396136 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:09:00.513944  396136 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:09:00.513966  396136 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:09:00.514117  396136 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHPort
	I0819 18:09:00.514293  396136 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHKeyPath
	I0819 18:09:00.514489  396136 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHUsername
	I0819 18:09:00.514648  396136 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m03/id_rsa Username:docker}
	I0819 18:09:00.591147  396136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:09:00.606991  396136 kubeconfig.go:125] found "ha-086149" server: "https://192.168.39.254:8443"
	I0819 18:09:00.607033  396136 api_server.go:166] Checking apiserver status ...
	I0819 18:09:00.607080  396136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:09:00.620980  396136 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1440/cgroup
	W0819 18:09:00.631287  396136 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1440/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 18:09:00.631357  396136 ssh_runner.go:195] Run: ls
	I0819 18:09:00.640408  396136 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 18:09:00.648152  396136 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 18:09:00.648180  396136 status.go:422] ha-086149-m03 apiserver status = Running (err=<nil>)
	I0819 18:09:00.648198  396136 status.go:257] ha-086149-m03 status: &{Name:ha-086149-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 18:09:00.648222  396136 status.go:255] checking status of ha-086149-m04 ...
	I0819 18:09:00.648647  396136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:09:00.648688  396136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:09:00.663857  396136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43327
	I0819 18:09:00.664276  396136 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:09:00.664761  396136 main.go:141] libmachine: Using API Version  1
	I0819 18:09:00.664786  396136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:09:00.665088  396136 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:09:00.665337  396136 main.go:141] libmachine: (ha-086149-m04) Calling .GetState
	I0819 18:09:00.666848  396136 status.go:330] ha-086149-m04 host status = "Running" (err=<nil>)
	I0819 18:09:00.666864  396136 host.go:66] Checking if "ha-086149-m04" exists ...
	I0819 18:09:00.667158  396136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:09:00.667199  396136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:09:00.682406  396136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42767
	I0819 18:09:00.682832  396136 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:09:00.683320  396136 main.go:141] libmachine: Using API Version  1
	I0819 18:09:00.683344  396136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:09:00.683660  396136 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:09:00.683895  396136 main.go:141] libmachine: (ha-086149-m04) Calling .GetIP
	I0819 18:09:00.686610  396136 main.go:141] libmachine: (ha-086149-m04) DBG | domain ha-086149-m04 has defined MAC address 52:54:00:03:a4:7a in network mk-ha-086149
	I0819 18:09:00.687012  396136 main.go:141] libmachine: (ha-086149-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a4:7a", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:05:01 +0000 UTC Type:0 Mac:52:54:00:03:a4:7a Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-086149-m04 Clientid:01:52:54:00:03:a4:7a}
	I0819 18:09:00.687043  396136 main.go:141] libmachine: (ha-086149-m04) DBG | domain ha-086149-m04 has defined IP address 192.168.39.173 and MAC address 52:54:00:03:a4:7a in network mk-ha-086149
	I0819 18:09:00.687123  396136 host.go:66] Checking if "ha-086149-m04" exists ...
	I0819 18:09:00.687447  396136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:09:00.687482  396136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:09:00.702502  396136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38709
	I0819 18:09:00.702990  396136 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:09:00.703508  396136 main.go:141] libmachine: Using API Version  1
	I0819 18:09:00.703532  396136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:09:00.703888  396136 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:09:00.704040  396136 main.go:141] libmachine: (ha-086149-m04) Calling .DriverName
	I0819 18:09:00.704244  396136 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:09:00.704265  396136 main.go:141] libmachine: (ha-086149-m04) Calling .GetSSHHostname
	I0819 18:09:00.707165  396136 main.go:141] libmachine: (ha-086149-m04) DBG | domain ha-086149-m04 has defined MAC address 52:54:00:03:a4:7a in network mk-ha-086149
	I0819 18:09:00.707547  396136 main.go:141] libmachine: (ha-086149-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a4:7a", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:05:01 +0000 UTC Type:0 Mac:52:54:00:03:a4:7a Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-086149-m04 Clientid:01:52:54:00:03:a4:7a}
	I0819 18:09:00.707577  396136 main.go:141] libmachine: (ha-086149-m04) DBG | domain ha-086149-m04 has defined IP address 192.168.39.173 and MAC address 52:54:00:03:a4:7a in network mk-ha-086149
	I0819 18:09:00.707740  396136 main.go:141] libmachine: (ha-086149-m04) Calling .GetSSHPort
	I0819 18:09:00.707875  396136 main.go:141] libmachine: (ha-086149-m04) Calling .GetSSHKeyPath
	I0819 18:09:00.707990  396136 main.go:141] libmachine: (ha-086149-m04) Calling .GetSSHUsername
	I0819 18:09:00.708123  396136 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m04/id_rsa Username:docker}
	I0819 18:09:00.791558  396136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:09:00.805322  396136 status.go:257] ha-086149-m04 status: &{Name:ha-086149-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
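The stderr above shows the status probe for ha-086149-m02 failing: the SSH dial to 192.168.39.167:22 returns "connect: no route to host", so the node is reported as Host:Error before the later runs settle on Host:Stopped. A minimal, hypothetical Go sketch (not part of the test suite; the address is copied from the DHCP lease in the log above) that reproduces just that connectivity probe:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// ha-086149-m02 SSH endpoint, taken from the DHCP lease logged above.
	addr := "192.168.39.167:22"
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		// A "no route to host" error here matches the sshutil dial failure in the stderr above.
		fmt.Printf("dial %s failed: %v\n", addr, err)
		return
	}
	defer conn.Close()
	fmt.Printf("dial %s succeeded\n", addr)
}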
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-086149 status -v=7 --alsologtostderr: exit status 7 (657.071172ms)

                                                
                                                
-- stdout --
	ha-086149
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-086149-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-086149-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-086149-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 18:09:05.234080  396274 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:09:05.234356  396274 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:09:05.234366  396274 out.go:358] Setting ErrFile to fd 2...
	I0819 18:09:05.234371  396274 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:09:05.234577  396274 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-372744/.minikube/bin
	I0819 18:09:05.234798  396274 out.go:352] Setting JSON to false
	I0819 18:09:05.234833  396274 mustload.go:65] Loading cluster: ha-086149
	I0819 18:09:05.234969  396274 notify.go:220] Checking for updates...
	I0819 18:09:05.235425  396274 config.go:182] Loaded profile config "ha-086149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:09:05.235448  396274 status.go:255] checking status of ha-086149 ...
	I0819 18:09:05.236063  396274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:09:05.236123  396274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:09:05.252161  396274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44521
	I0819 18:09:05.252608  396274 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:09:05.253235  396274 main.go:141] libmachine: Using API Version  1
	I0819 18:09:05.253274  396274 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:09:05.253625  396274 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:09:05.253832  396274 main.go:141] libmachine: (ha-086149) Calling .GetState
	I0819 18:09:05.255745  396274 status.go:330] ha-086149 host status = "Running" (err=<nil>)
	I0819 18:09:05.255767  396274 host.go:66] Checking if "ha-086149" exists ...
	I0819 18:09:05.256195  396274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:09:05.256250  396274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:09:05.271741  396274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33217
	I0819 18:09:05.272273  396274 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:09:05.272784  396274 main.go:141] libmachine: Using API Version  1
	I0819 18:09:05.272808  396274 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:09:05.273158  396274 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:09:05.273417  396274 main.go:141] libmachine: (ha-086149) Calling .GetIP
	I0819 18:09:05.276143  396274 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:09:05.276548  396274 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:09:05.276569  396274 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:09:05.276728  396274 host.go:66] Checking if "ha-086149" exists ...
	I0819 18:09:05.277022  396274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:09:05.277059  396274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:09:05.292495  396274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39259
	I0819 18:09:05.293019  396274 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:09:05.293515  396274 main.go:141] libmachine: Using API Version  1
	I0819 18:09:05.293538  396274 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:09:05.293880  396274 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:09:05.294161  396274 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:09:05.294352  396274 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:09:05.294392  396274 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:09:05.297270  396274 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:09:05.297742  396274 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:09:05.297766  396274 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:09:05.298169  396274 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:09:05.298381  396274 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:09:05.298555  396274 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:09:05.298738  396274 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/id_rsa Username:docker}
	I0819 18:09:05.384349  396274 ssh_runner.go:195] Run: systemctl --version
	I0819 18:09:05.393057  396274 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:09:05.409867  396274 kubeconfig.go:125] found "ha-086149" server: "https://192.168.39.254:8443"
	I0819 18:09:05.409942  396274 api_server.go:166] Checking apiserver status ...
	I0819 18:09:05.409991  396274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:09:05.431031  396274 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1121/cgroup
	W0819 18:09:05.444682  396274 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1121/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 18:09:05.444766  396274 ssh_runner.go:195] Run: ls
	I0819 18:09:05.451274  396274 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 18:09:05.457124  396274 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 18:09:05.457152  396274 status.go:422] ha-086149 apiserver status = Running (err=<nil>)
	I0819 18:09:05.457165  396274 status.go:257] ha-086149 status: &{Name:ha-086149 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 18:09:05.457188  396274 status.go:255] checking status of ha-086149-m02 ...
	I0819 18:09:05.457502  396274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:09:05.457547  396274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:09:05.473649  396274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43099
	I0819 18:09:05.474157  396274 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:09:05.474685  396274 main.go:141] libmachine: Using API Version  1
	I0819 18:09:05.474707  396274 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:09:05.475018  396274 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:09:05.475213  396274 main.go:141] libmachine: (ha-086149-m02) Calling .GetState
	I0819 18:09:05.477043  396274 status.go:330] ha-086149-m02 host status = "Stopped" (err=<nil>)
	I0819 18:09:05.477061  396274 status.go:343] host is not running, skipping remaining checks
	I0819 18:09:05.477070  396274 status.go:257] ha-086149-m02 status: &{Name:ha-086149-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 18:09:05.477092  396274 status.go:255] checking status of ha-086149-m03 ...
	I0819 18:09:05.477547  396274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:09:05.477595  396274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:09:05.493084  396274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35759
	I0819 18:09:05.493534  396274 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:09:05.494021  396274 main.go:141] libmachine: Using API Version  1
	I0819 18:09:05.494048  396274 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:09:05.494434  396274 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:09:05.494666  396274 main.go:141] libmachine: (ha-086149-m03) Calling .GetState
	I0819 18:09:05.496391  396274 status.go:330] ha-086149-m03 host status = "Running" (err=<nil>)
	I0819 18:09:05.496409  396274 host.go:66] Checking if "ha-086149-m03" exists ...
	I0819 18:09:05.496707  396274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:09:05.496769  396274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:09:05.511967  396274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44063
	I0819 18:09:05.512433  396274 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:09:05.512917  396274 main.go:141] libmachine: Using API Version  1
	I0819 18:09:05.512940  396274 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:09:05.513310  396274 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:09:05.513500  396274 main.go:141] libmachine: (ha-086149-m03) Calling .GetIP
	I0819 18:09:05.516371  396274 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:09:05.516754  396274 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:09:05.516778  396274 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:09:05.516979  396274 host.go:66] Checking if "ha-086149-m03" exists ...
	I0819 18:09:05.517416  396274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:09:05.517469  396274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:09:05.534352  396274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38291
	I0819 18:09:05.534819  396274 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:09:05.535317  396274 main.go:141] libmachine: Using API Version  1
	I0819 18:09:05.535348  396274 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:09:05.535730  396274 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:09:05.535914  396274 main.go:141] libmachine: (ha-086149-m03) Calling .DriverName
	I0819 18:09:05.536131  396274 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:09:05.536174  396274 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHHostname
	I0819 18:09:05.539020  396274 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:09:05.539484  396274 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:09:05.539513  396274 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:09:05.539683  396274 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHPort
	I0819 18:09:05.539877  396274 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHKeyPath
	I0819 18:09:05.540078  396274 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHUsername
	I0819 18:09:05.540235  396274 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m03/id_rsa Username:docker}
	I0819 18:09:05.624753  396274 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:09:05.650647  396274 kubeconfig.go:125] found "ha-086149" server: "https://192.168.39.254:8443"
	I0819 18:09:05.650681  396274 api_server.go:166] Checking apiserver status ...
	I0819 18:09:05.650731  396274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:09:05.666681  396274 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1440/cgroup
	W0819 18:09:05.677047  396274 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1440/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 18:09:05.677113  396274 ssh_runner.go:195] Run: ls
	I0819 18:09:05.682075  396274 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 18:09:05.687244  396274 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 18:09:05.687275  396274 status.go:422] ha-086149-m03 apiserver status = Running (err=<nil>)
	I0819 18:09:05.687288  396274 status.go:257] ha-086149-m03 status: &{Name:ha-086149-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 18:09:05.687307  396274 status.go:255] checking status of ha-086149-m04 ...
	I0819 18:09:05.687659  396274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:09:05.687728  396274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:09:05.703863  396274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44951
	I0819 18:09:05.704291  396274 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:09:05.704760  396274 main.go:141] libmachine: Using API Version  1
	I0819 18:09:05.704783  396274 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:09:05.705094  396274 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:09:05.705301  396274 main.go:141] libmachine: (ha-086149-m04) Calling .GetState
	I0819 18:09:05.706953  396274 status.go:330] ha-086149-m04 host status = "Running" (err=<nil>)
	I0819 18:09:05.706972  396274 host.go:66] Checking if "ha-086149-m04" exists ...
	I0819 18:09:05.707332  396274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:09:05.707372  396274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:09:05.723045  396274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38679
	I0819 18:09:05.723554  396274 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:09:05.724034  396274 main.go:141] libmachine: Using API Version  1
	I0819 18:09:05.724070  396274 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:09:05.724424  396274 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:09:05.724656  396274 main.go:141] libmachine: (ha-086149-m04) Calling .GetIP
	I0819 18:09:05.727460  396274 main.go:141] libmachine: (ha-086149-m04) DBG | domain ha-086149-m04 has defined MAC address 52:54:00:03:a4:7a in network mk-ha-086149
	I0819 18:09:05.727905  396274 main.go:141] libmachine: (ha-086149-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a4:7a", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:05:01 +0000 UTC Type:0 Mac:52:54:00:03:a4:7a Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-086149-m04 Clientid:01:52:54:00:03:a4:7a}
	I0819 18:09:05.727933  396274 main.go:141] libmachine: (ha-086149-m04) DBG | domain ha-086149-m04 has defined IP address 192.168.39.173 and MAC address 52:54:00:03:a4:7a in network mk-ha-086149
	I0819 18:09:05.728085  396274 host.go:66] Checking if "ha-086149-m04" exists ...
	I0819 18:09:05.728415  396274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:09:05.728461  396274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:09:05.743495  396274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32817
	I0819 18:09:05.743902  396274 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:09:05.744391  396274 main.go:141] libmachine: Using API Version  1
	I0819 18:09:05.744419  396274 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:09:05.744743  396274 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:09:05.744995  396274 main.go:141] libmachine: (ha-086149-m04) Calling .DriverName
	I0819 18:09:05.745171  396274 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:09:05.745193  396274 main.go:141] libmachine: (ha-086149-m04) Calling .GetSSHHostname
	I0819 18:09:05.748136  396274 main.go:141] libmachine: (ha-086149-m04) DBG | domain ha-086149-m04 has defined MAC address 52:54:00:03:a4:7a in network mk-ha-086149
	I0819 18:09:05.748655  396274 main.go:141] libmachine: (ha-086149-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a4:7a", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:05:01 +0000 UTC Type:0 Mac:52:54:00:03:a4:7a Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-086149-m04 Clientid:01:52:54:00:03:a4:7a}
	I0819 18:09:05.748686  396274 main.go:141] libmachine: (ha-086149-m04) DBG | domain ha-086149-m04 has defined IP address 192.168.39.173 and MAC address 52:54:00:03:a4:7a in network mk-ha-086149
	I0819 18:09:05.748840  396274 main.go:141] libmachine: (ha-086149-m04) Calling .GetSSHPort
	I0819 18:09:05.749004  396274 main.go:141] libmachine: (ha-086149-m04) Calling .GetSSHKeyPath
	I0819 18:09:05.749159  396274 main.go:141] libmachine: (ha-086149-m04) Calling .GetSSHUsername
	I0819 18:09:05.749298  396274 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m04/id_rsa Username:docker}
	I0819 18:09:05.831042  396274 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:09:05.845122  396274 status.go:257] ha-086149-m04 status: &{Name:ha-086149-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-086149 status -v=7 --alsologtostderr: exit status 7 (624.414392ms)

                                                
                                                
-- stdout --
	ha-086149
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-086149-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-086149-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-086149-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 18:09:21.513437  396395 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:09:21.513699  396395 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:09:21.513708  396395 out.go:358] Setting ErrFile to fd 2...
	I0819 18:09:21.513712  396395 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:09:21.513865  396395 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-372744/.minikube/bin
	I0819 18:09:21.514020  396395 out.go:352] Setting JSON to false
	I0819 18:09:21.514049  396395 mustload.go:65] Loading cluster: ha-086149
	I0819 18:09:21.514172  396395 notify.go:220] Checking for updates...
	I0819 18:09:21.514465  396395 config.go:182] Loaded profile config "ha-086149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:09:21.514485  396395 status.go:255] checking status of ha-086149 ...
	I0819 18:09:21.514825  396395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:09:21.514900  396395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:09:21.530956  396395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34303
	I0819 18:09:21.531431  396395 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:09:21.532126  396395 main.go:141] libmachine: Using API Version  1
	I0819 18:09:21.532162  396395 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:09:21.532760  396395 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:09:21.533080  396395 main.go:141] libmachine: (ha-086149) Calling .GetState
	I0819 18:09:21.534910  396395 status.go:330] ha-086149 host status = "Running" (err=<nil>)
	I0819 18:09:21.534926  396395 host.go:66] Checking if "ha-086149" exists ...
	I0819 18:09:21.535276  396395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:09:21.535322  396395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:09:21.553737  396395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34373
	I0819 18:09:21.554155  396395 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:09:21.554603  396395 main.go:141] libmachine: Using API Version  1
	I0819 18:09:21.554632  396395 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:09:21.554963  396395 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:09:21.555173  396395 main.go:141] libmachine: (ha-086149) Calling .GetIP
	I0819 18:09:21.558058  396395 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:09:21.558530  396395 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:09:21.558552  396395 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:09:21.558714  396395 host.go:66] Checking if "ha-086149" exists ...
	I0819 18:09:21.559047  396395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:09:21.559093  396395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:09:21.574390  396395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37731
	I0819 18:09:21.574792  396395 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:09:21.575268  396395 main.go:141] libmachine: Using API Version  1
	I0819 18:09:21.575287  396395 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:09:21.575641  396395 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:09:21.575911  396395 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:09:21.576120  396395 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:09:21.576151  396395 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:09:21.578371  396395 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:09:21.578751  396395 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:09:21.578775  396395 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:09:21.578894  396395 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:09:21.579098  396395 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:09:21.579243  396395 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:09:21.579374  396395 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/id_rsa Username:docker}
	I0819 18:09:21.660046  396395 ssh_runner.go:195] Run: systemctl --version
	I0819 18:09:21.666722  396395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:09:21.684174  396395 kubeconfig.go:125] found "ha-086149" server: "https://192.168.39.254:8443"
	I0819 18:09:21.684213  396395 api_server.go:166] Checking apiserver status ...
	I0819 18:09:21.684253  396395 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:09:21.699410  396395 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1121/cgroup
	W0819 18:09:21.710677  396395 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1121/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 18:09:21.710750  396395 ssh_runner.go:195] Run: ls
	I0819 18:09:21.715464  396395 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 18:09:21.720722  396395 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 18:09:21.720746  396395 status.go:422] ha-086149 apiserver status = Running (err=<nil>)
	I0819 18:09:21.720757  396395 status.go:257] ha-086149 status: &{Name:ha-086149 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 18:09:21.720779  396395 status.go:255] checking status of ha-086149-m02 ...
	I0819 18:09:21.721103  396395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:09:21.721142  396395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:09:21.737397  396395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43933
	I0819 18:09:21.737921  396395 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:09:21.738469  396395 main.go:141] libmachine: Using API Version  1
	I0819 18:09:21.738498  396395 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:09:21.738818  396395 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:09:21.739038  396395 main.go:141] libmachine: (ha-086149-m02) Calling .GetState
	I0819 18:09:21.740647  396395 status.go:330] ha-086149-m02 host status = "Stopped" (err=<nil>)
	I0819 18:09:21.740664  396395 status.go:343] host is not running, skipping remaining checks
	I0819 18:09:21.740673  396395 status.go:257] ha-086149-m02 status: &{Name:ha-086149-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 18:09:21.740697  396395 status.go:255] checking status of ha-086149-m03 ...
	I0819 18:09:21.741070  396395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:09:21.741124  396395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:09:21.756502  396395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44083
	I0819 18:09:21.756921  396395 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:09:21.757424  396395 main.go:141] libmachine: Using API Version  1
	I0819 18:09:21.757444  396395 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:09:21.757795  396395 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:09:21.757983  396395 main.go:141] libmachine: (ha-086149-m03) Calling .GetState
	I0819 18:09:21.759331  396395 status.go:330] ha-086149-m03 host status = "Running" (err=<nil>)
	I0819 18:09:21.759348  396395 host.go:66] Checking if "ha-086149-m03" exists ...
	I0819 18:09:21.759638  396395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:09:21.759714  396395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:09:21.775006  396395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36773
	I0819 18:09:21.775466  396395 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:09:21.775878  396395 main.go:141] libmachine: Using API Version  1
	I0819 18:09:21.775900  396395 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:09:21.776235  396395 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:09:21.776435  396395 main.go:141] libmachine: (ha-086149-m03) Calling .GetIP
	I0819 18:09:21.778868  396395 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:09:21.779275  396395 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:09:21.779296  396395 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:09:21.779394  396395 host.go:66] Checking if "ha-086149-m03" exists ...
	I0819 18:09:21.779827  396395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:09:21.779877  396395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:09:21.795605  396395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43645
	I0819 18:09:21.796079  396395 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:09:21.796564  396395 main.go:141] libmachine: Using API Version  1
	I0819 18:09:21.796589  396395 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:09:21.796896  396395 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:09:21.797083  396395 main.go:141] libmachine: (ha-086149-m03) Calling .DriverName
	I0819 18:09:21.797329  396395 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:09:21.797353  396395 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHHostname
	I0819 18:09:21.799543  396395 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:09:21.799973  396395 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:09:21.800002  396395 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:09:21.800111  396395 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHPort
	I0819 18:09:21.800314  396395 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHKeyPath
	I0819 18:09:21.800477  396395 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHUsername
	I0819 18:09:21.800630  396395 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m03/id_rsa Username:docker}
	I0819 18:09:21.879745  396395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:09:21.895214  396395 kubeconfig.go:125] found "ha-086149" server: "https://192.168.39.254:8443"
	I0819 18:09:21.895247  396395 api_server.go:166] Checking apiserver status ...
	I0819 18:09:21.895293  396395 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:09:21.909334  396395 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1440/cgroup
	W0819 18:09:21.920991  396395 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1440/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 18:09:21.921062  396395 ssh_runner.go:195] Run: ls
	I0819 18:09:21.925441  396395 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 18:09:21.929748  396395 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 18:09:21.929773  396395 status.go:422] ha-086149-m03 apiserver status = Running (err=<nil>)
	I0819 18:09:21.929784  396395 status.go:257] ha-086149-m03 status: &{Name:ha-086149-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 18:09:21.929806  396395 status.go:255] checking status of ha-086149-m04 ...
	I0819 18:09:21.930128  396395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:09:21.930183  396395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:09:21.945699  396395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37487
	I0819 18:09:21.946337  396395 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:09:21.946867  396395 main.go:141] libmachine: Using API Version  1
	I0819 18:09:21.946892  396395 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:09:21.947242  396395 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:09:21.947468  396395 main.go:141] libmachine: (ha-086149-m04) Calling .GetState
	I0819 18:09:21.949284  396395 status.go:330] ha-086149-m04 host status = "Running" (err=<nil>)
	I0819 18:09:21.949303  396395 host.go:66] Checking if "ha-086149-m04" exists ...
	I0819 18:09:21.949698  396395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:09:21.949748  396395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:09:21.964976  396395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36521
	I0819 18:09:21.965459  396395 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:09:21.965951  396395 main.go:141] libmachine: Using API Version  1
	I0819 18:09:21.965972  396395 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:09:21.966290  396395 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:09:21.966482  396395 main.go:141] libmachine: (ha-086149-m04) Calling .GetIP
	I0819 18:09:21.969230  396395 main.go:141] libmachine: (ha-086149-m04) DBG | domain ha-086149-m04 has defined MAC address 52:54:00:03:a4:7a in network mk-ha-086149
	I0819 18:09:21.969670  396395 main.go:141] libmachine: (ha-086149-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a4:7a", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:05:01 +0000 UTC Type:0 Mac:52:54:00:03:a4:7a Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-086149-m04 Clientid:01:52:54:00:03:a4:7a}
	I0819 18:09:21.969704  396395 main.go:141] libmachine: (ha-086149-m04) DBG | domain ha-086149-m04 has defined IP address 192.168.39.173 and MAC address 52:54:00:03:a4:7a in network mk-ha-086149
	I0819 18:09:21.969788  396395 host.go:66] Checking if "ha-086149-m04" exists ...
	I0819 18:09:21.970157  396395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:09:21.970197  396395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:09:21.985895  396395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41457
	I0819 18:09:21.986359  396395 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:09:21.986824  396395 main.go:141] libmachine: Using API Version  1
	I0819 18:09:21.986848  396395 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:09:21.987207  396395 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:09:21.987396  396395 main.go:141] libmachine: (ha-086149-m04) Calling .DriverName
	I0819 18:09:21.987584  396395 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:09:21.987602  396395 main.go:141] libmachine: (ha-086149-m04) Calling .GetSSHHostname
	I0819 18:09:21.990095  396395 main.go:141] libmachine: (ha-086149-m04) DBG | domain ha-086149-m04 has defined MAC address 52:54:00:03:a4:7a in network mk-ha-086149
	I0819 18:09:21.990508  396395 main.go:141] libmachine: (ha-086149-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a4:7a", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:05:01 +0000 UTC Type:0 Mac:52:54:00:03:a4:7a Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-086149-m04 Clientid:01:52:54:00:03:a4:7a}
	I0819 18:09:21.990534  396395 main.go:141] libmachine: (ha-086149-m04) DBG | domain ha-086149-m04 has defined IP address 192.168.39.173 and MAC address 52:54:00:03:a4:7a in network mk-ha-086149
	I0819 18:09:21.990674  396395 main.go:141] libmachine: (ha-086149-m04) Calling .GetSSHPort
	I0819 18:09:21.990844  396395 main.go:141] libmachine: (ha-086149-m04) Calling .GetSSHKeyPath
	I0819 18:09:21.991013  396395 main.go:141] libmachine: (ha-086149-m04) Calling .GetSSHUsername
	I0819 18:09:21.991127  396395 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m04/id_rsa Username:docker}
	I0819 18:09:22.075654  396395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:09:22.090249  396395 status.go:257] ha-086149-m04 status: &{Name:ha-086149-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-086149 status -v=7 --alsologtostderr" : exit status 7
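The assertion at ha_test.go:432 fails because the status command exits with code 7 while ha-086149-m02 still reports Stopped. An illustrative Go sketch (assumptions: the binary path and profile name are copied from the failing command above; this is not the actual helper used by ha_test.go) of how a caller can surface that exit code:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-086149", "status", "-v=7", "--alsologtostderr")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// In the runs above this prints 7, matching the Stopped ha-086149-m02 entry in the stdout.
		fmt.Printf("minikube status exited with code %d\n", exitErr.ExitCode())
	}
}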
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-086149 -n ha-086149
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-086149 logs -n 25: (1.399507633s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-086149 ssh -n                                                                 | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-086149 cp ha-086149-m03:/home/docker/cp-test.txt                              | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149:/home/docker/cp-test_ha-086149-m03_ha-086149.txt                       |           |         |         |                     |                     |
	| ssh     | ha-086149 ssh -n                                                                 | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-086149 ssh -n ha-086149 sudo cat                                              | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | /home/docker/cp-test_ha-086149-m03_ha-086149.txt                                 |           |         |         |                     |                     |
	| cp      | ha-086149 cp ha-086149-m03:/home/docker/cp-test.txt                              | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149-m02:/home/docker/cp-test_ha-086149-m03_ha-086149-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-086149 ssh -n                                                                 | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-086149 ssh -n ha-086149-m02 sudo cat                                          | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | /home/docker/cp-test_ha-086149-m03_ha-086149-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-086149 cp ha-086149-m03:/home/docker/cp-test.txt                              | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149-m04:/home/docker/cp-test_ha-086149-m03_ha-086149-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-086149 ssh -n                                                                 | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-086149 ssh -n ha-086149-m04 sudo cat                                          | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | /home/docker/cp-test_ha-086149-m03_ha-086149-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-086149 cp testdata/cp-test.txt                                                | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-086149 ssh -n                                                                 | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-086149 cp ha-086149-m04:/home/docker/cp-test.txt                              | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3465103634/001/cp-test_ha-086149-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-086149 ssh -n                                                                 | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-086149 cp ha-086149-m04:/home/docker/cp-test.txt                              | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149:/home/docker/cp-test_ha-086149-m04_ha-086149.txt                       |           |         |         |                     |                     |
	| ssh     | ha-086149 ssh -n                                                                 | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-086149 ssh -n ha-086149 sudo cat                                              | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | /home/docker/cp-test_ha-086149-m04_ha-086149.txt                                 |           |         |         |                     |                     |
	| cp      | ha-086149 cp ha-086149-m04:/home/docker/cp-test.txt                              | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149-m02:/home/docker/cp-test_ha-086149-m04_ha-086149-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-086149 ssh -n                                                                 | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-086149 ssh -n ha-086149-m02 sudo cat                                          | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | /home/docker/cp-test_ha-086149-m04_ha-086149-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-086149 cp ha-086149-m04:/home/docker/cp-test.txt                              | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149-m03:/home/docker/cp-test_ha-086149-m04_ha-086149-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-086149 ssh -n                                                                 | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-086149 ssh -n ha-086149-m03 sudo cat                                          | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | /home/docker/cp-test_ha-086149-m04_ha-086149-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-086149 node stop m02 -v=7                                                     | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-086149 node start m02 -v=7                                                    | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:08 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 18:01:14
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 18:01:14.240865  390826 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:01:14.241152  390826 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:01:14.241163  390826 out.go:358] Setting ErrFile to fd 2...
	I0819 18:01:14.241167  390826 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:01:14.241405  390826 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-372744/.minikube/bin
	I0819 18:01:14.242090  390826 out.go:352] Setting JSON to false
	I0819 18:01:14.243024  390826 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":6217,"bootTime":1724084257,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 18:01:14.243086  390826 start.go:139] virtualization: kvm guest
	I0819 18:01:14.246082  390826 out.go:177] * [ha-086149] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 18:01:14.247574  390826 notify.go:220] Checking for updates...
	I0819 18:01:14.247589  390826 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 18:01:14.249064  390826 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 18:01:14.250572  390826 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 18:01:14.252143  390826 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 18:01:14.253509  390826 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 18:01:14.255056  390826 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 18:01:14.256458  390826 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 18:01:14.290623  390826 out.go:177] * Using the kvm2 driver based on user configuration
	I0819 18:01:14.291905  390826 start.go:297] selected driver: kvm2
	I0819 18:01:14.291928  390826 start.go:901] validating driver "kvm2" against <nil>
	I0819 18:01:14.291942  390826 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 18:01:14.292641  390826 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:01:14.292766  390826 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19468-372744/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 18:01:14.307537  390826 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 18:01:14.307598  390826 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 18:01:14.307841  390826 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 18:01:14.307881  390826 cni.go:84] Creating CNI manager for ""
	I0819 18:01:14.307901  390826 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0819 18:01:14.307911  390826 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 18:01:14.307977  390826 start.go:340] cluster config:
	{Name:ha-086149 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-086149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:01:14.308105  390826 iso.go:125] acquiring lock: {Name:mk4c0ac1c3202b1a296739df622960e7a0bd8566 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:01:14.309823  390826 out.go:177] * Starting "ha-086149" primary control-plane node in "ha-086149" cluster
	I0819 18:01:14.311065  390826 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 18:01:14.311098  390826 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 18:01:14.311107  390826 cache.go:56] Caching tarball of preloaded images
	I0819 18:01:14.311185  390826 preload.go:172] Found /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 18:01:14.311199  390826 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 18:01:14.311518  390826 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/config.json ...
	I0819 18:01:14.311542  390826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/config.json: {Name:mkc1be96187f5b28ff94ccb29ea872196c5d05af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:01:14.311728  390826 start.go:360] acquireMachinesLock for ha-086149: {Name:mk24ba67a747357e9ce40f1e460d2bb0bc59cc75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 18:01:14.311769  390826 start.go:364] duration metric: took 23.965µs to acquireMachinesLock for "ha-086149"
	I0819 18:01:14.311794  390826 start.go:93] Provisioning new machine with config: &{Name:ha-086149 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-086149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 18:01:14.311863  390826 start.go:125] createHost starting for "" (driver="kvm2")
	I0819 18:01:14.313644  390826 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 18:01:14.313782  390826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:01:14.313827  390826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:01:14.327944  390826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35857
	I0819 18:01:14.328381  390826 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:01:14.328914  390826 main.go:141] libmachine: Using API Version  1
	I0819 18:01:14.328936  390826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:01:14.329300  390826 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:01:14.329486  390826 main.go:141] libmachine: (ha-086149) Calling .GetMachineName
	I0819 18:01:14.329632  390826 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:01:14.329800  390826 start.go:159] libmachine.API.Create for "ha-086149" (driver="kvm2")
	I0819 18:01:14.329827  390826 client.go:168] LocalClient.Create starting
	I0819 18:01:14.329868  390826 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem
	I0819 18:01:14.329911  390826 main.go:141] libmachine: Decoding PEM data...
	I0819 18:01:14.329933  390826 main.go:141] libmachine: Parsing certificate...
	I0819 18:01:14.330035  390826 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem
	I0819 18:01:14.330064  390826 main.go:141] libmachine: Decoding PEM data...
	I0819 18:01:14.330084  390826 main.go:141] libmachine: Parsing certificate...
	I0819 18:01:14.330107  390826 main.go:141] libmachine: Running pre-create checks...
	I0819 18:01:14.330123  390826 main.go:141] libmachine: (ha-086149) Calling .PreCreateCheck
	I0819 18:01:14.330444  390826 main.go:141] libmachine: (ha-086149) Calling .GetConfigRaw
	I0819 18:01:14.330802  390826 main.go:141] libmachine: Creating machine...
	I0819 18:01:14.330816  390826 main.go:141] libmachine: (ha-086149) Calling .Create
	I0819 18:01:14.330922  390826 main.go:141] libmachine: (ha-086149) Creating KVM machine...
	I0819 18:01:14.332004  390826 main.go:141] libmachine: (ha-086149) DBG | found existing default KVM network
	I0819 18:01:14.332705  390826 main.go:141] libmachine: (ha-086149) DBG | I0819 18:01:14.332572  390849 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00011d1f0}
	I0819 18:01:14.332725  390826 main.go:141] libmachine: (ha-086149) DBG | created network xml: 
	I0819 18:01:14.332736  390826 main.go:141] libmachine: (ha-086149) DBG | <network>
	I0819 18:01:14.332743  390826 main.go:141] libmachine: (ha-086149) DBG |   <name>mk-ha-086149</name>
	I0819 18:01:14.332749  390826 main.go:141] libmachine: (ha-086149) DBG |   <dns enable='no'/>
	I0819 18:01:14.332759  390826 main.go:141] libmachine: (ha-086149) DBG |   
	I0819 18:01:14.332767  390826 main.go:141] libmachine: (ha-086149) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0819 18:01:14.332781  390826 main.go:141] libmachine: (ha-086149) DBG |     <dhcp>
	I0819 18:01:14.332796  390826 main.go:141] libmachine: (ha-086149) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0819 18:01:14.332809  390826 main.go:141] libmachine: (ha-086149) DBG |     </dhcp>
	I0819 18:01:14.332818  390826 main.go:141] libmachine: (ha-086149) DBG |   </ip>
	I0819 18:01:14.332824  390826 main.go:141] libmachine: (ha-086149) DBG |   
	I0819 18:01:14.332830  390826 main.go:141] libmachine: (ha-086149) DBG | </network>
	I0819 18:01:14.332839  390826 main.go:141] libmachine: (ha-086149) DBG | 
	I0819 18:01:14.338000  390826 main.go:141] libmachine: (ha-086149) DBG | trying to create private KVM network mk-ha-086149 192.168.39.0/24...
	I0819 18:01:14.402561  390826 main.go:141] libmachine: (ha-086149) DBG | private KVM network mk-ha-086149 192.168.39.0/24 created
	I0819 18:01:14.402609  390826 main.go:141] libmachine: (ha-086149) DBG | I0819 18:01:14.402535  390849 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 18:01:14.402621  390826 main.go:141] libmachine: (ha-086149) Setting up store path in /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149 ...
	I0819 18:01:14.402647  390826 main.go:141] libmachine: (ha-086149) Building disk image from file:///home/jenkins/minikube-integration/19468-372744/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 18:01:14.402674  390826 main.go:141] libmachine: (ha-086149) Downloading /home/jenkins/minikube-integration/19468-372744/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19468-372744/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 18:01:14.678792  390826 main.go:141] libmachine: (ha-086149) DBG | I0819 18:01:14.678650  390849 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/id_rsa...
	I0819 18:01:14.736590  390826 main.go:141] libmachine: (ha-086149) DBG | I0819 18:01:14.736432  390849 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/ha-086149.rawdisk...
	I0819 18:01:14.736625  390826 main.go:141] libmachine: (ha-086149) DBG | Writing magic tar header
	I0819 18:01:14.736689  390826 main.go:141] libmachine: (ha-086149) DBG | Writing SSH key tar header
	I0819 18:01:14.736745  390826 main.go:141] libmachine: (ha-086149) DBG | I0819 18:01:14.736551  390849 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149 ...
	I0819 18:01:14.736763  390826 main.go:141] libmachine: (ha-086149) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149 (perms=drwx------)
	I0819 18:01:14.736775  390826 main.go:141] libmachine: (ha-086149) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744/.minikube/machines (perms=drwxr-xr-x)
	I0819 18:01:14.736783  390826 main.go:141] libmachine: (ha-086149) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744/.minikube (perms=drwxr-xr-x)
	I0819 18:01:14.736798  390826 main.go:141] libmachine: (ha-086149) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744 (perms=drwxrwxr-x)
	I0819 18:01:14.736819  390826 main.go:141] libmachine: (ha-086149) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149
	I0819 18:01:14.736829  390826 main.go:141] libmachine: (ha-086149) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 18:01:14.736838  390826 main.go:141] libmachine: (ha-086149) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 18:01:14.736847  390826 main.go:141] libmachine: (ha-086149) Creating domain...
	I0819 18:01:14.736858  390826 main.go:141] libmachine: (ha-086149) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744/.minikube/machines
	I0819 18:01:14.736867  390826 main.go:141] libmachine: (ha-086149) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 18:01:14.736874  390826 main.go:141] libmachine: (ha-086149) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744
	I0819 18:01:14.736881  390826 main.go:141] libmachine: (ha-086149) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 18:01:14.736887  390826 main.go:141] libmachine: (ha-086149) DBG | Checking permissions on dir: /home/jenkins
	I0819 18:01:14.736896  390826 main.go:141] libmachine: (ha-086149) DBG | Checking permissions on dir: /home
	I0819 18:01:14.736973  390826 main.go:141] libmachine: (ha-086149) DBG | Skipping /home - not owner
	I0819 18:01:14.737957  390826 main.go:141] libmachine: (ha-086149) define libvirt domain using xml: 
	I0819 18:01:14.737981  390826 main.go:141] libmachine: (ha-086149) <domain type='kvm'>
	I0819 18:01:14.737990  390826 main.go:141] libmachine: (ha-086149)   <name>ha-086149</name>
	I0819 18:01:14.738001  390826 main.go:141] libmachine: (ha-086149)   <memory unit='MiB'>2200</memory>
	I0819 18:01:14.738013  390826 main.go:141] libmachine: (ha-086149)   <vcpu>2</vcpu>
	I0819 18:01:14.738018  390826 main.go:141] libmachine: (ha-086149)   <features>
	I0819 18:01:14.738023  390826 main.go:141] libmachine: (ha-086149)     <acpi/>
	I0819 18:01:14.738027  390826 main.go:141] libmachine: (ha-086149)     <apic/>
	I0819 18:01:14.738032  390826 main.go:141] libmachine: (ha-086149)     <pae/>
	I0819 18:01:14.738037  390826 main.go:141] libmachine: (ha-086149)     
	I0819 18:01:14.738046  390826 main.go:141] libmachine: (ha-086149)   </features>
	I0819 18:01:14.738051  390826 main.go:141] libmachine: (ha-086149)   <cpu mode='host-passthrough'>
	I0819 18:01:14.738058  390826 main.go:141] libmachine: (ha-086149)   
	I0819 18:01:14.738068  390826 main.go:141] libmachine: (ha-086149)   </cpu>
	I0819 18:01:14.738087  390826 main.go:141] libmachine: (ha-086149)   <os>
	I0819 18:01:14.738103  390826 main.go:141] libmachine: (ha-086149)     <type>hvm</type>
	I0819 18:01:14.738109  390826 main.go:141] libmachine: (ha-086149)     <boot dev='cdrom'/>
	I0819 18:01:14.738113  390826 main.go:141] libmachine: (ha-086149)     <boot dev='hd'/>
	I0819 18:01:14.738119  390826 main.go:141] libmachine: (ha-086149)     <bootmenu enable='no'/>
	I0819 18:01:14.738126  390826 main.go:141] libmachine: (ha-086149)   </os>
	I0819 18:01:14.738130  390826 main.go:141] libmachine: (ha-086149)   <devices>
	I0819 18:01:14.738136  390826 main.go:141] libmachine: (ha-086149)     <disk type='file' device='cdrom'>
	I0819 18:01:14.738146  390826 main.go:141] libmachine: (ha-086149)       <source file='/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/boot2docker.iso'/>
	I0819 18:01:14.738151  390826 main.go:141] libmachine: (ha-086149)       <target dev='hdc' bus='scsi'/>
	I0819 18:01:14.738159  390826 main.go:141] libmachine: (ha-086149)       <readonly/>
	I0819 18:01:14.738163  390826 main.go:141] libmachine: (ha-086149)     </disk>
	I0819 18:01:14.738170  390826 main.go:141] libmachine: (ha-086149)     <disk type='file' device='disk'>
	I0819 18:01:14.738176  390826 main.go:141] libmachine: (ha-086149)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 18:01:14.738186  390826 main.go:141] libmachine: (ha-086149)       <source file='/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/ha-086149.rawdisk'/>
	I0819 18:01:14.738191  390826 main.go:141] libmachine: (ha-086149)       <target dev='hda' bus='virtio'/>
	I0819 18:01:14.738197  390826 main.go:141] libmachine: (ha-086149)     </disk>
	I0819 18:01:14.738202  390826 main.go:141] libmachine: (ha-086149)     <interface type='network'>
	I0819 18:01:14.738231  390826 main.go:141] libmachine: (ha-086149)       <source network='mk-ha-086149'/>
	I0819 18:01:14.738247  390826 main.go:141] libmachine: (ha-086149)       <model type='virtio'/>
	I0819 18:01:14.738268  390826 main.go:141] libmachine: (ha-086149)     </interface>
	I0819 18:01:14.738283  390826 main.go:141] libmachine: (ha-086149)     <interface type='network'>
	I0819 18:01:14.738296  390826 main.go:141] libmachine: (ha-086149)       <source network='default'/>
	I0819 18:01:14.738310  390826 main.go:141] libmachine: (ha-086149)       <model type='virtio'/>
	I0819 18:01:14.738332  390826 main.go:141] libmachine: (ha-086149)     </interface>
	I0819 18:01:14.738346  390826 main.go:141] libmachine: (ha-086149)     <serial type='pty'>
	I0819 18:01:14.738358  390826 main.go:141] libmachine: (ha-086149)       <target port='0'/>
	I0819 18:01:14.738368  390826 main.go:141] libmachine: (ha-086149)     </serial>
	I0819 18:01:14.738396  390826 main.go:141] libmachine: (ha-086149)     <console type='pty'>
	I0819 18:01:14.738405  390826 main.go:141] libmachine: (ha-086149)       <target type='serial' port='0'/>
	I0819 18:01:14.738410  390826 main.go:141] libmachine: (ha-086149)     </console>
	I0819 18:01:14.738417  390826 main.go:141] libmachine: (ha-086149)     <rng model='virtio'>
	I0819 18:01:14.738423  390826 main.go:141] libmachine: (ha-086149)       <backend model='random'>/dev/random</backend>
	I0819 18:01:14.738430  390826 main.go:141] libmachine: (ha-086149)     </rng>
	I0819 18:01:14.738435  390826 main.go:141] libmachine: (ha-086149)     
	I0819 18:01:14.738446  390826 main.go:141] libmachine: (ha-086149)     
	I0819 18:01:14.738453  390826 main.go:141] libmachine: (ha-086149)   </devices>
	I0819 18:01:14.738469  390826 main.go:141] libmachine: (ha-086149) </domain>
	I0819 18:01:14.738479  390826 main.go:141] libmachine: (ha-086149) 
	I0819 18:01:14.743216  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:03:c5:5f in network default
	I0819 18:01:14.743804  390826 main.go:141] libmachine: (ha-086149) Ensuring networks are active...
	I0819 18:01:14.743825  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:14.744421  390826 main.go:141] libmachine: (ha-086149) Ensuring network default is active
	I0819 18:01:14.744762  390826 main.go:141] libmachine: (ha-086149) Ensuring network mk-ha-086149 is active
	I0819 18:01:14.745298  390826 main.go:141] libmachine: (ha-086149) Getting domain xml...
	I0819 18:01:14.745905  390826 main.go:141] libmachine: (ha-086149) Creating domain...
	I0819 18:01:15.953141  390826 main.go:141] libmachine: (ha-086149) Waiting to get IP...
	I0819 18:01:15.953890  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:15.954251  390826 main.go:141] libmachine: (ha-086149) DBG | unable to find current IP address of domain ha-086149 in network mk-ha-086149
	I0819 18:01:15.954271  390826 main.go:141] libmachine: (ha-086149) DBG | I0819 18:01:15.954227  390849 retry.go:31] will retry after 231.676833ms: waiting for machine to come up
	I0819 18:01:16.187742  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:16.188211  390826 main.go:141] libmachine: (ha-086149) DBG | unable to find current IP address of domain ha-086149 in network mk-ha-086149
	I0819 18:01:16.188245  390826 main.go:141] libmachine: (ha-086149) DBG | I0819 18:01:16.188162  390849 retry.go:31] will retry after 292.527195ms: waiting for machine to come up
	I0819 18:01:16.482731  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:16.483176  390826 main.go:141] libmachine: (ha-086149) DBG | unable to find current IP address of domain ha-086149 in network mk-ha-086149
	I0819 18:01:16.483203  390826 main.go:141] libmachine: (ha-086149) DBG | I0819 18:01:16.483122  390849 retry.go:31] will retry after 330.893319ms: waiting for machine to come up
	I0819 18:01:16.815745  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:16.816126  390826 main.go:141] libmachine: (ha-086149) DBG | unable to find current IP address of domain ha-086149 in network mk-ha-086149
	I0819 18:01:16.816156  390826 main.go:141] libmachine: (ha-086149) DBG | I0819 18:01:16.816076  390849 retry.go:31] will retry after 444.378344ms: waiting for machine to come up
	I0819 18:01:17.261713  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:17.262004  390826 main.go:141] libmachine: (ha-086149) DBG | unable to find current IP address of domain ha-086149 in network mk-ha-086149
	I0819 18:01:17.262034  390826 main.go:141] libmachine: (ha-086149) DBG | I0819 18:01:17.261932  390849 retry.go:31] will retry after 566.799409ms: waiting for machine to come up
	I0819 18:01:17.830885  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:17.831318  390826 main.go:141] libmachine: (ha-086149) DBG | unable to find current IP address of domain ha-086149 in network mk-ha-086149
	I0819 18:01:17.831344  390826 main.go:141] libmachine: (ha-086149) DBG | I0819 18:01:17.831270  390849 retry.go:31] will retry after 748.576215ms: waiting for machine to come up
	I0819 18:01:18.581145  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:18.581611  390826 main.go:141] libmachine: (ha-086149) DBG | unable to find current IP address of domain ha-086149 in network mk-ha-086149
	I0819 18:01:18.581660  390826 main.go:141] libmachine: (ha-086149) DBG | I0819 18:01:18.581558  390849 retry.go:31] will retry after 1.124966525s: waiting for machine to come up
	I0819 18:01:19.708677  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:19.709123  390826 main.go:141] libmachine: (ha-086149) DBG | unable to find current IP address of domain ha-086149 in network mk-ha-086149
	I0819 18:01:19.709155  390826 main.go:141] libmachine: (ha-086149) DBG | I0819 18:01:19.709077  390849 retry.go:31] will retry after 1.107728894s: waiting for machine to come up
	I0819 18:01:20.818466  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:20.818893  390826 main.go:141] libmachine: (ha-086149) DBG | unable to find current IP address of domain ha-086149 in network mk-ha-086149
	I0819 18:01:20.818959  390826 main.go:141] libmachine: (ha-086149) DBG | I0819 18:01:20.818841  390849 retry.go:31] will retry after 1.665812969s: waiting for machine to come up
	I0819 18:01:22.486711  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:22.487198  390826 main.go:141] libmachine: (ha-086149) DBG | unable to find current IP address of domain ha-086149 in network mk-ha-086149
	I0819 18:01:22.487233  390826 main.go:141] libmachine: (ha-086149) DBG | I0819 18:01:22.487151  390849 retry.go:31] will retry after 1.582489658s: waiting for machine to come up
	I0819 18:01:24.072236  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:24.072800  390826 main.go:141] libmachine: (ha-086149) DBG | unable to find current IP address of domain ha-086149 in network mk-ha-086149
	I0819 18:01:24.072833  390826 main.go:141] libmachine: (ha-086149) DBG | I0819 18:01:24.072721  390849 retry.go:31] will retry after 2.220917653s: waiting for machine to come up
	I0819 18:01:26.294955  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:26.295430  390826 main.go:141] libmachine: (ha-086149) DBG | unable to find current IP address of domain ha-086149 in network mk-ha-086149
	I0819 18:01:26.295453  390826 main.go:141] libmachine: (ha-086149) DBG | I0819 18:01:26.295399  390849 retry.go:31] will retry after 3.560062988s: waiting for machine to come up
	I0819 18:01:29.856788  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:29.857284  390826 main.go:141] libmachine: (ha-086149) DBG | unable to find current IP address of domain ha-086149 in network mk-ha-086149
	I0819 18:01:29.857309  390826 main.go:141] libmachine: (ha-086149) DBG | I0819 18:01:29.857243  390849 retry.go:31] will retry after 3.132423259s: waiting for machine to come up
	I0819 18:01:32.993589  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:32.993968  390826 main.go:141] libmachine: (ha-086149) DBG | unable to find current IP address of domain ha-086149 in network mk-ha-086149
	I0819 18:01:32.993998  390826 main.go:141] libmachine: (ha-086149) DBG | I0819 18:01:32.993903  390849 retry.go:31] will retry after 4.312546597s: waiting for machine to come up
	I0819 18:01:37.310234  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:37.310613  390826 main.go:141] libmachine: (ha-086149) Found IP for machine: 192.168.39.249
	I0819 18:01:37.310637  390826 main.go:141] libmachine: (ha-086149) Reserving static IP address...
	I0819 18:01:37.310650  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has current primary IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:37.311102  390826 main.go:141] libmachine: (ha-086149) DBG | unable to find host DHCP lease matching {name: "ha-086149", mac: "52:54:00:3b:ab:95", ip: "192.168.39.249"} in network mk-ha-086149
	I0819 18:01:37.382735  390826 main.go:141] libmachine: (ha-086149) DBG | Getting to WaitForSSH function...
	I0819 18:01:37.382762  390826 main.go:141] libmachine: (ha-086149) Reserved static IP address: 192.168.39.249
	I0819 18:01:37.382775  390826 main.go:141] libmachine: (ha-086149) Waiting for SSH to be available...
	I0819 18:01:37.385538  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:37.385901  390826 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3b:ab:95}
	I0819 18:01:37.385933  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:37.386056  390826 main.go:141] libmachine: (ha-086149) DBG | Using SSH client type: external
	I0819 18:01:37.386085  390826 main.go:141] libmachine: (ha-086149) DBG | Using SSH private key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/id_rsa (-rw-------)
	I0819 18:01:37.386117  390826 main.go:141] libmachine: (ha-086149) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.249 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 18:01:37.386150  390826 main.go:141] libmachine: (ha-086149) DBG | About to run SSH command:
	I0819 18:01:37.386177  390826 main.go:141] libmachine: (ha-086149) DBG | exit 0
	I0819 18:01:37.508186  390826 main.go:141] libmachine: (ha-086149) DBG | SSH cmd err, output: <nil>: 
	I0819 18:01:37.508445  390826 main.go:141] libmachine: (ha-086149) KVM machine creation complete!
	I0819 18:01:37.508869  390826 main.go:141] libmachine: (ha-086149) Calling .GetConfigRaw
	I0819 18:01:37.509429  390826 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:01:37.509628  390826 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:01:37.509764  390826 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 18:01:37.509780  390826 main.go:141] libmachine: (ha-086149) Calling .GetState
	I0819 18:01:37.511032  390826 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 18:01:37.511048  390826 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 18:01:37.511056  390826 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 18:01:37.511063  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:01:37.513123  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:37.513488  390826 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:01:37.513515  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:37.513669  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:01:37.513880  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:01:37.514076  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:01:37.514212  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:01:37.514390  390826 main.go:141] libmachine: Using SSH client type: native
	I0819 18:01:37.514597  390826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0819 18:01:37.514608  390826 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 18:01:37.615268  390826 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 18:01:37.615299  390826 main.go:141] libmachine: Detecting the provisioner...
	I0819 18:01:37.615309  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:01:37.617932  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:37.618267  390826 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:01:37.618295  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:37.618456  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:01:37.618688  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:01:37.618855  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:01:37.619026  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:01:37.619166  390826 main.go:141] libmachine: Using SSH client type: native
	I0819 18:01:37.619344  390826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0819 18:01:37.619355  390826 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 18:01:37.724338  390826 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 18:01:37.724449  390826 main.go:141] libmachine: found compatible host: buildroot
	I0819 18:01:37.724459  390826 main.go:141] libmachine: Provisioning with buildroot...
	I0819 18:01:37.724470  390826 main.go:141] libmachine: (ha-086149) Calling .GetMachineName
	I0819 18:01:37.724739  390826 buildroot.go:166] provisioning hostname "ha-086149"
	I0819 18:01:37.724769  390826 main.go:141] libmachine: (ha-086149) Calling .GetMachineName
	I0819 18:01:37.724966  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:01:37.727668  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:37.728005  390826 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:01:37.728048  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:37.728267  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:01:37.728456  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:01:37.728626  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:01:37.728792  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:01:37.728936  390826 main.go:141] libmachine: Using SSH client type: native
	I0819 18:01:37.729115  390826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0819 18:01:37.729129  390826 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-086149 && echo "ha-086149" | sudo tee /etc/hostname
	I0819 18:01:37.842792  390826 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-086149
	
	I0819 18:01:37.842819  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:01:37.845794  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:37.846081  390826 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:01:37.846104  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:37.846317  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:01:37.846579  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:01:37.846767  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:01:37.846897  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:01:37.847116  390826 main.go:141] libmachine: Using SSH client type: native
	I0819 18:01:37.847282  390826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0819 18:01:37.847298  390826 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-086149' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-086149/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-086149' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 18:01:37.957710  390826 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 18:01:37.957771  390826 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19468-372744/.minikube CaCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19468-372744/.minikube}
	I0819 18:01:37.957810  390826 buildroot.go:174] setting up certificates
	I0819 18:01:37.957820  390826 provision.go:84] configureAuth start
	I0819 18:01:37.957834  390826 main.go:141] libmachine: (ha-086149) Calling .GetMachineName
	I0819 18:01:37.958169  390826 main.go:141] libmachine: (ha-086149) Calling .GetIP
	I0819 18:01:37.961063  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:37.961475  390826 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:01:37.961501  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:37.961659  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:01:37.964114  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:37.964462  390826 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:01:37.964485  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:37.964677  390826 provision.go:143] copyHostCerts
	I0819 18:01:37.964713  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 18:01:37.964759  390826 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem, removing ...
	I0819 18:01:37.964776  390826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 18:01:37.964850  390826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem (1123 bytes)
	I0819 18:01:37.964968  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 18:01:37.964987  390826 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem, removing ...
	I0819 18:01:37.965004  390826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 18:01:37.965034  390826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem (1675 bytes)
	I0819 18:01:37.965088  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 18:01:37.965104  390826 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem, removing ...
	I0819 18:01:37.965108  390826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 18:01:37.965133  390826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem (1082 bytes)
	I0819 18:01:37.965234  390826 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem org=jenkins.ha-086149 san=[127.0.0.1 192.168.39.249 ha-086149 localhost minikube]
	I0819 18:01:38.173183  390826 provision.go:177] copyRemoteCerts
	I0819 18:01:38.173246  390826 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 18:01:38.173275  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:01:38.175851  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:38.176095  390826 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:01:38.176128  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:38.176282  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:01:38.176497  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:01:38.176665  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:01:38.176833  390826 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/id_rsa Username:docker}
	I0819 18:01:38.257560  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 18:01:38.257639  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 18:01:38.284684  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 18:01:38.284752  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0819 18:01:38.309385  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 18:01:38.309447  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 18:01:38.333123  390826 provision.go:87] duration metric: took 375.286063ms to configureAuth
	I0819 18:01:38.333155  390826 buildroot.go:189] setting minikube options for container-runtime
	I0819 18:01:38.333397  390826 config.go:182] Loaded profile config "ha-086149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:01:38.333516  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:01:38.335910  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:38.336207  390826 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:01:38.336232  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:38.336374  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:01:38.336579  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:01:38.336758  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:01:38.336911  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:01:38.337075  390826 main.go:141] libmachine: Using SSH client type: native
	I0819 18:01:38.337341  390826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0819 18:01:38.337363  390826 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 18:01:38.598506  390826 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
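The SSH step above writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarts CRI-O. A quick manual spot-check, assuming the same profile name and minikube binary used elsewhere in this report, would be roughly:

	out/minikube-linux-amd64 -p ha-086149 ssh "cat /etc/sysconfig/crio.minikube && sudo systemctl is-active crio"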
	
	I0819 18:01:38.598543  390826 main.go:141] libmachine: Checking connection to Docker...
	I0819 18:01:38.598553  390826 main.go:141] libmachine: (ha-086149) Calling .GetURL
	I0819 18:01:38.599830  390826 main.go:141] libmachine: (ha-086149) DBG | Using libvirt version 6000000
	I0819 18:01:38.603049  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:38.603455  390826 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:01:38.603479  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:38.603662  390826 main.go:141] libmachine: Docker is up and running!
	I0819 18:01:38.603695  390826 main.go:141] libmachine: Reticulating splines...
	I0819 18:01:38.603704  390826 client.go:171] duration metric: took 24.273868888s to LocalClient.Create
	I0819 18:01:38.603734  390826 start.go:167] duration metric: took 24.273933922s to libmachine.API.Create "ha-086149"
	I0819 18:01:38.603746  390826 start.go:293] postStartSetup for "ha-086149" (driver="kvm2")
	I0819 18:01:38.603759  390826 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 18:01:38.603780  390826 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:01:38.604028  390826 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 18:01:38.604059  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:01:38.606363  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:38.606683  390826 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:01:38.606703  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:38.606858  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:01:38.607012  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:01:38.607149  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:01:38.607289  390826 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/id_rsa Username:docker}
	I0819 18:01:38.686072  390826 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 18:01:38.690382  390826 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 18:01:38.690411  390826 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/addons for local assets ...
	I0819 18:01:38.690477  390826 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/files for local assets ...
	I0819 18:01:38.690547  390826 filesync.go:149] local asset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> 3800092.pem in /etc/ssl/certs
	I0819 18:01:38.690556  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> /etc/ssl/certs/3800092.pem
	I0819 18:01:38.690647  390826 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 18:01:38.700129  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 18:01:38.725376  390826 start.go:296] duration metric: took 121.612672ms for postStartSetup
	I0819 18:01:38.725438  390826 main.go:141] libmachine: (ha-086149) Calling .GetConfigRaw
	I0819 18:01:38.726203  390826 main.go:141] libmachine: (ha-086149) Calling .GetIP
	I0819 18:01:38.728817  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:38.729168  390826 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:01:38.729189  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:38.729441  390826 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/config.json ...
	I0819 18:01:38.729623  390826 start.go:128] duration metric: took 24.417747393s to createHost
	I0819 18:01:38.729647  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:01:38.731878  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:38.732140  390826 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:01:38.732174  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:38.732297  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:01:38.732481  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:01:38.732618  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:01:38.732709  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:01:38.732872  390826 main.go:141] libmachine: Using SSH client type: native
	I0819 18:01:38.733034  390826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0819 18:01:38.733047  390826 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 18:01:38.832329  390826 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724090498.808951790
	
	I0819 18:01:38.832355  390826 fix.go:216] guest clock: 1724090498.808951790
	I0819 18:01:38.832365  390826 fix.go:229] Guest: 2024-08-19 18:01:38.80895179 +0000 UTC Remote: 2024-08-19 18:01:38.729636292 +0000 UTC m=+24.523532707 (delta=79.315498ms)
	I0819 18:01:38.832393  390826 fix.go:200] guest clock delta is within tolerance: 79.315498ms
	I0819 18:01:38.832402  390826 start.go:83] releasing machines lock for "ha-086149", held for 24.520619381s
	I0819 18:01:38.832430  390826 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:01:38.832727  390826 main.go:141] libmachine: (ha-086149) Calling .GetIP
	I0819 18:01:38.835361  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:38.835631  390826 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:01:38.835661  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:38.835753  390826 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:01:38.836218  390826 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:01:38.836367  390826 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:01:38.836443  390826 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 18:01:38.836492  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:01:38.836568  390826 ssh_runner.go:195] Run: cat /version.json
	I0819 18:01:38.836594  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:01:38.839021  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:38.839317  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:38.839529  390826 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:01:38.839556  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:38.839615  390826 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:01:38.839632  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:38.839691  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:01:38.839877  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:01:38.839879  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:01:38.840170  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:01:38.840181  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:01:38.840341  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:01:38.840337  390826 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/id_rsa Username:docker}
	I0819 18:01:38.840488  390826 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/id_rsa Username:docker}
	I0819 18:01:38.935481  390826 ssh_runner.go:195] Run: systemctl --version
	I0819 18:01:38.941309  390826 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 18:01:39.096352  390826 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 18:01:39.102482  390826 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 18:01:39.102559  390826 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 18:01:39.118206  390826 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 18:01:39.118237  390826 start.go:495] detecting cgroup driver to use...
	I0819 18:01:39.118319  390826 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 18:01:39.134273  390826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 18:01:39.148062  390826 docker.go:217] disabling cri-docker service (if available) ...
	I0819 18:01:39.148121  390826 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 18:01:39.161462  390826 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 18:01:39.175269  390826 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 18:01:39.293356  390826 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 18:01:39.445330  390826 docker.go:233] disabling docker service ...
	I0819 18:01:39.445414  390826 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 18:01:39.460090  390826 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 18:01:39.472790  390826 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 18:01:39.616740  390826 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 18:01:39.734300  390826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 18:01:39.748386  390826 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 18:01:39.766358  390826 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 18:01:39.766422  390826 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:01:39.776623  390826 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 18:01:39.776685  390826 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:01:39.786691  390826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:01:39.796640  390826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:01:39.806683  390826 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 18:01:39.816903  390826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:01:39.826798  390826 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:01:39.843421  390826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
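The sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with pause_image = "registry.k8s.io/pause:3.10", cgroup_manager = "cgroupfs", conmon_cgroup = "pod", and net.ipv4.ip_unprivileged_port_start=0 in default_sysctls. A hedged way to verify the result on the guest (same profile/binary assumption as above) is:

	out/minikube-linux-amd64 -p ha-086149 ssh "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"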
	I0819 18:01:39.853381  390826 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 18:01:39.862235  390826 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 18:01:39.862289  390826 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 18:01:39.874809  390826 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 18:01:39.883569  390826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:01:39.998358  390826 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 18:01:40.135661  390826 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 18:01:40.135757  390826 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 18:01:40.140313  390826 start.go:563] Will wait 60s for crictl version
	I0819 18:01:40.140376  390826 ssh_runner.go:195] Run: which crictl
	I0819 18:01:40.144077  390826 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 18:01:40.180775  390826 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 18:01:40.180864  390826 ssh_runner.go:195] Run: crio --version
	I0819 18:01:40.210079  390826 ssh_runner.go:195] Run: crio --version
	I0819 18:01:40.240165  390826 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 18:01:40.241358  390826 main.go:141] libmachine: (ha-086149) Calling .GetIP
	I0819 18:01:40.244054  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:40.244407  390826 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:01:40.244433  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:40.244638  390826 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 18:01:40.248760  390826 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 18:01:40.262105  390826 kubeadm.go:883] updating cluster {Name:ha-086149 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-086149 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 18:01:40.262241  390826 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 18:01:40.262306  390826 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 18:01:40.294822  390826 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 18:01:40.294904  390826 ssh_runner.go:195] Run: which lz4
	I0819 18:01:40.298591  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0819 18:01:40.298677  390826 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 18:01:40.302618  390826 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 18:01:40.302644  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 18:01:41.653130  390826 crio.go:462] duration metric: took 1.354478555s to copy over tarball
	I0819 18:01:41.653222  390826 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 18:01:43.658136  390826 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.004875453s)
	I0819 18:01:43.658164  390826 crio.go:469] duration metric: took 2.005002364s to extract the tarball
	I0819 18:01:43.658171  390826 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 18:01:43.697217  390826 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 18:01:43.745822  390826 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 18:01:43.745847  390826 cache_images.go:84] Images are preloaded, skipping loading
	I0819 18:01:43.745858  390826 kubeadm.go:934] updating node { 192.168.39.249 8443 v1.31.0 crio true true} ...
	I0819 18:01:43.746007  390826 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-086149 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.249
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-086149 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 18:01:43.746105  390826 ssh_runner.go:195] Run: crio config
	I0819 18:01:43.791378  390826 cni.go:84] Creating CNI manager for ""
	I0819 18:01:43.791406  390826 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0819 18:01:43.791428  390826 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 18:01:43.791459  390826 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.249 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-086149 NodeName:ha-086149 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.249"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.249 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 18:01:43.791667  390826 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.249
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-086149"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.249
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.249"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
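The kubeadm config above is staged as /var/tmp/minikube/kubeadm.yaml.new and later copied to /var/tmp/minikube/kubeadm.yaml. As a sketch only (assuming the "kubeadm config validate" subcommand available in recent releases, including the v1.31.0 binary staged under /var/lib/minikube/binaries), it could be checked by hand with:

	sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml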
	
	I0819 18:01:43.791719  390826 kube-vip.go:115] generating kube-vip config ...
	I0819 18:01:43.791775  390826 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 18:01:43.808159  390826 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 18:01:43.808286  390826 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
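The kube-vip static pod above is what should eventually bind the control-plane VIP 192.168.39.254 on eth0. One hedged way to confirm the VIP is present once the node is up, under the same profile/binary assumptions as the earlier checks, is:

	out/minikube-linux-amd64 -p ha-086149 ssh "ip -4 addr show dev eth0 | grep 192.168.39.254"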
	I0819 18:01:43.808341  390826 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 18:01:43.818120  390826 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 18:01:43.818166  390826 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0819 18:01:43.827346  390826 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0819 18:01:43.843358  390826 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 18:01:43.859459  390826 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0819 18:01:43.875500  390826 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0819 18:01:43.891118  390826 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0819 18:01:43.894940  390826 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 18:01:43.906694  390826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:01:44.019755  390826 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 18:01:44.037206  390826 certs.go:68] Setting up /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149 for IP: 192.168.39.249
	I0819 18:01:44.037233  390826 certs.go:194] generating shared ca certs ...
	I0819 18:01:44.037250  390826 certs.go:226] acquiring lock for ca certs: {Name:mk639e03f593e0bccac045f6e9f5ba3b96cc81e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:01:44.037395  390826 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key
	I0819 18:01:44.037430  390826 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key
	I0819 18:01:44.037439  390826 certs.go:256] generating profile certs ...
	I0819 18:01:44.037486  390826 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/client.key
	I0819 18:01:44.037513  390826 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/client.crt with IP's: []
	I0819 18:01:44.154467  390826 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/client.crt ...
	I0819 18:01:44.154501  390826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/client.crt: {Name:mk258075469b347e17ae9e52e38a8f7b4d8898f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:01:44.154664  390826 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/client.key ...
	I0819 18:01:44.154675  390826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/client.key: {Name:mkb5a4a095ddf05a1ffc45a14947f43ab1e167d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:01:44.154759  390826 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key.2127eae6
	I0819 18:01:44.154775  390826 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt.2127eae6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.249 192.168.39.254]
	I0819 18:01:44.407450  390826 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt.2127eae6 ...
	I0819 18:01:44.407483  390826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt.2127eae6: {Name:mkaa4255cf0215780e52d06d0978b9ef66e9383c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:01:44.407659  390826 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key.2127eae6 ...
	I0819 18:01:44.407689  390826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key.2127eae6: {Name:mk13449ba75342bd86a357e19023a42b69429c07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:01:44.407769  390826 certs.go:381] copying /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt.2127eae6 -> /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt
	I0819 18:01:44.407871  390826 certs.go:385] copying /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key.2127eae6 -> /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key
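The apiserver profile certificate assembled above is signed for the SANs generated earlier (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.249 and the HA VIP 192.168.39.254). A sketch of how to inspect those SANs on the build host is:

	openssl x509 -noout -text -in /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt | grep -A1 'Subject Alternative Name'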
	I0819 18:01:44.407938  390826 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/proxy-client.key
	I0819 18:01:44.407954  390826 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/proxy-client.crt with IP's: []
	I0819 18:01:44.659255  390826 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/proxy-client.crt ...
	I0819 18:01:44.659286  390826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/proxy-client.crt: {Name:mk8161be27b842429a94ece9edfb4c7103e5dd4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:01:44.659443  390826 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/proxy-client.key ...
	I0819 18:01:44.659454  390826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/proxy-client.key: {Name:mk49fe1209981c015e7b47bc5acccdb54fa003fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:01:44.659523  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 18:01:44.659544  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 18:01:44.659557  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 18:01:44.659567  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 18:01:44.659580  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 18:01:44.659591  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 18:01:44.659603  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 18:01:44.659616  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 18:01:44.659670  390826 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem (1338 bytes)
	W0819 18:01:44.659721  390826 certs.go:480] ignoring /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009_empty.pem, impossibly tiny 0 bytes
	I0819 18:01:44.659731  390826 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 18:01:44.659752  390826 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem (1082 bytes)
	I0819 18:01:44.659774  390826 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem (1123 bytes)
	I0819 18:01:44.659794  390826 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem (1675 bytes)
	I0819 18:01:44.659829  390826 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 18:01:44.659857  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:01:44.659871  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem -> /usr/share/ca-certificates/380009.pem
	I0819 18:01:44.659884  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> /usr/share/ca-certificates/3800092.pem
	I0819 18:01:44.660513  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 18:01:44.686415  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 18:01:44.714836  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 18:01:44.742072  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 18:01:44.769557  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 18:01:44.801060  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 18:01:44.847181  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 18:01:44.886103  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 18:01:44.912931  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 18:01:44.939740  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem --> /usr/share/ca-certificates/380009.pem (1338 bytes)
	I0819 18:01:44.966553  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /usr/share/ca-certificates/3800092.pem (1708 bytes)
	I0819 18:01:44.993733  390826 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 18:01:45.012583  390826 ssh_runner.go:195] Run: openssl version
	I0819 18:01:45.018619  390826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/380009.pem && ln -fs /usr/share/ca-certificates/380009.pem /etc/ssl/certs/380009.pem"
	I0819 18:01:45.030131  390826 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/380009.pem
	I0819 18:01:45.035072  390826 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:56 /usr/share/ca-certificates/380009.pem
	I0819 18:01:45.035138  390826 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/380009.pem
	I0819 18:01:45.041228  390826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/380009.pem /etc/ssl/certs/51391683.0"
	I0819 18:01:45.052433  390826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3800092.pem && ln -fs /usr/share/ca-certificates/3800092.pem /etc/ssl/certs/3800092.pem"
	I0819 18:01:45.063641  390826 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3800092.pem
	I0819 18:01:45.068462  390826 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:56 /usr/share/ca-certificates/3800092.pem
	I0819 18:01:45.068527  390826 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3800092.pem
	I0819 18:01:45.074375  390826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3800092.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 18:01:45.085468  390826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 18:01:45.096018  390826 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:01:45.100715  390826 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:01:45.100771  390826 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:01:45.106535  390826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
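The three ln -fs commands above install each CA into the guest's OpenSSL trust directory under its subject-hash name (51391683.0, 3ec20f2e.0, b5213941.0). The link name comes from the same openssl hash the log computes; a minimal sketch of that pattern is:

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"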
	I0819 18:01:45.117371  390826 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 18:01:45.121839  390826 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 18:01:45.121891  390826 kubeadm.go:392] StartCluster: {Name:ha-086149 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-086149 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:01:45.121970  390826 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 18:01:45.122022  390826 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 18:01:45.164294  390826 cri.go:89] found id: ""
	I0819 18:01:45.164366  390826 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 18:01:45.174823  390826 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 18:01:45.184977  390826 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 18:01:45.198329  390826 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 18:01:45.198350  390826 kubeadm.go:157] found existing configuration files:
	
	I0819 18:01:45.198399  390826 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 18:01:45.209542  390826 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 18:01:45.209593  390826 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 18:01:45.219539  390826 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 18:01:45.228956  390826 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 18:01:45.229021  390826 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 18:01:45.238691  390826 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 18:01:45.248330  390826 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 18:01:45.248400  390826 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 18:01:45.258511  390826 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 18:01:45.273668  390826 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 18:01:45.273735  390826 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 18:01:45.283470  390826 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 18:01:45.396768  390826 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 18:01:45.396886  390826 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 18:01:45.493304  390826 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 18:01:45.493445  390826 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 18:01:45.493562  390826 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 18:01:45.504233  390826 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 18:01:45.572693  390826 out.go:235]   - Generating certificates and keys ...
	I0819 18:01:45.572859  390826 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 18:01:45.572953  390826 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 18:01:45.952901  390826 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 18:01:46.141101  390826 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 18:01:46.225834  390826 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 18:01:46.393564  390826 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 18:01:46.498486  390826 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 18:01:46.498651  390826 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-086149 localhost] and IPs [192.168.39.249 127.0.0.1 ::1]
	I0819 18:01:46.611046  390826 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 18:01:46.611211  390826 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-086149 localhost] and IPs [192.168.39.249 127.0.0.1 ::1]
	I0819 18:01:46.728113  390826 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 18:01:46.908159  390826 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 18:01:47.227993  390826 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 18:01:47.228204  390826 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 18:01:47.338009  390826 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 18:01:47.409840  390826 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 18:01:47.566221  390826 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 18:01:47.801677  390826 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 18:01:47.909159  390826 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 18:01:47.910131  390826 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 18:01:47.914891  390826 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 18:01:48.091438  390826 out.go:235]   - Booting up control plane ...
	I0819 18:01:48.091596  390826 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 18:01:48.091720  390826 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 18:01:48.091811  390826 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 18:01:48.091947  390826 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 18:01:48.092083  390826 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 18:01:48.092140  390826 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 18:01:48.092324  390826 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 18:01:48.092472  390826 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 18:01:48.586342  390826 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.436454ms
	I0819 18:01:48.586444  390826 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 18:01:54.544621  390826 kubeadm.go:310] [api-check] The API server is healthy after 5.961720563s
	I0819 18:01:54.561358  390826 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 18:01:54.579538  390826 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 18:01:54.611082  390826 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 18:01:54.611350  390826 kubeadm.go:310] [mark-control-plane] Marking the node ha-086149 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 18:01:54.633582  390826 kubeadm.go:310] [bootstrap-token] Using token: 6ctgsc.y7paq351y1edkj9k
	I0819 18:01:54.634932  390826 out.go:235]   - Configuring RBAC rules ...
	I0819 18:01:54.635053  390826 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 18:01:54.639884  390826 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 18:01:54.652039  390826 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 18:01:54.655851  390826 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 18:01:54.661049  390826 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 18:01:54.664499  390826 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 18:01:54.957667  390826 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 18:01:55.386738  390826 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 18:01:55.956556  390826 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 18:01:55.957757  390826 kubeadm.go:310] 
	I0819 18:01:55.957856  390826 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 18:01:55.957866  390826 kubeadm.go:310] 
	I0819 18:01:55.957957  390826 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 18:01:55.957965  390826 kubeadm.go:310] 
	I0819 18:01:55.957996  390826 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 18:01:55.958073  390826 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 18:01:55.958162  390826 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 18:01:55.958171  390826 kubeadm.go:310] 
	I0819 18:01:55.958252  390826 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 18:01:55.958260  390826 kubeadm.go:310] 
	I0819 18:01:55.958325  390826 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 18:01:55.958334  390826 kubeadm.go:310] 
	I0819 18:01:55.958398  390826 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 18:01:55.958506  390826 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 18:01:55.958586  390826 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 18:01:55.958603  390826 kubeadm.go:310] 
	I0819 18:01:55.958710  390826 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 18:01:55.958810  390826 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 18:01:55.958820  390826 kubeadm.go:310] 
	I0819 18:01:55.958924  390826 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 6ctgsc.y7paq351y1edkj9k \
	I0819 18:01:55.959068  390826 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3fcbd90565c5acbc36a47b2db682cb22dce9b172c9bf3af21e506ebb67608039 \
	I0819 18:01:55.959109  390826 kubeadm.go:310] 	--control-plane 
	I0819 18:01:55.959117  390826 kubeadm.go:310] 
	I0819 18:01:55.959228  390826 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 18:01:55.959241  390826 kubeadm.go:310] 
	I0819 18:01:55.959312  390826 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 6ctgsc.y7paq351y1edkj9k \
	I0819 18:01:55.959413  390826 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3fcbd90565c5acbc36a47b2db682cb22dce9b172c9bf3af21e506ebb67608039 
	I0819 18:01:55.960426  390826 kubeadm.go:310] W0819 18:01:45.377364     856 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 18:01:55.960761  390826 kubeadm.go:310] W0819 18:01:45.378188     856 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 18:01:55.960885  390826 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 18:01:55.960917  390826 cni.go:84] Creating CNI manager for ""
	I0819 18:01:55.960930  390826 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0819 18:01:55.962469  390826 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0819 18:01:55.963759  390826 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0819 18:01:55.969468  390826 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0819 18:01:55.969489  390826 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0819 18:01:55.989602  390826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0819 18:01:56.347021  390826 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 18:01:56.347178  390826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:01:56.347192  390826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-086149 minikube.k8s.io/updated_at=2024_08_19T18_01_56_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411 minikube.k8s.io/name=ha-086149 minikube.k8s.io/primary=true
	I0819 18:01:56.402969  390826 ops.go:34] apiserver oom_adj: -16
	I0819 18:01:56.560334  390826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:01:57.060392  390826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:01:57.560936  390826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:01:58.060657  390826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:01:58.560717  390826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:01:59.060436  390826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:01:59.560902  390826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:01:59.706545  390826 kubeadm.go:1113] duration metric: took 3.359439383s to wait for elevateKubeSystemPrivileges
	I0819 18:01:59.706592  390826 kubeadm.go:394] duration metric: took 14.584706319s to StartCluster
	I0819 18:01:59.706620  390826 settings.go:142] acquiring lock: {Name:mk396fcf49a1d0e69583cf37ff3c819e37118163 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:01:59.706712  390826 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 18:01:59.707624  390826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/kubeconfig: {Name:mk8e7b4e1bb7da665111d2acd83eb48882c66853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:01:59.708143  390826 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 18:01:59.708183  390826 start.go:241] waiting for startup goroutines ...
	I0819 18:01:59.708165  390826 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 18:01:59.708260  390826 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 18:01:59.708346  390826 addons.go:69] Setting storage-provisioner=true in profile "ha-086149"
	I0819 18:01:59.708374  390826 addons.go:69] Setting default-storageclass=true in profile "ha-086149"
	I0819 18:01:59.708382  390826 addons.go:234] Setting addon storage-provisioner=true in "ha-086149"
	I0819 18:01:59.708388  390826 config.go:182] Loaded profile config "ha-086149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:01:59.708411  390826 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-086149"
	I0819 18:01:59.708421  390826 host.go:66] Checking if "ha-086149" exists ...
	I0819 18:01:59.708836  390826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:01:59.708857  390826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:01:59.708877  390826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:01:59.708880  390826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:01:59.724644  390826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43111
	I0819 18:01:59.724698  390826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43475
	I0819 18:01:59.725176  390826 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:01:59.725182  390826 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:01:59.725736  390826 main.go:141] libmachine: Using API Version  1
	I0819 18:01:59.725765  390826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:01:59.726062  390826 main.go:141] libmachine: Using API Version  1
	I0819 18:01:59.726084  390826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:01:59.726116  390826 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:01:59.726335  390826 main.go:141] libmachine: (ha-086149) Calling .GetState
	I0819 18:01:59.726378  390826 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:01:59.726922  390826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:01:59.726953  390826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:01:59.728551  390826 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 18:01:59.728761  390826 kapi.go:59] client config for ha-086149: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/client.crt", KeyFile:"/home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/client.key", CAFile:"/home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f18d20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 18:01:59.729258  390826 cert_rotation.go:140] Starting client certificate rotation controller
	I0819 18:01:59.729544  390826 addons.go:234] Setting addon default-storageclass=true in "ha-086149"
	I0819 18:01:59.729585  390826 host.go:66] Checking if "ha-086149" exists ...
	I0819 18:01:59.729959  390826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:01:59.729986  390826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:01:59.743354  390826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38843
	I0819 18:01:59.743855  390826 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:01:59.744462  390826 main.go:141] libmachine: Using API Version  1
	I0819 18:01:59.744497  390826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:01:59.744852  390826 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:01:59.745068  390826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33009
	I0819 18:01:59.745095  390826 main.go:141] libmachine: (ha-086149) Calling .GetState
	I0819 18:01:59.745585  390826 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:01:59.746106  390826 main.go:141] libmachine: Using API Version  1
	I0819 18:01:59.746133  390826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:01:59.746481  390826 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:01:59.746971  390826 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:01:59.746976  390826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:01:59.747052  390826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:01:59.748802  390826 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 18:01:59.750137  390826 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 18:01:59.750160  390826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 18:01:59.750181  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:01:59.753011  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:59.753394  390826 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:01:59.753422  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:59.753577  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:01:59.753788  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:01:59.753953  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:01:59.754110  390826 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/id_rsa Username:docker}
	I0819 18:01:59.763234  390826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43467
	I0819 18:01:59.763643  390826 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:01:59.764166  390826 main.go:141] libmachine: Using API Version  1
	I0819 18:01:59.764199  390826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:01:59.764552  390826 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:01:59.764777  390826 main.go:141] libmachine: (ha-086149) Calling .GetState
	I0819 18:01:59.766331  390826 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:01:59.766600  390826 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 18:01:59.766617  390826 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 18:01:59.766641  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:01:59.769152  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:59.769554  390826 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:01:59.769577  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:01:59.769732  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:01:59.769958  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:01:59.770156  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:01:59.770314  390826 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/id_rsa Username:docker}
	I0819 18:01:59.853383  390826 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0819 18:01:59.925462  390826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 18:01:59.935249  390826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 18:02:00.590000  390826 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0819 18:02:00.837686  390826 main.go:141] libmachine: Making call to close driver server
	I0819 18:02:00.837715  390826 main.go:141] libmachine: (ha-086149) Calling .Close
	I0819 18:02:00.837781  390826 main.go:141] libmachine: Making call to close driver server
	I0819 18:02:00.837806  390826 main.go:141] libmachine: (ha-086149) Calling .Close
	I0819 18:02:00.838145  390826 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:02:00.838163  390826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:02:00.838175  390826 main.go:141] libmachine: Making call to close driver server
	I0819 18:02:00.838183  390826 main.go:141] libmachine: (ha-086149) Calling .Close
	I0819 18:02:00.838319  390826 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:02:00.838341  390826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:02:00.838351  390826 main.go:141] libmachine: Making call to close driver server
	I0819 18:02:00.838359  390826 main.go:141] libmachine: (ha-086149) Calling .Close
	I0819 18:02:00.838319  390826 main.go:141] libmachine: (ha-086149) DBG | Closing plugin on server side
	I0819 18:02:00.838530  390826 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:02:00.838553  390826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:02:00.838553  390826 main.go:141] libmachine: (ha-086149) DBG | Closing plugin on server side
	I0819 18:02:00.838703  390826 main.go:141] libmachine: (ha-086149) DBG | Closing plugin on server side
	I0819 18:02:00.838749  390826 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:02:00.838758  390826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:02:00.838848  390826 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0819 18:02:00.838866  390826 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0819 18:02:00.839004  390826 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0819 18:02:00.839015  390826 round_trippers.go:469] Request Headers:
	I0819 18:02:00.839026  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:02:00.839036  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:02:00.857763  390826 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0819 18:02:00.858372  390826 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0819 18:02:00.858388  390826 round_trippers.go:469] Request Headers:
	I0819 18:02:00.858395  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:02:00.858400  390826 round_trippers.go:473]     Content-Type: application/json
	I0819 18:02:00.858404  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:02:00.860823  390826 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:02:00.860981  390826 main.go:141] libmachine: Making call to close driver server
	I0819 18:02:00.860994  390826 main.go:141] libmachine: (ha-086149) Calling .Close
	I0819 18:02:00.861329  390826 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:02:00.861357  390826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:02:00.861358  390826 main.go:141] libmachine: (ha-086149) DBG | Closing plugin on server side
	I0819 18:02:00.863225  390826 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0819 18:02:00.864484  390826 addons.go:510] duration metric: took 1.156242861s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0819 18:02:00.864518  390826 start.go:246] waiting for cluster config update ...
	I0819 18:02:00.864533  390826 start.go:255] writing updated cluster config ...
	I0819 18:02:00.866115  390826 out.go:201] 
	I0819 18:02:00.867539  390826 config.go:182] Loaded profile config "ha-086149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:02:00.867643  390826 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/config.json ...
	I0819 18:02:00.869430  390826 out.go:177] * Starting "ha-086149-m02" control-plane node in "ha-086149" cluster
	I0819 18:02:00.870522  390826 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 18:02:00.870541  390826 cache.go:56] Caching tarball of preloaded images
	I0819 18:02:00.870622  390826 preload.go:172] Found /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 18:02:00.870633  390826 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 18:02:00.870710  390826 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/config.json ...
	I0819 18:02:00.870888  390826 start.go:360] acquireMachinesLock for ha-086149-m02: {Name:mk24ba67a747357e9ce40f1e460d2bb0bc59cc75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 18:02:00.870936  390826 start.go:364] duration metric: took 27.935µs to acquireMachinesLock for "ha-086149-m02"
	I0819 18:02:00.870957  390826 start.go:93] Provisioning new machine with config: &{Name:ha-086149 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-086149 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 18:02:00.871042  390826 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0819 18:02:00.872431  390826 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 18:02:00.872509  390826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:02:00.872533  390826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:02:00.887364  390826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45653
	I0819 18:02:00.887803  390826 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:02:00.888322  390826 main.go:141] libmachine: Using API Version  1
	I0819 18:02:00.888343  390826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:02:00.888660  390826 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:02:00.888876  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetMachineName
	I0819 18:02:00.889071  390826 main.go:141] libmachine: (ha-086149-m02) Calling .DriverName
	I0819 18:02:00.889242  390826 start.go:159] libmachine.API.Create for "ha-086149" (driver="kvm2")
	I0819 18:02:00.889272  390826 client.go:168] LocalClient.Create starting
	I0819 18:02:00.889310  390826 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem
	I0819 18:02:00.889349  390826 main.go:141] libmachine: Decoding PEM data...
	I0819 18:02:00.889369  390826 main.go:141] libmachine: Parsing certificate...
	I0819 18:02:00.889443  390826 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem
	I0819 18:02:00.889473  390826 main.go:141] libmachine: Decoding PEM data...
	I0819 18:02:00.889489  390826 main.go:141] libmachine: Parsing certificate...
	I0819 18:02:00.889516  390826 main.go:141] libmachine: Running pre-create checks...
	I0819 18:02:00.889526  390826 main.go:141] libmachine: (ha-086149-m02) Calling .PreCreateCheck
	I0819 18:02:00.889701  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetConfigRaw
	I0819 18:02:00.890132  390826 main.go:141] libmachine: Creating machine...
	I0819 18:02:00.890150  390826 main.go:141] libmachine: (ha-086149-m02) Calling .Create
	I0819 18:02:00.890301  390826 main.go:141] libmachine: (ha-086149-m02) Creating KVM machine...
	I0819 18:02:00.891513  390826 main.go:141] libmachine: (ha-086149-m02) DBG | found existing default KVM network
	I0819 18:02:00.891656  390826 main.go:141] libmachine: (ha-086149-m02) DBG | found existing private KVM network mk-ha-086149
	I0819 18:02:00.891788  390826 main.go:141] libmachine: (ha-086149-m02) Setting up store path in /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m02 ...
	I0819 18:02:00.891816  390826 main.go:141] libmachine: (ha-086149-m02) Building disk image from file:///home/jenkins/minikube-integration/19468-372744/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 18:02:00.891883  390826 main.go:141] libmachine: (ha-086149-m02) DBG | I0819 18:02:00.891770  391194 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 18:02:00.891984  390826 main.go:141] libmachine: (ha-086149-m02) Downloading /home/jenkins/minikube-integration/19468-372744/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19468-372744/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 18:02:01.163735  390826 main.go:141] libmachine: (ha-086149-m02) DBG | I0819 18:02:01.163579  391194 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m02/id_rsa...
	I0819 18:02:01.344183  390826 main.go:141] libmachine: (ha-086149-m02) DBG | I0819 18:02:01.344042  391194 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m02/ha-086149-m02.rawdisk...
	I0819 18:02:01.344216  390826 main.go:141] libmachine: (ha-086149-m02) DBG | Writing magic tar header
	I0819 18:02:01.344227  390826 main.go:141] libmachine: (ha-086149-m02) DBG | Writing SSH key tar header
	I0819 18:02:01.344235  390826 main.go:141] libmachine: (ha-086149-m02) DBG | I0819 18:02:01.344169  391194 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m02 ...
	I0819 18:02:01.344299  390826 main.go:141] libmachine: (ha-086149-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m02
	I0819 18:02:01.344332  390826 main.go:141] libmachine: (ha-086149-m02) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m02 (perms=drwx------)
	I0819 18:02:01.344354  390826 main.go:141] libmachine: (ha-086149-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744/.minikube/machines
	I0819 18:02:01.344366  390826 main.go:141] libmachine: (ha-086149-m02) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744/.minikube/machines (perms=drwxr-xr-x)
	I0819 18:02:01.344379  390826 main.go:141] libmachine: (ha-086149-m02) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744/.minikube (perms=drwxr-xr-x)
	I0819 18:02:01.344386  390826 main.go:141] libmachine: (ha-086149-m02) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744 (perms=drwxrwxr-x)
	I0819 18:02:01.344394  390826 main.go:141] libmachine: (ha-086149-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 18:02:01.344403  390826 main.go:141] libmachine: (ha-086149-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 18:02:01.344417  390826 main.go:141] libmachine: (ha-086149-m02) Creating domain...
	I0819 18:02:01.344432  390826 main.go:141] libmachine: (ha-086149-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 18:02:01.344449  390826 main.go:141] libmachine: (ha-086149-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744
	I0819 18:02:01.344470  390826 main.go:141] libmachine: (ha-086149-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 18:02:01.344483  390826 main.go:141] libmachine: (ha-086149-m02) DBG | Checking permissions on dir: /home/jenkins
	I0819 18:02:01.344523  390826 main.go:141] libmachine: (ha-086149-m02) DBG | Checking permissions on dir: /home
	I0819 18:02:01.344552  390826 main.go:141] libmachine: (ha-086149-m02) DBG | Skipping /home - not owner
	I0819 18:02:01.345652  390826 main.go:141] libmachine: (ha-086149-m02) define libvirt domain using xml: 
	I0819 18:02:01.345680  390826 main.go:141] libmachine: (ha-086149-m02) <domain type='kvm'>
	I0819 18:02:01.345692  390826 main.go:141] libmachine: (ha-086149-m02)   <name>ha-086149-m02</name>
	I0819 18:02:01.345774  390826 main.go:141] libmachine: (ha-086149-m02)   <memory unit='MiB'>2200</memory>
	I0819 18:02:01.345825  390826 main.go:141] libmachine: (ha-086149-m02)   <vcpu>2</vcpu>
	I0819 18:02:01.345841  390826 main.go:141] libmachine: (ha-086149-m02)   <features>
	I0819 18:02:01.345850  390826 main.go:141] libmachine: (ha-086149-m02)     <acpi/>
	I0819 18:02:01.345860  390826 main.go:141] libmachine: (ha-086149-m02)     <apic/>
	I0819 18:02:01.345872  390826 main.go:141] libmachine: (ha-086149-m02)     <pae/>
	I0819 18:02:01.345916  390826 main.go:141] libmachine: (ha-086149-m02)     
	I0819 18:02:01.345927  390826 main.go:141] libmachine: (ha-086149-m02)   </features>
	I0819 18:02:01.345942  390826 main.go:141] libmachine: (ha-086149-m02)   <cpu mode='host-passthrough'>
	I0819 18:02:01.345953  390826 main.go:141] libmachine: (ha-086149-m02)   
	I0819 18:02:01.345964  390826 main.go:141] libmachine: (ha-086149-m02)   </cpu>
	I0819 18:02:01.345979  390826 main.go:141] libmachine: (ha-086149-m02)   <os>
	I0819 18:02:01.345989  390826 main.go:141] libmachine: (ha-086149-m02)     <type>hvm</type>
	I0819 18:02:01.346000  390826 main.go:141] libmachine: (ha-086149-m02)     <boot dev='cdrom'/>
	I0819 18:02:01.346008  390826 main.go:141] libmachine: (ha-086149-m02)     <boot dev='hd'/>
	I0819 18:02:01.346038  390826 main.go:141] libmachine: (ha-086149-m02)     <bootmenu enable='no'/>
	I0819 18:02:01.346059  390826 main.go:141] libmachine: (ha-086149-m02)   </os>
	I0819 18:02:01.346066  390826 main.go:141] libmachine: (ha-086149-m02)   <devices>
	I0819 18:02:01.346074  390826 main.go:141] libmachine: (ha-086149-m02)     <disk type='file' device='cdrom'>
	I0819 18:02:01.346083  390826 main.go:141] libmachine: (ha-086149-m02)       <source file='/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m02/boot2docker.iso'/>
	I0819 18:02:01.346091  390826 main.go:141] libmachine: (ha-086149-m02)       <target dev='hdc' bus='scsi'/>
	I0819 18:02:01.346097  390826 main.go:141] libmachine: (ha-086149-m02)       <readonly/>
	I0819 18:02:01.346105  390826 main.go:141] libmachine: (ha-086149-m02)     </disk>
	I0819 18:02:01.346112  390826 main.go:141] libmachine: (ha-086149-m02)     <disk type='file' device='disk'>
	I0819 18:02:01.346120  390826 main.go:141] libmachine: (ha-086149-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 18:02:01.346130  390826 main.go:141] libmachine: (ha-086149-m02)       <source file='/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m02/ha-086149-m02.rawdisk'/>
	I0819 18:02:01.346137  390826 main.go:141] libmachine: (ha-086149-m02)       <target dev='hda' bus='virtio'/>
	I0819 18:02:01.346143  390826 main.go:141] libmachine: (ha-086149-m02)     </disk>
	I0819 18:02:01.346152  390826 main.go:141] libmachine: (ha-086149-m02)     <interface type='network'>
	I0819 18:02:01.346160  390826 main.go:141] libmachine: (ha-086149-m02)       <source network='mk-ha-086149'/>
	I0819 18:02:01.346174  390826 main.go:141] libmachine: (ha-086149-m02)       <model type='virtio'/>
	I0819 18:02:01.346181  390826 main.go:141] libmachine: (ha-086149-m02)     </interface>
	I0819 18:02:01.346186  390826 main.go:141] libmachine: (ha-086149-m02)     <interface type='network'>
	I0819 18:02:01.346192  390826 main.go:141] libmachine: (ha-086149-m02)       <source network='default'/>
	I0819 18:02:01.346199  390826 main.go:141] libmachine: (ha-086149-m02)       <model type='virtio'/>
	I0819 18:02:01.346205  390826 main.go:141] libmachine: (ha-086149-m02)     </interface>
	I0819 18:02:01.346212  390826 main.go:141] libmachine: (ha-086149-m02)     <serial type='pty'>
	I0819 18:02:01.346218  390826 main.go:141] libmachine: (ha-086149-m02)       <target port='0'/>
	I0819 18:02:01.346225  390826 main.go:141] libmachine: (ha-086149-m02)     </serial>
	I0819 18:02:01.346230  390826 main.go:141] libmachine: (ha-086149-m02)     <console type='pty'>
	I0819 18:02:01.346237  390826 main.go:141] libmachine: (ha-086149-m02)       <target type='serial' port='0'/>
	I0819 18:02:01.346242  390826 main.go:141] libmachine: (ha-086149-m02)     </console>
	I0819 18:02:01.346249  390826 main.go:141] libmachine: (ha-086149-m02)     <rng model='virtio'>
	I0819 18:02:01.346282  390826 main.go:141] libmachine: (ha-086149-m02)       <backend model='random'>/dev/random</backend>
	I0819 18:02:01.346308  390826 main.go:141] libmachine: (ha-086149-m02)     </rng>
	I0819 18:02:01.346321  390826 main.go:141] libmachine: (ha-086149-m02)     
	I0819 18:02:01.346332  390826 main.go:141] libmachine: (ha-086149-m02)     
	I0819 18:02:01.346345  390826 main.go:141] libmachine: (ha-086149-m02)   </devices>
	I0819 18:02:01.346356  390826 main.go:141] libmachine: (ha-086149-m02) </domain>
	I0819 18:02:01.346369  390826 main.go:141] libmachine: (ha-086149-m02) 
	I0819 18:02:01.353449  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:25:12:fc in network default
	I0819 18:02:01.354063  390826 main.go:141] libmachine: (ha-086149-m02) Ensuring networks are active...
	I0819 18:02:01.354090  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:01.354865  390826 main.go:141] libmachine: (ha-086149-m02) Ensuring network default is active
	I0819 18:02:01.355292  390826 main.go:141] libmachine: (ha-086149-m02) Ensuring network mk-ha-086149 is active
	I0819 18:02:01.355765  390826 main.go:141] libmachine: (ha-086149-m02) Getting domain xml...
	I0819 18:02:01.356643  390826 main.go:141] libmachine: (ha-086149-m02) Creating domain...
	I0819 18:02:02.573137  390826 main.go:141] libmachine: (ha-086149-m02) Waiting to get IP...
	I0819 18:02:02.573999  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:02.574397  390826 main.go:141] libmachine: (ha-086149-m02) DBG | unable to find current IP address of domain ha-086149-m02 in network mk-ha-086149
	I0819 18:02:02.574452  390826 main.go:141] libmachine: (ha-086149-m02) DBG | I0819 18:02:02.574377  391194 retry.go:31] will retry after 213.692862ms: waiting for machine to come up
	I0819 18:02:02.789798  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:02.790223  390826 main.go:141] libmachine: (ha-086149-m02) DBG | unable to find current IP address of domain ha-086149-m02 in network mk-ha-086149
	I0819 18:02:02.790259  390826 main.go:141] libmachine: (ha-086149-m02) DBG | I0819 18:02:02.790168  391194 retry.go:31] will retry after 315.769086ms: waiting for machine to come up
	I0819 18:02:03.108010  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:03.108442  390826 main.go:141] libmachine: (ha-086149-m02) DBG | unable to find current IP address of domain ha-086149-m02 in network mk-ha-086149
	I0819 18:02:03.108477  390826 main.go:141] libmachine: (ha-086149-m02) DBG | I0819 18:02:03.108385  391194 retry.go:31] will retry after 301.828125ms: waiting for machine to come up
	I0819 18:02:03.412018  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:03.412538  390826 main.go:141] libmachine: (ha-086149-m02) DBG | unable to find current IP address of domain ha-086149-m02 in network mk-ha-086149
	I0819 18:02:03.412566  390826 main.go:141] libmachine: (ha-086149-m02) DBG | I0819 18:02:03.412497  391194 retry.go:31] will retry after 566.070222ms: waiting for machine to come up
	I0819 18:02:03.980372  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:03.980809  390826 main.go:141] libmachine: (ha-086149-m02) DBG | unable to find current IP address of domain ha-086149-m02 in network mk-ha-086149
	I0819 18:02:03.980839  390826 main.go:141] libmachine: (ha-086149-m02) DBG | I0819 18:02:03.980760  391194 retry.go:31] will retry after 725.498843ms: waiting for machine to come up
	I0819 18:02:04.707651  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:04.708163  390826 main.go:141] libmachine: (ha-086149-m02) DBG | unable to find current IP address of domain ha-086149-m02 in network mk-ha-086149
	I0819 18:02:04.708189  390826 main.go:141] libmachine: (ha-086149-m02) DBG | I0819 18:02:04.708114  391194 retry.go:31] will retry after 888.838276ms: waiting for machine to come up
	I0819 18:02:05.598151  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:05.598534  390826 main.go:141] libmachine: (ha-086149-m02) DBG | unable to find current IP address of domain ha-086149-m02 in network mk-ha-086149
	I0819 18:02:05.598561  390826 main.go:141] libmachine: (ha-086149-m02) DBG | I0819 18:02:05.598505  391194 retry.go:31] will retry after 725.496011ms: waiting for machine to come up
	I0819 18:02:06.326059  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:06.326591  390826 main.go:141] libmachine: (ha-086149-m02) DBG | unable to find current IP address of domain ha-086149-m02 in network mk-ha-086149
	I0819 18:02:06.326619  390826 main.go:141] libmachine: (ha-086149-m02) DBG | I0819 18:02:06.326549  391194 retry.go:31] will retry after 1.213657221s: waiting for machine to come up
	I0819 18:02:07.541312  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:07.541730  390826 main.go:141] libmachine: (ha-086149-m02) DBG | unable to find current IP address of domain ha-086149-m02 in network mk-ha-086149
	I0819 18:02:07.541762  390826 main.go:141] libmachine: (ha-086149-m02) DBG | I0819 18:02:07.541670  391194 retry.go:31] will retry after 1.144037477s: waiting for machine to come up
	I0819 18:02:08.687009  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:08.687346  390826 main.go:141] libmachine: (ha-086149-m02) DBG | unable to find current IP address of domain ha-086149-m02 in network mk-ha-086149
	I0819 18:02:08.687378  390826 main.go:141] libmachine: (ha-086149-m02) DBG | I0819 18:02:08.687317  391194 retry.go:31] will retry after 1.786431516s: waiting for machine to come up
	I0819 18:02:10.475126  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:10.475572  390826 main.go:141] libmachine: (ha-086149-m02) DBG | unable to find current IP address of domain ha-086149-m02 in network mk-ha-086149
	I0819 18:02:10.475604  390826 main.go:141] libmachine: (ha-086149-m02) DBG | I0819 18:02:10.475516  391194 retry.go:31] will retry after 2.7984425s: waiting for machine to come up
	I0819 18:02:13.276769  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:13.277252  390826 main.go:141] libmachine: (ha-086149-m02) DBG | unable to find current IP address of domain ha-086149-m02 in network mk-ha-086149
	I0819 18:02:13.277281  390826 main.go:141] libmachine: (ha-086149-m02) DBG | I0819 18:02:13.277177  391194 retry.go:31] will retry after 3.557169037s: waiting for machine to come up
	I0819 18:02:16.836169  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:16.836715  390826 main.go:141] libmachine: (ha-086149-m02) DBG | unable to find current IP address of domain ha-086149-m02 in network mk-ha-086149
	I0819 18:02:16.836739  390826 main.go:141] libmachine: (ha-086149-m02) DBG | I0819 18:02:16.836637  391194 retry.go:31] will retry after 3.947371274s: waiting for machine to come up
	I0819 18:02:20.788796  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:20.789268  390826 main.go:141] libmachine: (ha-086149-m02) DBG | unable to find current IP address of domain ha-086149-m02 in network mk-ha-086149
	I0819 18:02:20.789290  390826 main.go:141] libmachine: (ha-086149-m02) DBG | I0819 18:02:20.789224  391194 retry.go:31] will retry after 5.582773093s: waiting for machine to come up
	I0819 18:02:26.374103  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:26.374654  390826 main.go:141] libmachine: (ha-086149-m02) Found IP for machine: 192.168.39.167
	I0819 18:02:26.374678  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has current primary IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:26.374684  390826 main.go:141] libmachine: (ha-086149-m02) Reserving static IP address...
	I0819 18:02:26.375127  390826 main.go:141] libmachine: (ha-086149-m02) DBG | unable to find host DHCP lease matching {name: "ha-086149-m02", mac: "52:54:00:b9:44:0e", ip: "192.168.39.167"} in network mk-ha-086149
	I0819 18:02:26.451534  390826 main.go:141] libmachine: (ha-086149-m02) DBG | Getting to WaitForSSH function...
	I0819 18:02:26.451567  390826 main.go:141] libmachine: (ha-086149-m02) Reserved static IP address: 192.168.39.167
	I0819 18:02:26.451582  390826 main.go:141] libmachine: (ha-086149-m02) Waiting for SSH to be available...
	I0819 18:02:26.454800  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:26.455320  390826 main.go:141] libmachine: (ha-086149-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149
	I0819 18:02:26.455347  390826 main.go:141] libmachine: (ha-086149-m02) DBG | unable to find defined IP address of network mk-ha-086149 interface with MAC address 52:54:00:b9:44:0e
	I0819 18:02:26.455518  390826 main.go:141] libmachine: (ha-086149-m02) DBG | Using SSH client type: external
	I0819 18:02:26.455550  390826 main.go:141] libmachine: (ha-086149-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m02/id_rsa (-rw-------)
	I0819 18:02:26.455578  390826 main.go:141] libmachine: (ha-086149-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 18:02:26.455595  390826 main.go:141] libmachine: (ha-086149-m02) DBG | About to run SSH command:
	I0819 18:02:26.455612  390826 main.go:141] libmachine: (ha-086149-m02) DBG | exit 0
	I0819 18:02:26.459237  390826 main.go:141] libmachine: (ha-086149-m02) DBG | SSH cmd err, output: exit status 255: 
	I0819 18:02:26.459267  390826 main.go:141] libmachine: (ha-086149-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0819 18:02:26.459278  390826 main.go:141] libmachine: (ha-086149-m02) DBG | command : exit 0
	I0819 18:02:26.459290  390826 main.go:141] libmachine: (ha-086149-m02) DBG | err     : exit status 255
	I0819 18:02:26.459302  390826 main.go:141] libmachine: (ha-086149-m02) DBG | output  : 
	I0819 18:02:29.460056  390826 main.go:141] libmachine: (ha-086149-m02) DBG | Getting to WaitForSSH function...
	I0819 18:02:29.463263  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:29.463618  390826 main.go:141] libmachine: (ha-086149-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:15 +0000 UTC Type:0 Mac:52:54:00:b9:44:0e Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-086149-m02 Clientid:01:52:54:00:b9:44:0e}
	I0819 18:02:29.463647  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:29.463807  390826 main.go:141] libmachine: (ha-086149-m02) DBG | Using SSH client type: external
	I0819 18:02:29.463838  390826 main.go:141] libmachine: (ha-086149-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m02/id_rsa (-rw-------)
	I0819 18:02:29.463870  390826 main.go:141] libmachine: (ha-086149-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.167 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 18:02:29.463884  390826 main.go:141] libmachine: (ha-086149-m02) DBG | About to run SSH command:
	I0819 18:02:29.463919  390826 main.go:141] libmachine: (ha-086149-m02) DBG | exit 0
	I0819 18:02:29.591884  390826 main.go:141] libmachine: (ha-086149-m02) DBG | SSH cmd err, output: <nil>: 
	I0819 18:02:29.592289  390826 main.go:141] libmachine: (ha-086149-m02) KVM machine creation complete!
	I0819 18:02:29.592585  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetConfigRaw
	I0819 18:02:29.593231  390826 main.go:141] libmachine: (ha-086149-m02) Calling .DriverName
	I0819 18:02:29.593450  390826 main.go:141] libmachine: (ha-086149-m02) Calling .DriverName
	I0819 18:02:29.593703  390826 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 18:02:29.593722  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetState
	I0819 18:02:29.594958  390826 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 18:02:29.594972  390826 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 18:02:29.594977  390826 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 18:02:29.594985  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHHostname
	I0819 18:02:29.597081  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:29.597433  390826 main.go:141] libmachine: (ha-086149-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:15 +0000 UTC Type:0 Mac:52:54:00:b9:44:0e Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-086149-m02 Clientid:01:52:54:00:b9:44:0e}
	I0819 18:02:29.597461  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:29.597582  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHPort
	I0819 18:02:29.597780  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHKeyPath
	I0819 18:02:29.597928  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHKeyPath
	I0819 18:02:29.598082  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHUsername
	I0819 18:02:29.598242  390826 main.go:141] libmachine: Using SSH client type: native
	I0819 18:02:29.598481  390826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.167 22 <nil> <nil>}
	I0819 18:02:29.598495  390826 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 18:02:29.711103  390826 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 18:02:29.711127  390826 main.go:141] libmachine: Detecting the provisioner...
	I0819 18:02:29.711150  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHHostname
	I0819 18:02:29.714092  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:29.714482  390826 main.go:141] libmachine: (ha-086149-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:15 +0000 UTC Type:0 Mac:52:54:00:b9:44:0e Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-086149-m02 Clientid:01:52:54:00:b9:44:0e}
	I0819 18:02:29.714514  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:29.714667  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHPort
	I0819 18:02:29.714895  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHKeyPath
	I0819 18:02:29.715068  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHKeyPath
	I0819 18:02:29.715177  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHUsername
	I0819 18:02:29.715311  390826 main.go:141] libmachine: Using SSH client type: native
	I0819 18:02:29.715508  390826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.167 22 <nil> <nil>}
	I0819 18:02:29.715523  390826 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 18:02:29.832407  390826 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 18:02:29.832502  390826 main.go:141] libmachine: found compatible host: buildroot
	I0819 18:02:29.832517  390826 main.go:141] libmachine: Provisioning with buildroot...
	I0819 18:02:29.832529  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetMachineName
	I0819 18:02:29.832801  390826 buildroot.go:166] provisioning hostname "ha-086149-m02"
	I0819 18:02:29.832836  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetMachineName
	I0819 18:02:29.833053  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHHostname
	I0819 18:02:29.835580  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:29.836030  390826 main.go:141] libmachine: (ha-086149-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:15 +0000 UTC Type:0 Mac:52:54:00:b9:44:0e Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-086149-m02 Clientid:01:52:54:00:b9:44:0e}
	I0819 18:02:29.836077  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:29.836240  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHPort
	I0819 18:02:29.836432  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHKeyPath
	I0819 18:02:29.836590  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHKeyPath
	I0819 18:02:29.836769  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHUsername
	I0819 18:02:29.836968  390826 main.go:141] libmachine: Using SSH client type: native
	I0819 18:02:29.837196  390826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.167 22 <nil> <nil>}
	I0819 18:02:29.837218  390826 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-086149-m02 && echo "ha-086149-m02" | sudo tee /etc/hostname
	I0819 18:02:29.961904  390826 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-086149-m02
	
	I0819 18:02:29.961935  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHHostname
	I0819 18:02:29.964835  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:29.965249  390826 main.go:141] libmachine: (ha-086149-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:15 +0000 UTC Type:0 Mac:52:54:00:b9:44:0e Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-086149-m02 Clientid:01:52:54:00:b9:44:0e}
	I0819 18:02:29.965273  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:29.965458  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHPort
	I0819 18:02:29.965670  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHKeyPath
	I0819 18:02:29.965837  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHKeyPath
	I0819 18:02:29.965957  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHUsername
	I0819 18:02:29.966106  390826 main.go:141] libmachine: Using SSH client type: native
	I0819 18:02:29.966269  390826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.167 22 <nil> <nil>}
	I0819 18:02:29.966290  390826 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-086149-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-086149-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-086149-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 18:02:30.089048  390826 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 18:02:30.089086  390826 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19468-372744/.minikube CaCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19468-372744/.minikube}
	I0819 18:02:30.089109  390826 buildroot.go:174] setting up certificates
	I0819 18:02:30.089119  390826 provision.go:84] configureAuth start
	I0819 18:02:30.089129  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetMachineName
	I0819 18:02:30.089461  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetIP
	I0819 18:02:30.092265  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:30.092669  390826 main.go:141] libmachine: (ha-086149-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:15 +0000 UTC Type:0 Mac:52:54:00:b9:44:0e Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-086149-m02 Clientid:01:52:54:00:b9:44:0e}
	I0819 18:02:30.092701  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:30.092884  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHHostname
	I0819 18:02:30.095727  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:30.096099  390826 main.go:141] libmachine: (ha-086149-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:15 +0000 UTC Type:0 Mac:52:54:00:b9:44:0e Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-086149-m02 Clientid:01:52:54:00:b9:44:0e}
	I0819 18:02:30.096125  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:30.096378  390826 provision.go:143] copyHostCerts
	I0819 18:02:30.096408  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 18:02:30.096439  390826 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem, removing ...
	I0819 18:02:30.096448  390826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 18:02:30.096554  390826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem (1082 bytes)
	I0819 18:02:30.096631  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 18:02:30.096648  390826 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem, removing ...
	I0819 18:02:30.096655  390826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 18:02:30.096681  390826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem (1123 bytes)
	I0819 18:02:30.096726  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 18:02:30.096740  390826 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem, removing ...
	I0819 18:02:30.096747  390826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 18:02:30.096767  390826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem (1675 bytes)
	I0819 18:02:30.096813  390826 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem org=jenkins.ha-086149-m02 san=[127.0.0.1 192.168.39.167 ha-086149-m02 localhost minikube]
	I0819 18:02:30.185382  390826 provision.go:177] copyRemoteCerts
	I0819 18:02:30.185447  390826 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 18:02:30.185477  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHHostname
	I0819 18:02:30.188112  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:30.188524  390826 main.go:141] libmachine: (ha-086149-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:15 +0000 UTC Type:0 Mac:52:54:00:b9:44:0e Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-086149-m02 Clientid:01:52:54:00:b9:44:0e}
	I0819 18:02:30.188561  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:30.188806  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHPort
	I0819 18:02:30.189073  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHKeyPath
	I0819 18:02:30.189248  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHUsername
	I0819 18:02:30.189403  390826 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m02/id_rsa Username:docker}
	I0819 18:02:30.278357  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 18:02:30.278448  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 18:02:30.303041  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 18:02:30.303128  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 18:02:30.328073  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 18:02:30.328160  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 18:02:30.352418  390826 provision.go:87] duration metric: took 263.283773ms to configureAuth
	I0819 18:02:30.352453  390826 buildroot.go:189] setting minikube options for container-runtime
	I0819 18:02:30.352659  390826 config.go:182] Loaded profile config "ha-086149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:02:30.352754  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHHostname
	I0819 18:02:30.355415  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:30.355751  390826 main.go:141] libmachine: (ha-086149-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:15 +0000 UTC Type:0 Mac:52:54:00:b9:44:0e Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-086149-m02 Clientid:01:52:54:00:b9:44:0e}
	I0819 18:02:30.355783  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:30.355978  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHPort
	I0819 18:02:30.356180  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHKeyPath
	I0819 18:02:30.356334  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHKeyPath
	I0819 18:02:30.356473  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHUsername
	I0819 18:02:30.356613  390826 main.go:141] libmachine: Using SSH client type: native
	I0819 18:02:30.356785  390826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.167 22 <nil> <nil>}
	I0819 18:02:30.356801  390826 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 18:02:30.647226  390826 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 18:02:30.647261  390826 main.go:141] libmachine: Checking connection to Docker...
	I0819 18:02:30.647279  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetURL
	I0819 18:02:30.648827  390826 main.go:141] libmachine: (ha-086149-m02) DBG | Using libvirt version 6000000
	I0819 18:02:30.650998  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:30.651345  390826 main.go:141] libmachine: (ha-086149-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:15 +0000 UTC Type:0 Mac:52:54:00:b9:44:0e Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-086149-m02 Clientid:01:52:54:00:b9:44:0e}
	I0819 18:02:30.651523  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:30.651593  390826 main.go:141] libmachine: Docker is up and running!
	I0819 18:02:30.651608  390826 main.go:141] libmachine: Reticulating splines...
	I0819 18:02:30.651617  390826 client.go:171] duration metric: took 29.762332975s to LocalClient.Create
	I0819 18:02:30.651641  390826 start.go:167] duration metric: took 29.762401242s to libmachine.API.Create "ha-086149"
	I0819 18:02:30.651650  390826 start.go:293] postStartSetup for "ha-086149-m02" (driver="kvm2")
	I0819 18:02:30.651660  390826 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 18:02:30.651714  390826 main.go:141] libmachine: (ha-086149-m02) Calling .DriverName
	I0819 18:02:30.651984  390826 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 18:02:30.652147  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHHostname
	I0819 18:02:30.654564  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:30.654965  390826 main.go:141] libmachine: (ha-086149-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:15 +0000 UTC Type:0 Mac:52:54:00:b9:44:0e Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-086149-m02 Clientid:01:52:54:00:b9:44:0e}
	I0819 18:02:30.654987  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:30.655156  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHPort
	I0819 18:02:30.655369  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHKeyPath
	I0819 18:02:30.655538  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHUsername
	I0819 18:02:30.655725  390826 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m02/id_rsa Username:docker}
	I0819 18:02:30.742439  390826 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 18:02:30.747128  390826 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 18:02:30.747159  390826 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/addons for local assets ...
	I0819 18:02:30.747239  390826 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/files for local assets ...
	I0819 18:02:30.747311  390826 filesync.go:149] local asset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> 3800092.pem in /etc/ssl/certs
	I0819 18:02:30.747323  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> /etc/ssl/certs/3800092.pem
	I0819 18:02:30.747406  390826 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 18:02:30.757484  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 18:02:30.785461  390826 start.go:296] duration metric: took 133.794234ms for postStartSetup
	I0819 18:02:30.785531  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetConfigRaw
	I0819 18:02:30.786174  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetIP
	I0819 18:02:30.789492  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:30.789906  390826 main.go:141] libmachine: (ha-086149-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:15 +0000 UTC Type:0 Mac:52:54:00:b9:44:0e Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-086149-m02 Clientid:01:52:54:00:b9:44:0e}
	I0819 18:02:30.789943  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:30.790207  390826 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/config.json ...
	I0819 18:02:30.790487  390826 start.go:128] duration metric: took 29.919427382s to createHost
	I0819 18:02:30.790520  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHHostname
	I0819 18:02:30.792954  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:30.793297  390826 main.go:141] libmachine: (ha-086149-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:15 +0000 UTC Type:0 Mac:52:54:00:b9:44:0e Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-086149-m02 Clientid:01:52:54:00:b9:44:0e}
	I0819 18:02:30.793329  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:30.793558  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHPort
	I0819 18:02:30.793778  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHKeyPath
	I0819 18:02:30.793952  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHKeyPath
	I0819 18:02:30.794104  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHUsername
	I0819 18:02:30.794257  390826 main.go:141] libmachine: Using SSH client type: native
	I0819 18:02:30.794425  390826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.167 22 <nil> <nil>}
	I0819 18:02:30.794439  390826 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 18:02:30.908358  390826 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724090550.891768613
	
	I0819 18:02:30.908386  390826 fix.go:216] guest clock: 1724090550.891768613
	I0819 18:02:30.908394  390826 fix.go:229] Guest: 2024-08-19 18:02:30.891768613 +0000 UTC Remote: 2024-08-19 18:02:30.790503904 +0000 UTC m=+76.584400326 (delta=101.264709ms)
	I0819 18:02:30.908411  390826 fix.go:200] guest clock delta is within tolerance: 101.264709ms
	I0819 18:02:30.908416  390826 start.go:83] releasing machines lock for "ha-086149-m02", held for 30.03747204s
	I0819 18:02:30.908436  390826 main.go:141] libmachine: (ha-086149-m02) Calling .DriverName
	I0819 18:02:30.908746  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetIP
	I0819 18:02:30.911790  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:30.912299  390826 main.go:141] libmachine: (ha-086149-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:15 +0000 UTC Type:0 Mac:52:54:00:b9:44:0e Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-086149-m02 Clientid:01:52:54:00:b9:44:0e}
	I0819 18:02:30.912324  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:30.914702  390826 out.go:177] * Found network options:
	I0819 18:02:30.916264  390826 out.go:177]   - NO_PROXY=192.168.39.249
	W0819 18:02:30.917550  390826 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 18:02:30.917584  390826 main.go:141] libmachine: (ha-086149-m02) Calling .DriverName
	I0819 18:02:30.918210  390826 main.go:141] libmachine: (ha-086149-m02) Calling .DriverName
	I0819 18:02:30.918395  390826 main.go:141] libmachine: (ha-086149-m02) Calling .DriverName
	I0819 18:02:30.918487  390826 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 18:02:30.918533  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHHostname
	W0819 18:02:30.918573  390826 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 18:02:30.918658  390826 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 18:02:30.918684  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHHostname
	I0819 18:02:30.921189  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:30.921523  390826 main.go:141] libmachine: (ha-086149-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:15 +0000 UTC Type:0 Mac:52:54:00:b9:44:0e Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-086149-m02 Clientid:01:52:54:00:b9:44:0e}
	I0819 18:02:30.921551  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:30.921575  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:30.921721  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHPort
	I0819 18:02:30.921900  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHKeyPath
	I0819 18:02:30.921953  390826 main.go:141] libmachine: (ha-086149-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:15 +0000 UTC Type:0 Mac:52:54:00:b9:44:0e Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-086149-m02 Clientid:01:52:54:00:b9:44:0e}
	I0819 18:02:30.921976  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:30.922073  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHUsername
	I0819 18:02:30.922143  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHPort
	I0819 18:02:30.922222  390826 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m02/id_rsa Username:docker}
	I0819 18:02:30.922313  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHKeyPath
	I0819 18:02:30.922446  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHUsername
	I0819 18:02:30.922578  390826 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m02/id_rsa Username:docker}
	I0819 18:02:31.162275  390826 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 18:02:31.168463  390826 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 18:02:31.168543  390826 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 18:02:31.185415  390826 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 18:02:31.185453  390826 start.go:495] detecting cgroup driver to use...
	I0819 18:02:31.185531  390826 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 18:02:31.203803  390826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 18:02:31.218769  390826 docker.go:217] disabling cri-docker service (if available) ...
	I0819 18:02:31.218849  390826 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 18:02:31.233091  390826 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 18:02:31.247534  390826 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 18:02:31.365020  390826 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 18:02:31.507633  390826 docker.go:233] disabling docker service ...
	I0819 18:02:31.507752  390826 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 18:02:31.522469  390826 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 18:02:31.535904  390826 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 18:02:31.684033  390826 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 18:02:31.816794  390826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 18:02:31.830888  390826 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 18:02:31.850134  390826 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 18:02:31.850203  390826 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:02:31.860550  390826 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 18:02:31.860618  390826 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:02:31.870742  390826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:02:31.880834  390826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:02:31.891213  390826 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 18:02:31.901856  390826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:02:31.912615  390826 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:02:31.931114  390826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:02:31.942288  390826 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 18:02:31.951905  390826 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 18:02:31.951992  390826 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 18:02:31.965733  390826 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 18:02:31.976631  390826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:02:32.105549  390826 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 18:02:32.245821  390826 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 18:02:32.245895  390826 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 18:02:32.250785  390826 start.go:563] Will wait 60s for crictl version
	I0819 18:02:32.250836  390826 ssh_runner.go:195] Run: which crictl
	I0819 18:02:32.254658  390826 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 18:02:32.293963  390826 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 18:02:32.294078  390826 ssh_runner.go:195] Run: crio --version
	I0819 18:02:32.320948  390826 ssh_runner.go:195] Run: crio --version
	I0819 18:02:32.352515  390826 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 18:02:32.353910  390826 out.go:177]   - env NO_PROXY=192.168.39.249
	I0819 18:02:32.355059  390826 main.go:141] libmachine: (ha-086149-m02) Calling .GetIP
	I0819 18:02:32.357803  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:32.358225  390826 main.go:141] libmachine: (ha-086149-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:15 +0000 UTC Type:0 Mac:52:54:00:b9:44:0e Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-086149-m02 Clientid:01:52:54:00:b9:44:0e}
	I0819 18:02:32.358257  390826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:02:32.358399  390826 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 18:02:32.362630  390826 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 18:02:32.375092  390826 mustload.go:65] Loading cluster: ha-086149
	I0819 18:02:32.375333  390826 config.go:182] Loaded profile config "ha-086149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:02:32.375732  390826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:02:32.375770  390826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:02:32.392292  390826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41425
	I0819 18:02:32.392699  390826 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:02:32.393169  390826 main.go:141] libmachine: Using API Version  1
	I0819 18:02:32.393193  390826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:02:32.393492  390826 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:02:32.393683  390826 main.go:141] libmachine: (ha-086149) Calling .GetState
	I0819 18:02:32.395300  390826 host.go:66] Checking if "ha-086149" exists ...
	I0819 18:02:32.395638  390826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:02:32.395664  390826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:02:32.410687  390826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42331
	I0819 18:02:32.411091  390826 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:02:32.411571  390826 main.go:141] libmachine: Using API Version  1
	I0819 18:02:32.411592  390826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:02:32.411927  390826 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:02:32.412133  390826 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:02:32.412299  390826 certs.go:68] Setting up /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149 for IP: 192.168.39.167
	I0819 18:02:32.412312  390826 certs.go:194] generating shared ca certs ...
	I0819 18:02:32.412332  390826 certs.go:226] acquiring lock for ca certs: {Name:mk639e03f593e0bccac045f6e9f5ba3b96cc81e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:02:32.412477  390826 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key
	I0819 18:02:32.412535  390826 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key
	I0819 18:02:32.412548  390826 certs.go:256] generating profile certs ...
	I0819 18:02:32.412635  390826 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/client.key
	I0819 18:02:32.412669  390826 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key.29108782
	I0819 18:02:32.412693  390826 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt.29108782 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.249 192.168.39.167 192.168.39.254]
	I0819 18:02:32.613410  390826 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt.29108782 ...
	I0819 18:02:32.613445  390826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt.29108782: {Name:mk786a0be0a01b23577616474723d3dd1af61718 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:02:32.613633  390826 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key.29108782 ...
	I0819 18:02:32.613652  390826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key.29108782: {Name:mk35ec7528c86be4e226ad885f6517ee223a81da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:02:32.613749  390826 certs.go:381] copying /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt.29108782 -> /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt
	I0819 18:02:32.613904  390826 certs.go:385] copying /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key.29108782 -> /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key
	I0819 18:02:32.614083  390826 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/proxy-client.key
	I0819 18:02:32.614103  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 18:02:32.614123  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 18:02:32.614146  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 18:02:32.614167  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 18:02:32.614194  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 18:02:32.614216  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 18:02:32.614233  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 18:02:32.614254  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 18:02:32.614320  390826 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem (1338 bytes)
	W0819 18:02:32.614361  390826 certs.go:480] ignoring /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009_empty.pem, impossibly tiny 0 bytes
	I0819 18:02:32.614379  390826 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 18:02:32.614416  390826 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem (1082 bytes)
	I0819 18:02:32.614449  390826 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem (1123 bytes)
	I0819 18:02:32.614480  390826 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem (1675 bytes)
	I0819 18:02:32.614535  390826 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 18:02:32.614573  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem -> /usr/share/ca-certificates/380009.pem
	I0819 18:02:32.614595  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> /usr/share/ca-certificates/3800092.pem
	I0819 18:02:32.614614  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:02:32.614663  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:02:32.617605  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:02:32.618037  390826 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:02:32.618064  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:02:32.618291  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:02:32.618489  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:02:32.618662  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:02:32.618811  390826 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/id_rsa Username:docker}
	I0819 18:02:32.688132  390826 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0819 18:02:32.693432  390826 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0819 18:02:32.705643  390826 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0819 18:02:32.710393  390826 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0819 18:02:32.726929  390826 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0819 18:02:32.731805  390826 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0819 18:02:32.743991  390826 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0819 18:02:32.748405  390826 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0819 18:02:32.760696  390826 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0819 18:02:32.764761  390826 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0819 18:02:32.775576  390826 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0819 18:02:32.780335  390826 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0819 18:02:32.798982  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 18:02:32.824573  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 18:02:32.848456  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 18:02:32.872289  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 18:02:32.895762  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0819 18:02:32.919267  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 18:02:32.943247  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 18:02:32.967491  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 18:02:32.991733  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem --> /usr/share/ca-certificates/380009.pem (1338 bytes)
	I0819 18:02:33.016178  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /usr/share/ca-certificates/3800092.pem (1708 bytes)
	I0819 18:02:33.041029  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 18:02:33.067154  390826 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0819 18:02:33.085779  390826 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0819 18:02:33.103563  390826 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0819 18:02:33.120415  390826 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0819 18:02:33.137279  390826 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0819 18:02:33.154210  390826 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0819 18:02:33.171254  390826 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0819 18:02:33.188327  390826 ssh_runner.go:195] Run: openssl version
	I0819 18:02:33.194174  390826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3800092.pem && ln -fs /usr/share/ca-certificates/3800092.pem /etc/ssl/certs/3800092.pem"
	I0819 18:02:33.204906  390826 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3800092.pem
	I0819 18:02:33.209552  390826 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:56 /usr/share/ca-certificates/3800092.pem
	I0819 18:02:33.209612  390826 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3800092.pem
	I0819 18:02:33.215435  390826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3800092.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 18:02:33.225689  390826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 18:02:33.236693  390826 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:02:33.241350  390826 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:02:33.241405  390826 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:02:33.247220  390826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 18:02:33.258368  390826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/380009.pem && ln -fs /usr/share/ca-certificates/380009.pem /etc/ssl/certs/380009.pem"
	I0819 18:02:33.268726  390826 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/380009.pem
	I0819 18:02:33.273014  390826 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:56 /usr/share/ca-certificates/380009.pem
	I0819 18:02:33.273115  390826 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/380009.pem
	I0819 18:02:33.278623  390826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/380009.pem /etc/ssl/certs/51391683.0"
	I0819 18:02:33.288625  390826 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 18:02:33.292635  390826 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 18:02:33.292700  390826 kubeadm.go:934] updating node {m02 192.168.39.167 8443 v1.31.0 crio true true} ...
	I0819 18:02:33.292792  390826 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-086149-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.167
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-086149 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 18:02:33.292862  390826 kube-vip.go:115] generating kube-vip config ...
	I0819 18:02:33.292923  390826 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 18:02:33.308724  390826 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 18:02:33.308804  390826 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0819 18:02:33.308863  390826 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 18:02:33.318019  390826 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0819 18:02:33.318070  390826 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0819 18:02:33.327419  390826 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0819 18:02:33.327441  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 18:02:33.327505  390826 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19468-372744/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0819 18:02:33.327540  390826 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19468-372744/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0819 18:02:33.327513  390826 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 18:02:33.331840  390826 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0819 18:02:33.331859  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0819 18:02:34.279980  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 18:02:34.280077  390826 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 18:02:34.285199  390826 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0819 18:02:34.285234  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0819 18:02:34.603755  390826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:02:34.619504  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 18:02:34.619621  390826 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 18:02:34.624859  390826 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0819 18:02:34.624891  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
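
The three binaries are fetched with a "checksum=file:<url>.sha256" query, i.e. the downloader validates each file against its published SHA-256 before caching and transferring it. A stand-alone sketch of that validation (the file names are placeholders, not paths from the log):

// sha256 check - illustrative sketch of the checksum validation performed by the download step.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
	"strings"
)

func fileSHA256(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	got, err := fileSHA256("kubelet")
	if err != nil {
		panic(err)
	}
	want, err := os.ReadFile("kubelet.sha256") // published alongside the release binary
	if err != nil {
		panic(err)
	}
	if got == strings.TrimSpace(string(want)) {
		fmt.Println("checksum OK")
	} else {
		fmt.Println("checksum mismatch")
	}
}
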
	I0819 18:02:35.017238  390826 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0819 18:02:35.027772  390826 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0819 18:02:35.046060  390826 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 18:02:35.063995  390826 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0819 18:02:35.081954  390826 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0819 18:02:35.085926  390826 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 18:02:35.097918  390826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:02:35.230726  390826 ssh_runner.go:195] Run: sudo systemctl start kubelet
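
The one-liner above strips any stale control-plane.minikube.internal entry from /etc/hosts and appends the VIP mapping before the kubelet is started. An equivalent sketch in Go, writing to a local copy rather than /etc/hosts itself (which needs root):

// hosts update - sketch of the grep/replace one-liner above; output path "hosts.new" is an assumption.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.39.254\tcontrol-plane.minikube.internal"
	raw, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(string(raw), "\n") {
		// Drop any previous mapping for the control-plane alias.
		if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
	fmt.Println("wrote hosts.new with", entry)
}
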
	I0819 18:02:35.256273  390826 host.go:66] Checking if "ha-086149" exists ...
	I0819 18:02:35.256696  390826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:02:35.256749  390826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:02:35.272157  390826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42187
	I0819 18:02:35.272599  390826 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:02:35.273101  390826 main.go:141] libmachine: Using API Version  1
	I0819 18:02:35.273133  390826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:02:35.273423  390826 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:02:35.273619  390826 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:02:35.273745  390826 start.go:317] joinCluster: &{Name:ha-086149 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-086149 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.167 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:02:35.273856  390826 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0819 18:02:35.273872  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:02:35.276695  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:02:35.277091  390826 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:02:35.277122  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:02:35.277325  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:02:35.277514  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:02:35.277691  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:02:35.277874  390826 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/id_rsa Username:docker}
	I0819 18:02:35.434664  390826 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.167 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 18:02:35.434728  390826 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token iv0xtp.asp8701sfrdl07f7 --discovery-token-ca-cert-hash sha256:3fcbd90565c5acbc36a47b2db682cb22dce9b172c9bf3af21e506ebb67608039 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-086149-m02 --control-plane --apiserver-advertise-address=192.168.39.167 --apiserver-bind-port=8443"
	I0819 18:02:55.605897  390826 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token iv0xtp.asp8701sfrdl07f7 --discovery-token-ca-cert-hash sha256:3fcbd90565c5acbc36a47b2db682cb22dce9b172c9bf3af21e506ebb67608039 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-086149-m02 --control-plane --apiserver-advertise-address=192.168.39.167 --apiserver-bind-port=8443": (20.171137965s)
	I0819 18:02:55.605943  390826 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0819 18:02:56.123662  390826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-086149-m02 minikube.k8s.io/updated_at=2024_08_19T18_02_56_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411 minikube.k8s.io/name=ha-086149 minikube.k8s.io/primary=false
	I0819 18:02:56.272852  390826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-086149-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0819 18:02:56.436503  390826 start.go:319] duration metric: took 21.162750418s to joinCluster
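
After kubeadm join returns, the new control-plane node is labeled and its node-role.kubernetes.io/control-plane:NoSchedule taint is removed so regular pods can schedule onto it. A client-go sketch, not part of the test tooling, that verifies the taint is really gone (the kubeconfig path is an assumption):

// taint check - minimal sketch verifying the control-plane taint removal above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-086149-m02", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, t := range node.Spec.Taints {
		if t.Key == "node-role.kubernetes.io/control-plane" {
			fmt.Println("taint still present:", t.Key, t.Effect)
			return
		}
	}
	fmt.Println("control-plane taint removed; pods can schedule on ha-086149-m02")
}
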
	I0819 18:02:56.436592  390826 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.167 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 18:02:56.436892  390826 config.go:182] Loaded profile config "ha-086149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:02:56.438255  390826 out.go:177] * Verifying Kubernetes components...
	I0819 18:02:56.439648  390826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:02:56.697352  390826 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 18:02:56.729948  390826 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 18:02:56.730270  390826 kapi.go:59] client config for ha-086149: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/client.crt", KeyFile:"/home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/client.key", CAFile:"/home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f18d20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0819 18:02:56.730341  390826 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.249:8443
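
Although the kubeconfig points at the HA VIP, the client is re-pointed at the first control-plane's own address until the new member is verified. A sketch of that kind of host override with client-go (the kubeconfig path and endpoint repeat values from the log; the helper itself is illustrative, not minikube's code):

// host override - rebuild a clientset against a specific control-plane endpoint.
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

func clientFor(host string) (*kubernetes.Clientset, error) {
	base, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19468-372744/kubeconfig")
	if err != nil {
		return nil, err
	}
	cfg := rest.CopyConfig(base) // keep the original config untouched
	cfg.Host = host              // e.g. "https://192.168.39.249:8443" instead of the VIP
	return kubernetes.NewForConfig(cfg)
}

func main() {
	if _, err := clientFor("https://192.168.39.249:8443"); err != nil {
		panic(err)
	}
}
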
	I0819 18:02:56.730553  390826 node_ready.go:35] waiting up to 6m0s for node "ha-086149-m02" to be "Ready" ...
	I0819 18:02:56.730668  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:02:56.730681  390826 round_trippers.go:469] Request Headers:
	I0819 18:02:56.730691  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:02:56.730697  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:02:56.740148  390826 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0819 18:02:57.231105  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:02:57.231133  390826 round_trippers.go:469] Request Headers:
	I0819 18:02:57.231158  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:02:57.231172  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:02:57.236076  390826 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 18:02:57.730875  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:02:57.730895  390826 round_trippers.go:469] Request Headers:
	I0819 18:02:57.730904  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:02:57.730908  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:02:57.735538  390826 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 18:02:58.231679  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:02:58.231704  390826 round_trippers.go:469] Request Headers:
	I0819 18:02:58.231713  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:02:58.231717  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:02:58.236296  390826 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 18:02:58.731499  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:02:58.731527  390826 round_trippers.go:469] Request Headers:
	I0819 18:02:58.731537  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:02:58.731543  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:02:58.737763  390826 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0819 18:02:58.738771  390826 node_ready.go:53] node "ha-086149-m02" has status "Ready":"False"
	I0819 18:02:59.231010  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:02:59.231034  390826 round_trippers.go:469] Request Headers:
	I0819 18:02:59.231045  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:02:59.231052  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:02:59.234392  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:02:59.731233  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:02:59.731255  390826 round_trippers.go:469] Request Headers:
	I0819 18:02:59.731263  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:02:59.731267  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:02:59.734343  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:00.231336  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:00.231365  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:00.231376  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:00.231381  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:00.234918  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:00.730874  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:00.730896  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:00.730906  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:00.730910  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:00.733879  390826 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:03:01.230977  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:01.231003  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:01.231012  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:01.231017  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:01.234331  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:01.235035  390826 node_ready.go:53] node "ha-086149-m02" has status "Ready":"False"
	I0819 18:03:01.731548  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:01.731578  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:01.731590  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:01.731598  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:01.734946  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:02.231110  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:02.231141  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:02.231153  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:02.231161  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:02.235244  390826 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 18:03:02.731504  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:02.731539  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:02.731548  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:02.731552  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:02.734876  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:03.231781  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:03.231812  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:03.231821  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:03.231827  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:03.235825  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:03.236470  390826 node_ready.go:53] node "ha-086149-m02" has status "Ready":"False"
	I0819 18:03:03.730747  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:03.730778  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:03.730799  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:03.730805  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:03.792652  390826 round_trippers.go:574] Response Status: 200 OK in 61 milliseconds
	I0819 18:03:04.230764  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:04.230790  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:04.230798  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:04.230802  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:04.234364  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:04.730972  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:04.731052  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:04.731121  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:04.731132  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:04.735082  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:05.231072  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:05.231102  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:05.231116  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:05.231123  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:05.234442  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:05.730901  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:05.730927  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:05.730938  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:05.730944  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:05.734154  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:05.734752  390826 node_ready.go:53] node "ha-086149-m02" has status "Ready":"False"
	I0819 18:03:06.231650  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:06.231698  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:06.231710  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:06.231715  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:06.235728  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:06.730827  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:06.730851  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:06.730860  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:06.730864  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:06.734133  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:07.231066  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:07.231089  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:07.231097  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:07.231102  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:07.234190  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:07.731381  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:07.731407  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:07.731417  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:07.731423  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:07.734338  390826 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:03:07.735019  390826 node_ready.go:53] node "ha-086149-m02" has status "Ready":"False"
	I0819 18:03:08.231232  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:08.231257  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:08.231266  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:08.231269  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:08.234657  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:08.731627  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:08.731652  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:08.731660  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:08.731665  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:08.735050  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:09.230998  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:09.231024  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:09.231034  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:09.231048  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:09.234466  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:09.731200  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:09.731221  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:09.731228  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:09.731233  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:09.734342  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:09.735050  390826 node_ready.go:53] node "ha-086149-m02" has status "Ready":"False"
	I0819 18:03:10.231484  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:10.231508  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:10.231516  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:10.231522  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:10.234963  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:10.731474  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:10.731497  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:10.731505  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:10.731509  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:10.734155  390826 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:03:11.231584  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:11.231610  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:11.231618  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:11.231622  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:11.235119  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:11.731096  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:11.731119  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:11.731126  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:11.731130  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:11.733679  390826 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:03:12.231700  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:12.231725  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:12.231733  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:12.231737  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:12.235030  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:12.235689  390826 node_ready.go:53] node "ha-086149-m02" has status "Ready":"False"
	I0819 18:03:12.731249  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:12.731275  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:12.731283  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:12.731287  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:12.734643  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:13.231771  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:13.231796  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:13.231805  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:13.231809  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:13.235248  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:13.731231  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:13.731256  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:13.731264  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:13.731268  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:13.734118  390826 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:03:14.231122  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:14.231145  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:14.231157  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:14.231165  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:14.234050  390826 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:03:14.731726  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:14.731754  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:14.731765  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:14.731770  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:14.735456  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:14.736766  390826 node_ready.go:53] node "ha-086149-m02" has status "Ready":"False"
	I0819 18:03:15.231145  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:15.231169  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:15.231180  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:15.231187  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:15.234498  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:15.731033  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:15.731056  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:15.731064  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:15.731068  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:15.734005  390826 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:03:15.734737  390826 node_ready.go:49] node "ha-086149-m02" has status "Ready":"True"
	I0819 18:03:15.734768  390826 node_ready.go:38] duration metric: took 19.004186055s for node "ha-086149-m02" to be "Ready" ...
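
The wait above polls GET /api/v1/nodes/ha-086149-m02 roughly every 500ms until the node reports Ready. A client-go sketch of the same condition check (the kubeconfig path and poll interval are assumptions, not taken from minikube's implementation):

// readypoll - poll a node until its Ready condition is True, mirroring node_ready.go above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-086149-m02", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // the log above polls at roughly this interval
	}
}
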
	I0819 18:03:15.734778  390826 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 18:03:15.734889  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0819 18:03:15.734902  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:15.734911  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:15.734916  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:15.739266  390826 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 18:03:15.745067  390826 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-8fjpd" in "kube-system" namespace to be "Ready" ...
	I0819 18:03:15.745161  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-8fjpd
	I0819 18:03:15.745174  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:15.745181  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:15.745187  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:15.747661  390826 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:03:15.748193  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149
	I0819 18:03:15.748207  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:15.748214  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:15.748218  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:15.750451  390826 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:03:15.750961  390826 pod_ready.go:93] pod "coredns-6f6b679f8f-8fjpd" in "kube-system" namespace has status "Ready":"True"
	I0819 18:03:15.750984  390826 pod_ready.go:82] duration metric: took 5.891312ms for pod "coredns-6f6b679f8f-8fjpd" in "kube-system" namespace to be "Ready" ...
	I0819 18:03:15.750995  390826 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-p65cb" in "kube-system" namespace to be "Ready" ...
	I0819 18:03:15.751059  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-p65cb
	I0819 18:03:15.751069  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:15.751079  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:15.751087  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:15.753277  390826 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:03:15.753835  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149
	I0819 18:03:15.753852  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:15.753861  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:15.753866  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:15.757857  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:15.758499  390826 pod_ready.go:93] pod "coredns-6f6b679f8f-p65cb" in "kube-system" namespace has status "Ready":"True"
	I0819 18:03:15.758517  390826 pod_ready.go:82] duration metric: took 7.514249ms for pod "coredns-6f6b679f8f-p65cb" in "kube-system" namespace to be "Ready" ...
	I0819 18:03:15.758525  390826 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-086149" in "kube-system" namespace to be "Ready" ...
	I0819 18:03:15.758580  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-086149
	I0819 18:03:15.758589  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:15.758595  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:15.758599  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:15.760699  390826 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:03:15.761371  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149
	I0819 18:03:15.761388  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:15.761398  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:15.761405  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:15.763562  390826 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:03:15.764008  390826 pod_ready.go:93] pod "etcd-ha-086149" in "kube-system" namespace has status "Ready":"True"
	I0819 18:03:15.764023  390826 pod_ready.go:82] duration metric: took 5.492637ms for pod "etcd-ha-086149" in "kube-system" namespace to be "Ready" ...
	I0819 18:03:15.764031  390826 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-086149-m02" in "kube-system" namespace to be "Ready" ...
	I0819 18:03:15.764072  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-086149-m02
	I0819 18:03:15.764080  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:15.764087  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:15.764090  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:15.765969  390826 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 18:03:15.766584  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:15.766601  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:15.766608  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:15.766613  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:15.768705  390826 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:03:15.769197  390826 pod_ready.go:93] pod "etcd-ha-086149-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 18:03:15.769216  390826 pod_ready.go:82] duration metric: took 5.179803ms for pod "etcd-ha-086149-m02" in "kube-system" namespace to be "Ready" ...
	I0819 18:03:15.769231  390826 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-086149" in "kube-system" namespace to be "Ready" ...
	I0819 18:03:15.931631  390826 request.go:632] Waited for 162.326929ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-086149
	I0819 18:03:15.931721  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-086149
	I0819 18:03:15.931728  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:15.931742  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:15.931759  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:15.935829  390826 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 18:03:16.131866  390826 request.go:632] Waited for 195.373418ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-086149
	I0819 18:03:16.131924  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149
	I0819 18:03:16.131928  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:16.131936  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:16.131940  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:16.135634  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:16.136131  390826 pod_ready.go:93] pod "kube-apiserver-ha-086149" in "kube-system" namespace has status "Ready":"True"
	I0819 18:03:16.136151  390826 pod_ready.go:82] duration metric: took 366.910938ms for pod "kube-apiserver-ha-086149" in "kube-system" namespace to be "Ready" ...
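
The "Waited ... due to client-side throttling" entries come from client-go's default client-side rate limiter (QPS 5, burst 10), not from API priority and fairness. A sketch of raising those limits on a rest.Config (the numbers are illustrative, not what minikube uses):

// throttling - loosen client-go's client-side rate limiter before building a clientset.
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // steady-state requests per second before throttling kicks in
	cfg.Burst = 100 // short bursts allowed above QPS
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
}
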
	I0819 18:03:16.136163  390826 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-086149-m02" in "kube-system" namespace to be "Ready" ...
	I0819 18:03:16.331318  390826 request.go:632] Waited for 195.07968ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-086149-m02
	I0819 18:03:16.331422  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-086149-m02
	I0819 18:03:16.331434  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:16.331447  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:16.331452  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:16.334968  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:16.532129  390826 request.go:632] Waited for 196.406522ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:16.532207  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:16.532217  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:16.532237  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:16.532246  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:16.535691  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:16.536191  390826 pod_ready.go:93] pod "kube-apiserver-ha-086149-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 18:03:16.536210  390826 pod_ready.go:82] duration metric: took 400.038947ms for pod "kube-apiserver-ha-086149-m02" in "kube-system" namespace to be "Ready" ...
	I0819 18:03:16.536233  390826 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-086149" in "kube-system" namespace to be "Ready" ...
	I0819 18:03:16.731414  390826 request.go:632] Waited for 195.094037ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-086149
	I0819 18:03:16.731500  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-086149
	I0819 18:03:16.731512  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:16.731525  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:16.731533  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:16.735046  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:16.931185  390826 request.go:632] Waited for 195.318382ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-086149
	I0819 18:03:16.931265  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149
	I0819 18:03:16.931272  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:16.931282  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:16.931291  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:16.934590  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:16.935075  390826 pod_ready.go:93] pod "kube-controller-manager-ha-086149" in "kube-system" namespace has status "Ready":"True"
	I0819 18:03:16.935096  390826 pod_ready.go:82] duration metric: took 398.853679ms for pod "kube-controller-manager-ha-086149" in "kube-system" namespace to be "Ready" ...
	I0819 18:03:16.935110  390826 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-086149-m02" in "kube-system" namespace to be "Ready" ...
	I0819 18:03:17.131090  390826 request.go:632] Waited for 195.897067ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-086149-m02
	I0819 18:03:17.131170  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-086149-m02
	I0819 18:03:17.131176  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:17.131183  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:17.131195  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:17.134780  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:17.331893  390826 request.go:632] Waited for 196.406154ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:17.332004  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:17.332016  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:17.332028  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:17.332037  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:17.335217  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:17.336031  390826 pod_ready.go:93] pod "kube-controller-manager-ha-086149-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 18:03:17.336050  390826 pod_ready.go:82] duration metric: took 400.932335ms for pod "kube-controller-manager-ha-086149-m02" in "kube-system" namespace to be "Ready" ...
	I0819 18:03:17.336063  390826 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fwkf2" in "kube-system" namespace to be "Ready" ...
	I0819 18:03:17.531346  390826 request.go:632] Waited for 195.177557ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fwkf2
	I0819 18:03:17.531423  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fwkf2
	I0819 18:03:17.531432  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:17.531443  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:17.531454  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:17.534764  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:17.731902  390826 request.go:632] Waited for 196.380838ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-086149
	I0819 18:03:17.731973  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149
	I0819 18:03:17.731980  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:17.732099  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:17.732153  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:17.735125  390826 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:03:17.735706  390826 pod_ready.go:93] pod "kube-proxy-fwkf2" in "kube-system" namespace has status "Ready":"True"
	I0819 18:03:17.735725  390826 pod_ready.go:82] duration metric: took 399.655828ms for pod "kube-proxy-fwkf2" in "kube-system" namespace to be "Ready" ...
	I0819 18:03:17.735736  390826 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vx94r" in "kube-system" namespace to be "Ready" ...
	I0819 18:03:17.931749  390826 request.go:632] Waited for 195.943138ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vx94r
	I0819 18:03:17.931819  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vx94r
	I0819 18:03:17.931824  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:17.931832  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:17.931839  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:17.935457  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:18.131628  390826 request.go:632] Waited for 195.400935ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:18.131709  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:18.131715  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:18.131723  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:18.131728  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:18.135208  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:18.136090  390826 pod_ready.go:93] pod "kube-proxy-vx94r" in "kube-system" namespace has status "Ready":"True"
	I0819 18:03:18.136112  390826 pod_ready.go:82] duration metric: took 400.367682ms for pod "kube-proxy-vx94r" in "kube-system" namespace to be "Ready" ...
	I0819 18:03:18.136123  390826 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-086149" in "kube-system" namespace to be "Ready" ...
	I0819 18:03:18.331374  390826 request.go:632] Waited for 195.162024ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-086149
	I0819 18:03:18.331465  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-086149
	I0819 18:03:18.331472  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:18.331484  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:18.331491  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:18.334662  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:18.531670  390826 request.go:632] Waited for 196.392053ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-086149
	I0819 18:03:18.531752  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149
	I0819 18:03:18.531757  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:18.531765  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:18.531772  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:18.535077  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:18.535730  390826 pod_ready.go:93] pod "kube-scheduler-ha-086149" in "kube-system" namespace has status "Ready":"True"
	I0819 18:03:18.535753  390826 pod_ready.go:82] duration metric: took 399.624046ms for pod "kube-scheduler-ha-086149" in "kube-system" namespace to be "Ready" ...
	I0819 18:03:18.535765  390826 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-086149-m02" in "kube-system" namespace to be "Ready" ...
	I0819 18:03:18.731816  390826 request.go:632] Waited for 195.936826ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-086149-m02
	I0819 18:03:18.731898  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-086149-m02
	I0819 18:03:18.731904  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:18.731910  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:18.731916  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:18.735060  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:18.931057  390826 request.go:632] Waited for 195.342395ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:18.931154  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:03:18.931161  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:18.931172  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:18.931177  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:18.934179  390826 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:03:18.935067  390826 pod_ready.go:93] pod "kube-scheduler-ha-086149-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 18:03:18.935089  390826 pod_ready.go:82] duration metric: took 399.3179ms for pod "kube-scheduler-ha-086149-m02" in "kube-system" namespace to be "Ready" ...
	I0819 18:03:18.935103  390826 pod_ready.go:39] duration metric: took 3.20028863s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
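
The extra wait covers every system-critical component by label (k8s-app=kube-dns, component=etcd, component=kube-apiserver, and so on). A client-go sketch that lists kube-system pods per selector and reports their Ready condition (the kubeconfig path and the selector subset are assumptions):

// system pod check - label-based readiness report for kube-system components.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for _, sel := range []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver"} {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			fmt.Printf("%s ready=%v\n", p.Name, ready)
		}
	}
}
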
	I0819 18:03:18.935122  390826 api_server.go:52] waiting for apiserver process to appear ...
	I0819 18:03:18.935181  390826 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:03:18.951375  390826 api_server.go:72] duration metric: took 22.514748322s to wait for apiserver process to appear ...
	I0819 18:03:18.951401  390826 api_server.go:88] waiting for apiserver healthz status ...
	I0819 18:03:18.951426  390826 api_server.go:253] Checking apiserver healthz at https://192.168.39.249:8443/healthz ...
	I0819 18:03:18.957673  390826 api_server.go:279] https://192.168.39.249:8443/healthz returned 200:
	ok
	I0819 18:03:18.957760  390826 round_trippers.go:463] GET https://192.168.39.249:8443/version
	I0819 18:03:18.957772  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:18.957784  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:18.957799  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:18.958846  390826 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 18:03:18.958950  390826 api_server.go:141] control plane version: v1.31.0
	I0819 18:03:18.958982  390826 api_server.go:131] duration metric: took 7.572392ms to wait for apiserver health ...
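
The health check is a plain GET against /healthz followed by /version to read back the control-plane version. A stand-alone probe sketch (it skips TLS verification for brevity, which is an assumption of this sketch only; minikube verifies against .minikube/ca.crt):

// healthz probe - query the API server's /healthz and /version endpoints directly.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for _, path := range []string{"/healthz", "/version"} {
		resp, err := client.Get("https://192.168.39.249:8443" + path)
		if err != nil {
			fmt.Println(path, "error:", err)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Println(path, resp.StatusCode, string(body))
	}
}
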
	I0819 18:03:18.958993  390826 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 18:03:19.131417  390826 request.go:632] Waited for 172.338441ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0819 18:03:19.131494  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0819 18:03:19.131503  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:19.131511  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:19.131519  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:19.138959  390826 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0819 18:03:19.144476  390826 system_pods.go:59] 17 kube-system pods found
	I0819 18:03:19.144510  390826 system_pods.go:61] "coredns-6f6b679f8f-8fjpd" [4bedb900-107a-4f7e-aae7-391b18da4a26] Running
	I0819 18:03:19.144515  390826 system_pods.go:61] "coredns-6f6b679f8f-p65cb" [7f30449e-d4ea-4d6f-a63a-08551024bd04] Running
	I0819 18:03:19.144520  390826 system_pods.go:61] "etcd-ha-086149" [0dc3ab02-31e8-4110-accd-85d2e18db232] Running
	I0819 18:03:19.144524  390826 system_pods.go:61] "etcd-ha-086149-m02" [06fcadf6-a4b1-40c8-8ce8-bc1df1fad746] Running
	I0819 18:03:19.144527  390826 system_pods.go:61] "kindnet-dgj9c" [142f260c-d74e-411f-ac87-f4398f573b94] Running
	I0819 18:03:19.144530  390826 system_pods.go:61] "kindnet-vb66s" [9322737a-5f8a-4d5a-a7d1-ba076bc8f2d8] Running
	I0819 18:03:19.144534  390826 system_pods.go:61] "kube-apiserver-ha-086149" [98466e03-c8b3-4d70-97b0-ba24afe776a9] Running
	I0819 18:03:19.144537  390826 system_pods.go:61] "kube-apiserver-ha-086149-m02" [afbc7c61-72ec-4571-9a5e-3d8afd08ae6b] Running
	I0819 18:03:19.144540  390826 system_pods.go:61] "kube-controller-manager-ha-086149" [910295fd-3d2e-4390-b9cd-9e1169813375] Running
	I0819 18:03:19.144544  390826 system_pods.go:61] "kube-controller-manager-ha-086149-m02" [dad58fc3-85d8-444c-bfb8-3a74c5016f32] Running
	I0819 18:03:19.144547  390826 system_pods.go:61] "kube-proxy-fwkf2" [001a3fe7-633c-44f8-9a8c-7401cec7af54] Running
	I0819 18:03:19.144550  390826 system_pods.go:61] "kube-proxy-vx94r" [8960702f-2f02-4e67-9d4f-02860491e5f2] Running
	I0819 18:03:19.144554  390826 system_pods.go:61] "kube-scheduler-ha-086149" [6d113319-d44e-4a5a-8e0a-f0a890e13e43] Running
	I0819 18:03:19.144557  390826 system_pods.go:61] "kube-scheduler-ha-086149-m02" [5d64ff86-a24d-4836-a7d7-ebb968bb39c8] Running
	I0819 18:03:19.144560  390826 system_pods.go:61] "kube-vip-ha-086149" [25176ed4-e5b0-4e5e-9835-736c856d2643] Running
	I0819 18:03:19.144563  390826 system_pods.go:61] "kube-vip-ha-086149-m02" [8c6b400d-f73e-44b5-a31f-3607329360be] Running
	I0819 18:03:19.144566  390826 system_pods.go:61] "storage-provisioner" [c12159a8-5f84-4d19-aa54-7b56a9669f6c] Running
	I0819 18:03:19.144572  390826 system_pods.go:74] duration metric: took 185.572931ms to wait for pod list to return data ...
	I0819 18:03:19.144587  390826 default_sa.go:34] waiting for default service account to be created ...
	I0819 18:03:19.331565  390826 request.go:632] Waited for 186.891864ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/default/serviceaccounts
	I0819 18:03:19.331653  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/default/serviceaccounts
	I0819 18:03:19.331663  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:19.331685  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:19.331691  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:19.335645  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:19.335910  390826 default_sa.go:45] found service account: "default"
	I0819 18:03:19.335931  390826 default_sa.go:55] duration metric: took 191.337823ms for default service account to be created ...
	I0819 18:03:19.335940  390826 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 18:03:19.531534  390826 request.go:632] Waited for 195.502082ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0819 18:03:19.531599  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0819 18:03:19.531606  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:19.531620  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:19.531628  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:19.536807  390826 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 18:03:19.543031  390826 system_pods.go:86] 17 kube-system pods found
	I0819 18:03:19.543067  390826 system_pods.go:89] "coredns-6f6b679f8f-8fjpd" [4bedb900-107a-4f7e-aae7-391b18da4a26] Running
	I0819 18:03:19.543076  390826 system_pods.go:89] "coredns-6f6b679f8f-p65cb" [7f30449e-d4ea-4d6f-a63a-08551024bd04] Running
	I0819 18:03:19.543082  390826 system_pods.go:89] "etcd-ha-086149" [0dc3ab02-31e8-4110-accd-85d2e18db232] Running
	I0819 18:03:19.543088  390826 system_pods.go:89] "etcd-ha-086149-m02" [06fcadf6-a4b1-40c8-8ce8-bc1df1fad746] Running
	I0819 18:03:19.543093  390826 system_pods.go:89] "kindnet-dgj9c" [142f260c-d74e-411f-ac87-f4398f573b94] Running
	I0819 18:03:19.543099  390826 system_pods.go:89] "kindnet-vb66s" [9322737a-5f8a-4d5a-a7d1-ba076bc8f2d8] Running
	I0819 18:03:19.543102  390826 system_pods.go:89] "kube-apiserver-ha-086149" [98466e03-c8b3-4d70-97b0-ba24afe776a9] Running
	I0819 18:03:19.543106  390826 system_pods.go:89] "kube-apiserver-ha-086149-m02" [afbc7c61-72ec-4571-9a5e-3d8afd08ae6b] Running
	I0819 18:03:19.543111  390826 system_pods.go:89] "kube-controller-manager-ha-086149" [910295fd-3d2e-4390-b9cd-9e1169813375] Running
	I0819 18:03:19.543117  390826 system_pods.go:89] "kube-controller-manager-ha-086149-m02" [dad58fc3-85d8-444c-bfb8-3a74c5016f32] Running
	I0819 18:03:19.543127  390826 system_pods.go:89] "kube-proxy-fwkf2" [001a3fe7-633c-44f8-9a8c-7401cec7af54] Running
	I0819 18:03:19.543132  390826 system_pods.go:89] "kube-proxy-vx94r" [8960702f-2f02-4e67-9d4f-02860491e5f2] Running
	I0819 18:03:19.543141  390826 system_pods.go:89] "kube-scheduler-ha-086149" [6d113319-d44e-4a5a-8e0a-f0a890e13e43] Running
	I0819 18:03:19.543147  390826 system_pods.go:89] "kube-scheduler-ha-086149-m02" [5d64ff86-a24d-4836-a7d7-ebb968bb39c8] Running
	I0819 18:03:19.543152  390826 system_pods.go:89] "kube-vip-ha-086149" [25176ed4-e5b0-4e5e-9835-736c856d2643] Running
	I0819 18:03:19.543157  390826 system_pods.go:89] "kube-vip-ha-086149-m02" [8c6b400d-f73e-44b5-a31f-3607329360be] Running
	I0819 18:03:19.543164  390826 system_pods.go:89] "storage-provisioner" [c12159a8-5f84-4d19-aa54-7b56a9669f6c] Running
	I0819 18:03:19.543173  390826 system_pods.go:126] duration metric: took 207.224242ms to wait for k8s-apps to be running ...
	I0819 18:03:19.543184  390826 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 18:03:19.543240  390826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:03:19.559271  390826 system_svc.go:56] duration metric: took 16.074576ms WaitForService to wait for kubelet
	I0819 18:03:19.559304  390826 kubeadm.go:582] duration metric: took 23.122680186s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 18:03:19.559326  390826 node_conditions.go:102] verifying NodePressure condition ...
	I0819 18:03:19.731891  390826 request.go:632] Waited for 172.461302ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes
	I0819 18:03:19.731971  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes
	I0819 18:03:19.731978  390826 round_trippers.go:469] Request Headers:
	I0819 18:03:19.731996  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:03:19.732004  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:03:19.735656  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:03:19.736479  390826 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 18:03:19.736505  390826 node_conditions.go:123] node cpu capacity is 2
	I0819 18:03:19.736518  390826 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 18:03:19.736521  390826 node_conditions.go:123] node cpu capacity is 2
	I0819 18:03:19.736526  390826 node_conditions.go:105] duration metric: took 177.195708ms to run NodePressure ...
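
The NodePressure check reads each node's capacity from the API, which is where the ephemeral-storage and cpu numbers above come from. A minimal client-go sketch that lists nodes and prints the same capacities; the kubeconfig path is a placeholder assumption:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Hypothetical kubeconfig path for the test cluster.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
                n.Name,
                n.Status.Capacity.Cpu().String(),
                n.Status.Capacity.StorageEphemeral().String())
        }
    }
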
	I0819 18:03:19.736541  390826 start.go:241] waiting for startup goroutines ...
	I0819 18:03:19.736573  390826 start.go:255] writing updated cluster config ...
	I0819 18:03:19.738641  390826 out.go:201] 
	I0819 18:03:19.740006  390826 config.go:182] Loaded profile config "ha-086149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:03:19.740106  390826 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/config.json ...
	I0819 18:03:19.741755  390826 out.go:177] * Starting "ha-086149-m03" control-plane node in "ha-086149" cluster
	I0819 18:03:19.742817  390826 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 18:03:19.742845  390826 cache.go:56] Caching tarball of preloaded images
	I0819 18:03:19.742979  390826 preload.go:172] Found /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 18:03:19.742997  390826 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 18:03:19.743124  390826 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/config.json ...
	I0819 18:03:19.743337  390826 start.go:360] acquireMachinesLock for ha-086149-m03: {Name:mk24ba67a747357e9ce40f1e460d2bb0bc59cc75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 18:03:19.743395  390826 start.go:364] duration metric: took 31.394µs to acquireMachinesLock for "ha-086149-m03"
	I0819 18:03:19.743420  390826 start.go:93] Provisioning new machine with config: &{Name:ha-086149 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-086149 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.167 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 18:03:19.743550  390826 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0819 18:03:19.744878  390826 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 18:03:19.744991  390826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:03:19.745035  390826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:03:19.760980  390826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37165
	I0819 18:03:19.761382  390826 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:03:19.761864  390826 main.go:141] libmachine: Using API Version  1
	I0819 18:03:19.761897  390826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:03:19.762259  390826 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:03:19.762470  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetMachineName
	I0819 18:03:19.762620  390826 main.go:141] libmachine: (ha-086149-m03) Calling .DriverName
	I0819 18:03:19.762770  390826 start.go:159] libmachine.API.Create for "ha-086149" (driver="kvm2")
	I0819 18:03:19.762798  390826 client.go:168] LocalClient.Create starting
	I0819 18:03:19.762836  390826 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem
	I0819 18:03:19.762870  390826 main.go:141] libmachine: Decoding PEM data...
	I0819 18:03:19.762886  390826 main.go:141] libmachine: Parsing certificate...
	I0819 18:03:19.762957  390826 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem
	I0819 18:03:19.762978  390826 main.go:141] libmachine: Decoding PEM data...
	I0819 18:03:19.762992  390826 main.go:141] libmachine: Parsing certificate...
	I0819 18:03:19.763008  390826 main.go:141] libmachine: Running pre-create checks...
	I0819 18:03:19.763016  390826 main.go:141] libmachine: (ha-086149-m03) Calling .PreCreateCheck
	I0819 18:03:19.763200  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetConfigRaw
	I0819 18:03:19.763570  390826 main.go:141] libmachine: Creating machine...
	I0819 18:03:19.763589  390826 main.go:141] libmachine: (ha-086149-m03) Calling .Create
	I0819 18:03:19.763734  390826 main.go:141] libmachine: (ha-086149-m03) Creating KVM machine...
	I0819 18:03:19.764990  390826 main.go:141] libmachine: (ha-086149-m03) DBG | found existing default KVM network
	I0819 18:03:19.765107  390826 main.go:141] libmachine: (ha-086149-m03) DBG | found existing private KVM network mk-ha-086149
	I0819 18:03:19.765251  390826 main.go:141] libmachine: (ha-086149-m03) Setting up store path in /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m03 ...
	I0819 18:03:19.765273  390826 main.go:141] libmachine: (ha-086149-m03) Building disk image from file:///home/jenkins/minikube-integration/19468-372744/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 18:03:19.765333  390826 main.go:141] libmachine: (ha-086149-m03) DBG | I0819 18:03:19.765243  391617 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 18:03:19.765467  390826 main.go:141] libmachine: (ha-086149-m03) Downloading /home/jenkins/minikube-integration/19468-372744/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19468-372744/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 18:03:20.039210  390826 main.go:141] libmachine: (ha-086149-m03) DBG | I0819 18:03:20.039078  391617 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m03/id_rsa...
	I0819 18:03:20.302554  390826 main.go:141] libmachine: (ha-086149-m03) DBG | I0819 18:03:20.302429  391617 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m03/ha-086149-m03.rawdisk...
	I0819 18:03:20.302587  390826 main.go:141] libmachine: (ha-086149-m03) DBG | Writing magic tar header
	I0819 18:03:20.302599  390826 main.go:141] libmachine: (ha-086149-m03) DBG | Writing SSH key tar header
	I0819 18:03:20.302607  390826 main.go:141] libmachine: (ha-086149-m03) DBG | I0819 18:03:20.302554  391617 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m03 ...
	I0819 18:03:20.302720  390826 main.go:141] libmachine: (ha-086149-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m03
	I0819 18:03:20.302762  390826 main.go:141] libmachine: (ha-086149-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744/.minikube/machines
	I0819 18:03:20.302777  390826 main.go:141] libmachine: (ha-086149-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 18:03:20.302789  390826 main.go:141] libmachine: (ha-086149-m03) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m03 (perms=drwx------)
	I0819 18:03:20.302843  390826 main.go:141] libmachine: (ha-086149-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744
	I0819 18:03:20.302895  390826 main.go:141] libmachine: (ha-086149-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 18:03:20.302913  390826 main.go:141] libmachine: (ha-086149-m03) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744/.minikube/machines (perms=drwxr-xr-x)
	I0819 18:03:20.302928  390826 main.go:141] libmachine: (ha-086149-m03) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744/.minikube (perms=drwxr-xr-x)
	I0819 18:03:20.302936  390826 main.go:141] libmachine: (ha-086149-m03) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744 (perms=drwxrwxr-x)
	I0819 18:03:20.302947  390826 main.go:141] libmachine: (ha-086149-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 18:03:20.302958  390826 main.go:141] libmachine: (ha-086149-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 18:03:20.302976  390826 main.go:141] libmachine: (ha-086149-m03) Creating domain...
	I0819 18:03:20.302994  390826 main.go:141] libmachine: (ha-086149-m03) DBG | Checking permissions on dir: /home/jenkins
	I0819 18:03:20.303012  390826 main.go:141] libmachine: (ha-086149-m03) DBG | Checking permissions on dir: /home
	I0819 18:03:20.303023  390826 main.go:141] libmachine: (ha-086149-m03) DBG | Skipping /home - not owner
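
The "Creating ssh key" step above writes a fresh RSA keypair for the new machine before the disk image is assembled. A minimal sketch of that operation using golang.org/x/crypto/ssh; file names and the key size are assumptions for illustration:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        // Private key in PEM, written with the -rw------- permissions seen later in the log.
        privPEM := pem.EncodeToMemory(&pem.Block{
            Type:  "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(key),
        })
        if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
            panic(err)
        }
        // Public key in authorized_keys format, injected into the guest at build time.
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            panic(err)
        }
        if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
            panic(err)
        }
    }
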
	I0819 18:03:20.303803  390826 main.go:141] libmachine: (ha-086149-m03) define libvirt domain using xml: 
	I0819 18:03:20.303825  390826 main.go:141] libmachine: (ha-086149-m03) <domain type='kvm'>
	I0819 18:03:20.303836  390826 main.go:141] libmachine: (ha-086149-m03)   <name>ha-086149-m03</name>
	I0819 18:03:20.303844  390826 main.go:141] libmachine: (ha-086149-m03)   <memory unit='MiB'>2200</memory>
	I0819 18:03:20.303874  390826 main.go:141] libmachine: (ha-086149-m03)   <vcpu>2</vcpu>
	I0819 18:03:20.303896  390826 main.go:141] libmachine: (ha-086149-m03)   <features>
	I0819 18:03:20.303906  390826 main.go:141] libmachine: (ha-086149-m03)     <acpi/>
	I0819 18:03:20.303915  390826 main.go:141] libmachine: (ha-086149-m03)     <apic/>
	I0819 18:03:20.303920  390826 main.go:141] libmachine: (ha-086149-m03)     <pae/>
	I0819 18:03:20.303927  390826 main.go:141] libmachine: (ha-086149-m03)     
	I0819 18:03:20.303933  390826 main.go:141] libmachine: (ha-086149-m03)   </features>
	I0819 18:03:20.303940  390826 main.go:141] libmachine: (ha-086149-m03)   <cpu mode='host-passthrough'>
	I0819 18:03:20.303945  390826 main.go:141] libmachine: (ha-086149-m03)   
	I0819 18:03:20.303950  390826 main.go:141] libmachine: (ha-086149-m03)   </cpu>
	I0819 18:03:20.303955  390826 main.go:141] libmachine: (ha-086149-m03)   <os>
	I0819 18:03:20.303962  390826 main.go:141] libmachine: (ha-086149-m03)     <type>hvm</type>
	I0819 18:03:20.303968  390826 main.go:141] libmachine: (ha-086149-m03)     <boot dev='cdrom'/>
	I0819 18:03:20.303973  390826 main.go:141] libmachine: (ha-086149-m03)     <boot dev='hd'/>
	I0819 18:03:20.303979  390826 main.go:141] libmachine: (ha-086149-m03)     <bootmenu enable='no'/>
	I0819 18:03:20.303983  390826 main.go:141] libmachine: (ha-086149-m03)   </os>
	I0819 18:03:20.303989  390826 main.go:141] libmachine: (ha-086149-m03)   <devices>
	I0819 18:03:20.303996  390826 main.go:141] libmachine: (ha-086149-m03)     <disk type='file' device='cdrom'>
	I0819 18:03:20.304004  390826 main.go:141] libmachine: (ha-086149-m03)       <source file='/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m03/boot2docker.iso'/>
	I0819 18:03:20.304015  390826 main.go:141] libmachine: (ha-086149-m03)       <target dev='hdc' bus='scsi'/>
	I0819 18:03:20.304023  390826 main.go:141] libmachine: (ha-086149-m03)       <readonly/>
	I0819 18:03:20.304027  390826 main.go:141] libmachine: (ha-086149-m03)     </disk>
	I0819 18:03:20.304034  390826 main.go:141] libmachine: (ha-086149-m03)     <disk type='file' device='disk'>
	I0819 18:03:20.304043  390826 main.go:141] libmachine: (ha-086149-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 18:03:20.304051  390826 main.go:141] libmachine: (ha-086149-m03)       <source file='/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m03/ha-086149-m03.rawdisk'/>
	I0819 18:03:20.304058  390826 main.go:141] libmachine: (ha-086149-m03)       <target dev='hda' bus='virtio'/>
	I0819 18:03:20.304063  390826 main.go:141] libmachine: (ha-086149-m03)     </disk>
	I0819 18:03:20.304071  390826 main.go:141] libmachine: (ha-086149-m03)     <interface type='network'>
	I0819 18:03:20.304076  390826 main.go:141] libmachine: (ha-086149-m03)       <source network='mk-ha-086149'/>
	I0819 18:03:20.304083  390826 main.go:141] libmachine: (ha-086149-m03)       <model type='virtio'/>
	I0819 18:03:20.304104  390826 main.go:141] libmachine: (ha-086149-m03)     </interface>
	I0819 18:03:20.304122  390826 main.go:141] libmachine: (ha-086149-m03)     <interface type='network'>
	I0819 18:03:20.304149  390826 main.go:141] libmachine: (ha-086149-m03)       <source network='default'/>
	I0819 18:03:20.304166  390826 main.go:141] libmachine: (ha-086149-m03)       <model type='virtio'/>
	I0819 18:03:20.304181  390826 main.go:141] libmachine: (ha-086149-m03)     </interface>
	I0819 18:03:20.304193  390826 main.go:141] libmachine: (ha-086149-m03)     <serial type='pty'>
	I0819 18:03:20.304203  390826 main.go:141] libmachine: (ha-086149-m03)       <target port='0'/>
	I0819 18:03:20.304214  390826 main.go:141] libmachine: (ha-086149-m03)     </serial>
	I0819 18:03:20.304231  390826 main.go:141] libmachine: (ha-086149-m03)     <console type='pty'>
	I0819 18:03:20.304249  390826 main.go:141] libmachine: (ha-086149-m03)       <target type='serial' port='0'/>
	I0819 18:03:20.304261  390826 main.go:141] libmachine: (ha-086149-m03)     </console>
	I0819 18:03:20.304271  390826 main.go:141] libmachine: (ha-086149-m03)     <rng model='virtio'>
	I0819 18:03:20.304286  390826 main.go:141] libmachine: (ha-086149-m03)       <backend model='random'>/dev/random</backend>
	I0819 18:03:20.304296  390826 main.go:141] libmachine: (ha-086149-m03)     </rng>
	I0819 18:03:20.304308  390826 main.go:141] libmachine: (ha-086149-m03)     
	I0819 18:03:20.304322  390826 main.go:141] libmachine: (ha-086149-m03)     
	I0819 18:03:20.304335  390826 main.go:141] libmachine: (ha-086149-m03)   </devices>
	I0819 18:03:20.304345  390826 main.go:141] libmachine: (ha-086149-m03) </domain>
	I0819 18:03:20.304359  390826 main.go:141] libmachine: (ha-086149-m03) 
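
The libvirt domain definition printed line by line above is rendered from a template before being handed to libvirt. A minimal text/template sketch that produces a similar, trimmed-down definition; the struct fields and the reduced XML are assumptions for illustration, not minikube's actual template:

    package main

    import (
        "os"
        "text/template"
    )

    const domainXML = `<domain type='kvm'>
      <name>{{.Name}}</name>
      <memory unit='MiB'>{{.MemoryMiB}}</memory>
      <vcpu>{{.CPUs}}</vcpu>
      <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
      <devices>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='default' io='threads'/>
          <source file='{{.DiskPath}}'/>
          <target dev='hda' bus='virtio'/>
        </disk>
        <interface type='network'>
          <source network='{{.Network}}'/>
          <model type='virtio'/>
        </interface>
      </devices>
    </domain>
    `

    type domain struct {
        Name      string
        MemoryMiB int
        CPUs      int
        DiskPath  string
        Network   string
    }

    func main() {
        tmpl := template.Must(template.New("domain").Parse(domainXML))
        // Values mirror the logged machine; the disk path is shortened here.
        _ = tmpl.Execute(os.Stdout, domain{
            Name:      "ha-086149-m03",
            MemoryMiB: 2200,
            CPUs:      2,
            DiskPath:  "/path/to/ha-086149-m03.rawdisk",
            Network:   "mk-ha-086149",
        })
    }
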
	I0819 18:03:20.311221  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:ae:a0:91 in network default
	I0819 18:03:20.311840  390826 main.go:141] libmachine: (ha-086149-m03) Ensuring networks are active...
	I0819 18:03:20.311863  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:20.312607  390826 main.go:141] libmachine: (ha-086149-m03) Ensuring network default is active
	I0819 18:03:20.312955  390826 main.go:141] libmachine: (ha-086149-m03) Ensuring network mk-ha-086149 is active
	I0819 18:03:20.313312  390826 main.go:141] libmachine: (ha-086149-m03) Getting domain xml...
	I0819 18:03:20.314122  390826 main.go:141] libmachine: (ha-086149-m03) Creating domain...
	I0819 18:03:21.562949  390826 main.go:141] libmachine: (ha-086149-m03) Waiting to get IP...
	I0819 18:03:21.563827  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:21.564282  390826 main.go:141] libmachine: (ha-086149-m03) DBG | unable to find current IP address of domain ha-086149-m03 in network mk-ha-086149
	I0819 18:03:21.564318  390826 main.go:141] libmachine: (ha-086149-m03) DBG | I0819 18:03:21.564272  391617 retry.go:31] will retry after 287.519385ms: waiting for machine to come up
	I0819 18:03:21.853642  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:21.854188  390826 main.go:141] libmachine: (ha-086149-m03) DBG | unable to find current IP address of domain ha-086149-m03 in network mk-ha-086149
	I0819 18:03:21.854218  390826 main.go:141] libmachine: (ha-086149-m03) DBG | I0819 18:03:21.854115  391617 retry.go:31] will retry after 380.562809ms: waiting for machine to come up
	I0819 18:03:22.236389  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:22.236849  390826 main.go:141] libmachine: (ha-086149-m03) DBG | unable to find current IP address of domain ha-086149-m03 in network mk-ha-086149
	I0819 18:03:22.236877  390826 main.go:141] libmachine: (ha-086149-m03) DBG | I0819 18:03:22.236812  391617 retry.go:31] will retry after 327.555766ms: waiting for machine to come up
	I0819 18:03:22.566254  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:22.566623  390826 main.go:141] libmachine: (ha-086149-m03) DBG | unable to find current IP address of domain ha-086149-m03 in network mk-ha-086149
	I0819 18:03:22.566648  390826 main.go:141] libmachine: (ha-086149-m03) DBG | I0819 18:03:22.566579  391617 retry.go:31] will retry after 411.488107ms: waiting for machine to come up
	I0819 18:03:22.979125  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:22.979687  390826 main.go:141] libmachine: (ha-086149-m03) DBG | unable to find current IP address of domain ha-086149-m03 in network mk-ha-086149
	I0819 18:03:22.979717  390826 main.go:141] libmachine: (ha-086149-m03) DBG | I0819 18:03:22.979605  391617 retry.go:31] will retry after 520.603963ms: waiting for machine to come up
	I0819 18:03:23.502110  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:23.502597  390826 main.go:141] libmachine: (ha-086149-m03) DBG | unable to find current IP address of domain ha-086149-m03 in network mk-ha-086149
	I0819 18:03:23.502620  390826 main.go:141] libmachine: (ha-086149-m03) DBG | I0819 18:03:23.502547  391617 retry.go:31] will retry after 785.663535ms: waiting for machine to come up
	I0819 18:03:24.289488  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:24.289969  390826 main.go:141] libmachine: (ha-086149-m03) DBG | unable to find current IP address of domain ha-086149-m03 in network mk-ha-086149
	I0819 18:03:24.289999  390826 main.go:141] libmachine: (ha-086149-m03) DBG | I0819 18:03:24.289903  391617 retry.go:31] will retry after 1.114679695s: waiting for machine to come up
	I0819 18:03:25.405954  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:25.406298  390826 main.go:141] libmachine: (ha-086149-m03) DBG | unable to find current IP address of domain ha-086149-m03 in network mk-ha-086149
	I0819 18:03:25.406320  390826 main.go:141] libmachine: (ha-086149-m03) DBG | I0819 18:03:25.406252  391617 retry.go:31] will retry after 1.122956034s: waiting for machine to come up
	I0819 18:03:26.530546  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:26.530920  390826 main.go:141] libmachine: (ha-086149-m03) DBG | unable to find current IP address of domain ha-086149-m03 in network mk-ha-086149
	I0819 18:03:26.530945  390826 main.go:141] libmachine: (ha-086149-m03) DBG | I0819 18:03:26.530869  391617 retry.go:31] will retry after 1.212325896s: waiting for machine to come up
	I0819 18:03:27.744699  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:27.745099  390826 main.go:141] libmachine: (ha-086149-m03) DBG | unable to find current IP address of domain ha-086149-m03 in network mk-ha-086149
	I0819 18:03:27.745134  390826 main.go:141] libmachine: (ha-086149-m03) DBG | I0819 18:03:27.745053  391617 retry.go:31] will retry after 1.909860275s: waiting for machine to come up
	I0819 18:03:29.657018  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:29.657535  390826 main.go:141] libmachine: (ha-086149-m03) DBG | unable to find current IP address of domain ha-086149-m03 in network mk-ha-086149
	I0819 18:03:29.657560  390826 main.go:141] libmachine: (ha-086149-m03) DBG | I0819 18:03:29.657483  391617 retry.go:31] will retry after 2.070750747s: waiting for machine to come up
	I0819 18:03:31.729452  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:31.729972  390826 main.go:141] libmachine: (ha-086149-m03) DBG | unable to find current IP address of domain ha-086149-m03 in network mk-ha-086149
	I0819 18:03:31.730001  390826 main.go:141] libmachine: (ha-086149-m03) DBG | I0819 18:03:31.729906  391617 retry.go:31] will retry after 2.499787973s: waiting for machine to come up
	I0819 18:03:34.231619  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:34.232035  390826 main.go:141] libmachine: (ha-086149-m03) DBG | unable to find current IP address of domain ha-086149-m03 in network mk-ha-086149
	I0819 18:03:34.232068  390826 main.go:141] libmachine: (ha-086149-m03) DBG | I0819 18:03:34.231974  391617 retry.go:31] will retry after 3.724609684s: waiting for machine to come up
	I0819 18:03:37.960873  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:37.961342  390826 main.go:141] libmachine: (ha-086149-m03) DBG | unable to find current IP address of domain ha-086149-m03 in network mk-ha-086149
	I0819 18:03:37.961377  390826 main.go:141] libmachine: (ha-086149-m03) DBG | I0819 18:03:37.961291  391617 retry.go:31] will retry after 4.221691155s: waiting for machine to come up
	I0819 18:03:42.184935  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:42.185477  390826 main.go:141] libmachine: (ha-086149-m03) Found IP for machine: 192.168.39.121
	I0819 18:03:42.185514  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has current primary IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:42.185524  390826 main.go:141] libmachine: (ha-086149-m03) Reserving static IP address...
	I0819 18:03:42.186031  390826 main.go:141] libmachine: (ha-086149-m03) DBG | unable to find host DHCP lease matching {name: "ha-086149-m03", mac: "52:54:00:dc:29:16", ip: "192.168.39.121"} in network mk-ha-086149
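
The "will retry after ..." lines above show a jittered, growing delay while polling for the machine's DHCP lease. A minimal sketch of that retry pattern; the delays, timeout, and the placeholder condition are assumptions for illustration:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryUntil polls check with a jittered, roughly exponential backoff until it
    // succeeds or the deadline passes, mirroring the 287ms ... 4.2s progression above.
    func retryUntil(timeout time.Duration, check func() (string, error)) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 300 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := check(); err == nil {
                return ip, nil
            }
            sleep := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            delay = delay * 3 / 2
        }
        return "", errors.New("machine did not report an IP before the deadline")
    }

    func main() {
        ip, err := retryUntil(5*time.Second, func() (string, error) {
            // Placeholder: a real check would read the libvirt network's DHCP
            // leases for the machine's MAC address.
            return "", errors.New("no lease yet")
        })
        fmt.Println(ip, err)
    }
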
	I0819 18:03:42.261896  390826 main.go:141] libmachine: (ha-086149-m03) DBG | Getting to WaitForSSH function...
	I0819 18:03:42.261933  390826 main.go:141] libmachine: (ha-086149-m03) Reserved static IP address: 192.168.39.121
	I0819 18:03:42.261942  390826 main.go:141] libmachine: (ha-086149-m03) Waiting for SSH to be available...
	I0819 18:03:42.264703  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:42.265036  390826 main.go:141] libmachine: (ha-086149-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149
	I0819 18:03:42.265064  390826 main.go:141] libmachine: (ha-086149-m03) DBG | unable to find defined IP address of network mk-ha-086149 interface with MAC address 52:54:00:dc:29:16
	I0819 18:03:42.265234  390826 main.go:141] libmachine: (ha-086149-m03) DBG | Using SSH client type: external
	I0819 18:03:42.265265  390826 main.go:141] libmachine: (ha-086149-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m03/id_rsa (-rw-------)
	I0819 18:03:42.265301  390826 main.go:141] libmachine: (ha-086149-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 18:03:42.265318  390826 main.go:141] libmachine: (ha-086149-m03) DBG | About to run SSH command:
	I0819 18:03:42.265333  390826 main.go:141] libmachine: (ha-086149-m03) DBG | exit 0
	I0819 18:03:42.268920  390826 main.go:141] libmachine: (ha-086149-m03) DBG | SSH cmd err, output: exit status 255: 
	I0819 18:03:42.268942  390826 main.go:141] libmachine: (ha-086149-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0819 18:03:42.268951  390826 main.go:141] libmachine: (ha-086149-m03) DBG | command : exit 0
	I0819 18:03:42.268956  390826 main.go:141] libmachine: (ha-086149-m03) DBG | err     : exit status 255
	I0819 18:03:42.268988  390826 main.go:141] libmachine: (ha-086149-m03) DBG | output  : 
	I0819 18:03:45.269987  390826 main.go:141] libmachine: (ha-086149-m03) DBG | Getting to WaitForSSH function...
	I0819 18:03:45.272426  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:45.272810  390826 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:03:45.272864  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:45.272961  390826 main.go:141] libmachine: (ha-086149-m03) DBG | Using SSH client type: external
	I0819 18:03:45.272995  390826 main.go:141] libmachine: (ha-086149-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m03/id_rsa (-rw-------)
	I0819 18:03:45.273024  390826 main.go:141] libmachine: (ha-086149-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.121 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 18:03:45.273037  390826 main.go:141] libmachine: (ha-086149-m03) DBG | About to run SSH command:
	I0819 18:03:45.273054  390826 main.go:141] libmachine: (ha-086149-m03) DBG | exit 0
	I0819 18:03:45.399822  390826 main.go:141] libmachine: (ha-086149-m03) DBG | SSH cmd err, output: <nil>: 
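
Waiting for SSH above amounts to repeatedly running "exit 0" through an external ssh client until it returns status 0 (the first attempt fails with status 255 while sshd is still coming up). A minimal sketch that shells out to the system ssh binary with the same kind of non-interactive options; the user and address come from the log, the key path is a placeholder:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // sshReachable returns true once a non-interactive "exit 0" over ssh succeeds.
    func sshReachable(user, addr, keyPath string) bool {
        cmd := exec.Command("ssh",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "PasswordAuthentication=no",
            "-i", keyPath,
            fmt.Sprintf("%s@%s", user, addr),
            "exit 0")
        return cmd.Run() == nil // connection failure or a non-zero exit yields an error
    }

    func main() {
        fmt.Println("ssh available:", sshReachable("docker", "192.168.39.121", "/path/to/id_rsa"))
    }
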
	I0819 18:03:45.400094  390826 main.go:141] libmachine: (ha-086149-m03) KVM machine creation complete!
	I0819 18:03:45.400461  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetConfigRaw
	I0819 18:03:45.401263  390826 main.go:141] libmachine: (ha-086149-m03) Calling .DriverName
	I0819 18:03:45.401502  390826 main.go:141] libmachine: (ha-086149-m03) Calling .DriverName
	I0819 18:03:45.401684  390826 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 18:03:45.401702  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetState
	I0819 18:03:45.403056  390826 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 18:03:45.403074  390826 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 18:03:45.403087  390826 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 18:03:45.403100  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHHostname
	I0819 18:03:45.405437  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:45.405814  390826 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:03:45.405918  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHPort
	I0819 18:03:45.405924  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:45.406093  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHKeyPath
	I0819 18:03:45.406253  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHKeyPath
	I0819 18:03:45.406386  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHUsername
	I0819 18:03:45.406565  390826 main.go:141] libmachine: Using SSH client type: native
	I0819 18:03:45.406993  390826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0819 18:03:45.407014  390826 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 18:03:45.507039  390826 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 18:03:45.507061  390826 main.go:141] libmachine: Detecting the provisioner...
	I0819 18:03:45.507069  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHHostname
	I0819 18:03:45.509836  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:45.510167  390826 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:03:45.510202  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:45.510329  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHPort
	I0819 18:03:45.510518  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHKeyPath
	I0819 18:03:45.510702  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHKeyPath
	I0819 18:03:45.510843  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHUsername
	I0819 18:03:45.511049  390826 main.go:141] libmachine: Using SSH client type: native
	I0819 18:03:45.511259  390826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0819 18:03:45.511273  390826 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 18:03:45.612553  390826 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 18:03:45.612627  390826 main.go:141] libmachine: found compatible host: buildroot
	I0819 18:03:45.612636  390826 main.go:141] libmachine: Provisioning with buildroot...
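
Provisioner detection above boils down to reading /etc/os-release over SSH and matching the ID field. A minimal parsing sketch fed with the output captured in the log:

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // parseOSRelease turns KEY=value lines into a map, stripping surrounding quotes.
    func parseOSRelease(contents string) map[string]string {
        fields := map[string]string{}
        sc := bufio.NewScanner(strings.NewReader(contents))
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" || !strings.Contains(line, "=") {
                continue
            }
            kv := strings.SplitN(line, "=", 2)
            fields[kv[0]] = strings.Trim(kv[1], `"`)
        }
        return fields
    }

    func main() {
        release := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
        fields := parseOSRelease(release)
        fmt.Println("detected distro:", fields["ID"]) // "buildroot" selects the buildroot provisioner
    }
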
	I0819 18:03:45.612648  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetMachineName
	I0819 18:03:45.612913  390826 buildroot.go:166] provisioning hostname "ha-086149-m03"
	I0819 18:03:45.612940  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetMachineName
	I0819 18:03:45.613126  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHHostname
	I0819 18:03:45.616510  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:45.616855  390826 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:03:45.616877  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:45.617041  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHPort
	I0819 18:03:45.617258  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHKeyPath
	I0819 18:03:45.617452  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHKeyPath
	I0819 18:03:45.617602  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHUsername
	I0819 18:03:45.617764  390826 main.go:141] libmachine: Using SSH client type: native
	I0819 18:03:45.617953  390826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0819 18:03:45.617968  390826 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-086149-m03 && echo "ha-086149-m03" | sudo tee /etc/hostname
	I0819 18:03:45.737142  390826 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-086149-m03
	
	I0819 18:03:45.737171  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHHostname
	I0819 18:03:45.739860  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:45.740210  390826 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:03:45.740238  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:45.740391  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHPort
	I0819 18:03:45.740585  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHKeyPath
	I0819 18:03:45.740744  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHKeyPath
	I0819 18:03:45.740913  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHUsername
	I0819 18:03:45.741112  390826 main.go:141] libmachine: Using SSH client type: native
	I0819 18:03:45.741291  390826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0819 18:03:45.741307  390826 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-086149-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-086149-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-086149-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 18:03:45.854110  390826 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 18:03:45.854149  390826 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19468-372744/.minikube CaCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19468-372744/.minikube}
	I0819 18:03:45.854177  390826 buildroot.go:174] setting up certificates
	I0819 18:03:45.854191  390826 provision.go:84] configureAuth start
	I0819 18:03:45.854211  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetMachineName
	I0819 18:03:45.854510  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetIP
	I0819 18:03:45.857102  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:45.857533  390826 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:03:45.857565  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:45.857619  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHHostname
	I0819 18:03:45.859546  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:45.859906  390826 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:03:45.859935  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:45.860102  390826 provision.go:143] copyHostCerts
	I0819 18:03:45.860136  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 18:03:45.860178  390826 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem, removing ...
	I0819 18:03:45.860194  390826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 18:03:45.860304  390826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem (1082 bytes)
	I0819 18:03:45.860406  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 18:03:45.860434  390826 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem, removing ...
	I0819 18:03:45.860444  390826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 18:03:45.860484  390826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem (1123 bytes)
	I0819 18:03:45.860554  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 18:03:45.860577  390826 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem, removing ...
	I0819 18:03:45.860585  390826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 18:03:45.860612  390826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem (1675 bytes)
	I0819 18:03:45.860669  390826 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem org=jenkins.ha-086149-m03 san=[127.0.0.1 192.168.39.121 ha-086149-m03 localhost minikube]
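
The "generating server cert" step issues a machine certificate signed by the local CA with the DNS and IP SANs listed above. A minimal crypto/x509 sketch that produces an equivalent certificate; the key size, serial numbers, and 26280h lifetime mirror values in the log, everything else is illustrative rather than minikube's provisioning code:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func check(err error) {
        if err != nil {
            panic(err)
        }
    }

    func main() {
        // Self-signed CA, standing in for the pre-existing ca.pem/ca-key.pem.
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        check(err)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        check(err)
        caCert, err := x509.ParseCertificate(caDER)
        check(err)

        // Server certificate with the SANs recorded in the log.
        srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
        check(err)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-086149-m03"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-086149-m03", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.121")},
        }
        srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        check(err)

        fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
    }
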
	I0819 18:03:46.063456  390826 provision.go:177] copyRemoteCerts
	I0819 18:03:46.063521  390826 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 18:03:46.063548  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHHostname
	I0819 18:03:46.066201  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:46.066553  390826 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:03:46.066592  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:46.066809  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHPort
	I0819 18:03:46.067042  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHKeyPath
	I0819 18:03:46.067205  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHUsername
	I0819 18:03:46.067339  390826 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m03/id_rsa Username:docker}
	I0819 18:03:46.145768  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 18:03:46.145857  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 18:03:46.170315  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 18:03:46.170396  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 18:03:46.195880  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 18:03:46.195969  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 18:03:46.220716  390826 provision.go:87] duration metric: took 366.505975ms to configureAuth
	I0819 18:03:46.220747  390826 buildroot.go:189] setting minikube options for container-runtime
	I0819 18:03:46.221026  390826 config.go:182] Loaded profile config "ha-086149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:03:46.221124  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHHostname
	I0819 18:03:46.223980  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:46.224366  390826 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:03:46.224397  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:46.224551  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHPort
	I0819 18:03:46.224742  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHKeyPath
	I0819 18:03:46.224948  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHKeyPath
	I0819 18:03:46.225092  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHUsername
	I0819 18:03:46.225286  390826 main.go:141] libmachine: Using SSH client type: native
	I0819 18:03:46.225484  390826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0819 18:03:46.225500  390826 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 18:03:46.486402  390826 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 18:03:46.486443  390826 main.go:141] libmachine: Checking connection to Docker...
	I0819 18:03:46.486455  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetURL
	I0819 18:03:46.487831  390826 main.go:141] libmachine: (ha-086149-m03) DBG | Using libvirt version 6000000
	I0819 18:03:46.489937  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:46.490329  390826 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:03:46.490355  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:46.490545  390826 main.go:141] libmachine: Docker is up and running!
	I0819 18:03:46.490562  390826 main.go:141] libmachine: Reticulating splines...
	I0819 18:03:46.490570  390826 client.go:171] duration metric: took 26.727760379s to LocalClient.Create
	I0819 18:03:46.490594  390826 start.go:167] duration metric: took 26.727824625s to libmachine.API.Create "ha-086149"
	I0819 18:03:46.490604  390826 start.go:293] postStartSetup for "ha-086149-m03" (driver="kvm2")
	I0819 18:03:46.490614  390826 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 18:03:46.490640  390826 main.go:141] libmachine: (ha-086149-m03) Calling .DriverName
	I0819 18:03:46.490898  390826 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 18:03:46.490925  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHHostname
	I0819 18:03:46.493180  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:46.493483  390826 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:03:46.493513  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:46.493680  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHPort
	I0819 18:03:46.493889  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHKeyPath
	I0819 18:03:46.494032  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHUsername
	I0819 18:03:46.494164  390826 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m03/id_rsa Username:docker}
	I0819 18:03:46.574090  390826 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 18:03:46.578707  390826 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 18:03:46.578740  390826 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/addons for local assets ...
	I0819 18:03:46.578823  390826 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/files for local assets ...
	I0819 18:03:46.578922  390826 filesync.go:149] local asset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> 3800092.pem in /etc/ssl/certs
	I0819 18:03:46.578936  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> /etc/ssl/certs/3800092.pem
	I0819 18:03:46.579074  390826 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 18:03:46.588838  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 18:03:46.613093  390826 start.go:296] duration metric: took 122.46782ms for postStartSetup
	I0819 18:03:46.613152  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetConfigRaw
	I0819 18:03:46.613789  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetIP
	I0819 18:03:46.616297  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:46.616623  390826 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:03:46.616654  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:46.616956  390826 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/config.json ...
	I0819 18:03:46.617168  390826 start.go:128] duration metric: took 26.873605845s to createHost
	I0819 18:03:46.617195  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHHostname
	I0819 18:03:46.619322  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:46.619667  390826 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:03:46.619714  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:46.619818  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHPort
	I0819 18:03:46.619992  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHKeyPath
	I0819 18:03:46.620150  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHKeyPath
	I0819 18:03:46.620300  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHUsername
	I0819 18:03:46.620482  390826 main.go:141] libmachine: Using SSH client type: native
	I0819 18:03:46.620675  390826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0819 18:03:46.620696  390826 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 18:03:46.724518  390826 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724090626.701943273
	
	I0819 18:03:46.724543  390826 fix.go:216] guest clock: 1724090626.701943273
	I0819 18:03:46.724553  390826 fix.go:229] Guest: 2024-08-19 18:03:46.701943273 +0000 UTC Remote: 2024-08-19 18:03:46.61718268 +0000 UTC m=+152.411079094 (delta=84.760593ms)
	I0819 18:03:46.724574  390826 fix.go:200] guest clock delta is within tolerance: 84.760593ms
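(The lines above compare the guest clock, read with `date +%s.%N`, against the local clock and accept the ~84.76ms skew. Below is a minimal Go sketch of that comparison; the one-second tolerance and the helper name clockDeltaOK are assumptions for illustration, not minikube's actual fix.go logic.)

// clockDeltaOK is a sketch of the guest-clock check logged above: parse the
// guest's `date +%s.%N` output and verify the skew against the local clock
// stays inside an assumed tolerance.
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

func clockDeltaOK(guestDate string, local time.Time, tolerance time.Duration) (time.Duration, bool, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestDate), 64)
	if err != nil {
		return 0, false, fmt.Errorf("parsing guest clock %q: %w", guestDate, err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := guest.Sub(local)
	return delta, math.Abs(float64(delta)) <= float64(tolerance), nil
}

func main() {
	// Values taken from the log lines above; the 1s tolerance is an assumption.
	delta, ok, err := clockDeltaOK("1724090626.701943273", time.Unix(0, 1724090626617182680), time.Second)
	fmt.Println(delta, ok, err)
}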
	I0819 18:03:46.724581  390826 start.go:83] releasing machines lock for "ha-086149-m03", held for 26.981173416s
	I0819 18:03:46.724610  390826 main.go:141] libmachine: (ha-086149-m03) Calling .DriverName
	I0819 18:03:46.724908  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetIP
	I0819 18:03:46.727738  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:46.728140  390826 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:03:46.728175  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:46.730672  390826 out.go:177] * Found network options:
	I0819 18:03:46.731980  390826 out.go:177]   - NO_PROXY=192.168.39.249,192.168.39.167
	W0819 18:03:46.733217  390826 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 18:03:46.733257  390826 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 18:03:46.733282  390826 main.go:141] libmachine: (ha-086149-m03) Calling .DriverName
	I0819 18:03:46.733975  390826 main.go:141] libmachine: (ha-086149-m03) Calling .DriverName
	I0819 18:03:46.734206  390826 main.go:141] libmachine: (ha-086149-m03) Calling .DriverName
	I0819 18:03:46.734324  390826 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 18:03:46.734366  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHHostname
	W0819 18:03:46.734442  390826 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 18:03:46.734468  390826 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 18:03:46.734544  390826 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 18:03:46.734569  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHHostname
	I0819 18:03:46.737322  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:46.737704  390826 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:03:46.737739  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:46.737759  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:46.737855  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHPort
	I0819 18:03:46.738053  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHKeyPath
	I0819 18:03:46.738151  390826 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:03:46.738176  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:46.738260  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHUsername
	I0819 18:03:46.738362  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHPort
	I0819 18:03:46.738441  390826 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m03/id_rsa Username:docker}
	I0819 18:03:46.738514  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHKeyPath
	I0819 18:03:46.738666  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHUsername
	I0819 18:03:46.738792  390826 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m03/id_rsa Username:docker}
	I0819 18:03:46.968222  390826 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 18:03:46.975455  390826 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 18:03:46.975532  390826 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 18:03:46.994322  390826 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 18:03:46.994347  390826 start.go:495] detecting cgroup driver to use...
	I0819 18:03:46.994414  390826 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 18:03:47.011730  390826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 18:03:47.026577  390826 docker.go:217] disabling cri-docker service (if available) ...
	I0819 18:03:47.026633  390826 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 18:03:47.041533  390826 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 18:03:47.056162  390826 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 18:03:47.167389  390826 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 18:03:47.313793  390826 docker.go:233] disabling docker service ...
	I0819 18:03:47.313873  390826 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 18:03:47.328361  390826 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 18:03:47.342498  390826 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 18:03:47.493438  390826 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 18:03:47.610714  390826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 18:03:47.626461  390826 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 18:03:47.647036  390826 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 18:03:47.647094  390826 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:03:47.659477  390826 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 18:03:47.659549  390826 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:03:47.670849  390826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:03:47.681739  390826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:03:47.692596  390826 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 18:03:47.704404  390826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:03:47.715964  390826 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:03:47.734064  390826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:03:47.745411  390826 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 18:03:47.755479  390826 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 18:03:47.755547  390826 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 18:03:47.780800  390826 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
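(The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and switch cri-o to the cgroupfs cgroup manager before the daemon-reload and crio restart that follow. The Go sketch below reproduces just those two edits as an illustration; it is not minikube's crio.go, and the function name is illustrative.)

// configureCrioConf is a rough sketch of the sed-based edits logged above:
// it rewrites pause_image and cgroup_manager in a cri-o drop-in file.
package main

import (
	"os"
	"regexp"
)

func configureCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "`+pauseImage+`"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "`+cgroupManager+`"`))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	// Same values the log applies on the new node.
	_ = configureCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.10", "cgroupfs")
}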
	I0819 18:03:47.793377  390826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:03:47.933910  390826 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 18:03:48.078348  390826 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 18:03:48.078455  390826 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 18:03:48.083459  390826 start.go:563] Will wait 60s for crictl version
	I0819 18:03:48.083519  390826 ssh_runner.go:195] Run: which crictl
	I0819 18:03:48.087505  390826 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 18:03:48.135923  390826 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 18:03:48.136006  390826 ssh_runner.go:195] Run: crio --version
	I0819 18:03:48.165703  390826 ssh_runner.go:195] Run: crio --version
	I0819 18:03:48.199600  390826 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 18:03:48.200917  390826 out.go:177]   - env NO_PROXY=192.168.39.249
	I0819 18:03:48.202367  390826 out.go:177]   - env NO_PROXY=192.168.39.249,192.168.39.167
	I0819 18:03:48.203631  390826 main.go:141] libmachine: (ha-086149-m03) Calling .GetIP
	I0819 18:03:48.206345  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:48.206716  390826 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:03:48.206749  390826 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:03:48.206952  390826 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 18:03:48.211794  390826 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 18:03:48.224853  390826 mustload.go:65] Loading cluster: ha-086149
	I0819 18:03:48.225134  390826 config.go:182] Loaded profile config "ha-086149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:03:48.225493  390826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:03:48.225551  390826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:03:48.241022  390826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34715
	I0819 18:03:48.241501  390826 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:03:48.241979  390826 main.go:141] libmachine: Using API Version  1
	I0819 18:03:48.241998  390826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:03:48.242413  390826 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:03:48.242604  390826 main.go:141] libmachine: (ha-086149) Calling .GetState
	I0819 18:03:48.244144  390826 host.go:66] Checking if "ha-086149" exists ...
	I0819 18:03:48.244541  390826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:03:48.244585  390826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:03:48.259491  390826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46449
	I0819 18:03:48.260161  390826 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:03:48.260668  390826 main.go:141] libmachine: Using API Version  1
	I0819 18:03:48.260695  390826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:03:48.261068  390826 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:03:48.261282  390826 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:03:48.261480  390826 certs.go:68] Setting up /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149 for IP: 192.168.39.121
	I0819 18:03:48.261491  390826 certs.go:194] generating shared ca certs ...
	I0819 18:03:48.261509  390826 certs.go:226] acquiring lock for ca certs: {Name:mk639e03f593e0bccac045f6e9f5ba3b96cc81e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:03:48.261630  390826 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key
	I0819 18:03:48.261673  390826 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key
	I0819 18:03:48.261682  390826 certs.go:256] generating profile certs ...
	I0819 18:03:48.261752  390826 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/client.key
	I0819 18:03:48.261775  390826 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key.e2003681
	I0819 18:03:48.261790  390826 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt.e2003681 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.249 192.168.39.167 192.168.39.121 192.168.39.254]
	I0819 18:03:48.530583  390826 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt.e2003681 ...
	I0819 18:03:48.530617  390826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt.e2003681: {Name:mk6e3f1430e8073774c0e837d2d1e72b4e3b6cd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:03:48.530786  390826 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key.e2003681 ...
	I0819 18:03:48.530801  390826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key.e2003681: {Name:mk5c3eff97ebe025fa66882eab16f0ed1dc1cd31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:03:48.530873  390826 certs.go:381] copying /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt.e2003681 -> /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt
	I0819 18:03:48.531012  390826 certs.go:385] copying /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key.e2003681 -> /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key
	I0819 18:03:48.531151  390826 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/proxy-client.key
	I0819 18:03:48.531169  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 18:03:48.531183  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 18:03:48.531196  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 18:03:48.531209  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 18:03:48.531221  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 18:03:48.531234  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 18:03:48.531249  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 18:03:48.531263  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 18:03:48.531311  390826 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem (1338 bytes)
	W0819 18:03:48.531339  390826 certs.go:480] ignoring /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009_empty.pem, impossibly tiny 0 bytes
	I0819 18:03:48.531349  390826 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 18:03:48.531368  390826 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem (1082 bytes)
	I0819 18:03:48.531389  390826 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem (1123 bytes)
	I0819 18:03:48.531409  390826 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem (1675 bytes)
	I0819 18:03:48.531449  390826 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 18:03:48.531480  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> /usr/share/ca-certificates/3800092.pem
	I0819 18:03:48.531494  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:03:48.531512  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem -> /usr/share/ca-certificates/380009.pem
	I0819 18:03:48.531547  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:03:48.535035  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:03:48.535535  390826 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:03:48.535560  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:03:48.535798  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:03:48.536047  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:03:48.536234  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:03:48.536390  390826 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/id_rsa Username:docker}
	I0819 18:03:48.608114  390826 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0819 18:03:48.613669  390826 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0819 18:03:48.625657  390826 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0819 18:03:48.629793  390826 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0819 18:03:48.640760  390826 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0819 18:03:48.644960  390826 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0819 18:03:48.656070  390826 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0819 18:03:48.662684  390826 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0819 18:03:48.674212  390826 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0819 18:03:48.678812  390826 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0819 18:03:48.690281  390826 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0819 18:03:48.694691  390826 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0819 18:03:48.705848  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 18:03:48.734198  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 18:03:48.758895  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 18:03:48.785766  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 18:03:48.810763  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0819 18:03:48.835521  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 18:03:48.862239  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 18:03:48.887336  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 18:03:48.913014  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /usr/share/ca-certificates/3800092.pem (1708 bytes)
	I0819 18:03:48.939494  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 18:03:48.966050  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem --> /usr/share/ca-certificates/380009.pem (1338 bytes)
	I0819 18:03:48.992403  390826 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0819 18:03:49.010524  390826 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0819 18:03:49.028443  390826 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0819 18:03:49.046238  390826 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0819 18:03:49.064239  390826 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0819 18:03:49.083179  390826 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0819 18:03:49.100385  390826 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0819 18:03:49.118509  390826 ssh_runner.go:195] Run: openssl version
	I0819 18:03:49.124644  390826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3800092.pem && ln -fs /usr/share/ca-certificates/3800092.pem /etc/ssl/certs/3800092.pem"
	I0819 18:03:49.135796  390826 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3800092.pem
	I0819 18:03:49.140415  390826 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:56 /usr/share/ca-certificates/3800092.pem
	I0819 18:03:49.140488  390826 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3800092.pem
	I0819 18:03:49.146811  390826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3800092.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 18:03:49.159207  390826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 18:03:49.171214  390826 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:03:49.176781  390826 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:03:49.176860  390826 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:03:49.182907  390826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 18:03:49.194856  390826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/380009.pem && ln -fs /usr/share/ca-certificates/380009.pem /etc/ssl/certs/380009.pem"
	I0819 18:03:49.207429  390826 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/380009.pem
	I0819 18:03:49.212225  390826 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:56 /usr/share/ca-certificates/380009.pem
	I0819 18:03:49.212307  390826 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/380009.pem
	I0819 18:03:49.218334  390826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/380009.pem /etc/ssl/certs/51391683.0"
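(Each certificate above is linked into /etc/ssl/certs under its own name and again under its openssl subject hash, which is where the 3ec20f2e.0, b5213941.0 and 51391683.0 link names come from. A rough Go sketch of one such install step follows; the helper name linkCACert is illustrative, and the only external command used is the same `openssl x509 -hash -noout -in` invocation visible in the log.)

// linkCACert sketches the cert-install steps logged above: symlink an
// installed PEM into /etc/ssl/certs by name and by openssl subject hash.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCACert(pemPath string) error {
	name := filepath.Join("/etc/ssl/certs", filepath.Base(pemPath))
	_ = os.Remove(name) // mimic `ln -fs`: replace any stale link
	if err := os.Symlink(pemPath, name); err != nil {
		return err
	}
	// Same command the log runs to derive the /etc/ssl/certs/<hash>.0 link name.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hashLink := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(hashLink)
	return os.Symlink(name, hashLink)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}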
	I0819 18:03:49.229746  390826 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 18:03:49.234052  390826 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 18:03:49.234124  390826 kubeadm.go:934] updating node {m03 192.168.39.121 8443 v1.31.0 crio true true} ...
	I0819 18:03:49.234234  390826 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-086149-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.121
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-086149 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 18:03:49.234272  390826 kube-vip.go:115] generating kube-vip config ...
	I0819 18:03:49.234320  390826 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 18:03:49.252054  390826 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 18:03:49.252251  390826 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
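(The manifest above is rendered from a template parameterized on the VIP address, port and image, and is written a few lines below as /etc/kubernetes/manifests/kube-vip.yaml. The Go sketch here shows that render step with a heavily abbreviated template of my own; it is not minikube's kube-vip.go template.)

// renderKubeVIP sketches the "generating kube-vip config" step above: fill a
// trimmed-down static-pod template with the VIP, port and image.
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

const podTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args: ["manager"]
    env:
    - name: port
      value: "{{ .Port }}"
    - name: address
      value: {{ .VIP }}
    image: {{ .Image }}
    name: kube-vip
  hostNetwork: true
`

func renderKubeVIP(vip string, port int, image string) (string, error) {
	t, err := template.New("kube-vip").Parse(podTmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	err = t.Execute(&buf, struct {
		VIP   string
		Port  int
		Image string
	}{vip, port, image})
	return buf.String(), err
}

func main() {
	yaml, _ := renderKubeVIP("192.168.39.254", 8443, "ghcr.io/kube-vip/kube-vip:v0.8.0")
	fmt.Print(yaml)
}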
	I0819 18:03:49.252340  390826 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 18:03:49.265031  390826 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0819 18:03:49.265106  390826 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0819 18:03:49.276590  390826 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0819 18:03:49.276599  390826 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0819 18:03:49.276630  390826 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0819 18:03:49.276667  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 18:03:49.276680  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 18:03:49.276653  390826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:03:49.276757  390826 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 18:03:49.276758  390826 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 18:03:49.293522  390826 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0819 18:03:49.293549  390826 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0819 18:03:49.293557  390826 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 18:03:49.293571  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0819 18:03:49.293576  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0819 18:03:49.293643  390826 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 18:03:49.322684  390826 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0819 18:03:49.322738  390826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0819 18:03:50.233307  390826 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0819 18:03:50.244938  390826 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0819 18:03:50.263266  390826 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 18:03:50.282149  390826 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0819 18:03:50.300705  390826 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0819 18:03:50.304802  390826 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 18:03:50.319084  390826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:03:50.459263  390826 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 18:03:50.478155  390826 host.go:66] Checking if "ha-086149" exists ...
	I0819 18:03:50.478705  390826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:03:50.478766  390826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:03:50.495371  390826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36785
	I0819 18:03:50.495861  390826 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:03:50.496418  390826 main.go:141] libmachine: Using API Version  1
	I0819 18:03:50.496447  390826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:03:50.496782  390826 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:03:50.497071  390826 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:03:50.497222  390826 start.go:317] joinCluster: &{Name:ha-086149 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-086149 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.167 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.121 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:03:50.497453  390826 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0819 18:03:50.497477  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:03:50.500909  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:03:50.501506  390826 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:03:50.501545  390826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:03:50.501750  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:03:50.501950  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:03:50.502104  390826 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:03:50.502278  390826 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/id_rsa Username:docker}
	I0819 18:03:50.646804  390826 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.121 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 18:03:50.646866  390826 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 6x5lez.k6vwxltnwheu1hpl --discovery-token-ca-cert-hash sha256:3fcbd90565c5acbc36a47b2db682cb22dce9b172c9bf3af21e506ebb67608039 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-086149-m03 --control-plane --apiserver-advertise-address=192.168.39.121 --apiserver-bind-port=8443"
	I0819 18:04:12.687591  390826 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 6x5lez.k6vwxltnwheu1hpl --discovery-token-ca-cert-hash sha256:3fcbd90565c5acbc36a47b2db682cb22dce9b172c9bf3af21e506ebb67608039 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-086149-m03 --control-plane --apiserver-advertise-address=192.168.39.121 --apiserver-bind-port=8443": (22.040692153s)
	I0819 18:04:12.687633  390826 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0819 18:04:13.312539  390826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-086149-m03 minikube.k8s.io/updated_at=2024_08_19T18_04_13_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411 minikube.k8s.io/name=ha-086149 minikube.k8s.io/primary=false
	I0819 18:04:13.457097  390826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-086149-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0819 18:04:13.569807  390826 start.go:319] duration metric: took 23.072581927s to joinCluster
	I0819 18:04:13.569882  390826 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.121 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 18:04:13.570288  390826 config.go:182] Loaded profile config "ha-086149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:04:13.572111  390826 out.go:177] * Verifying Kubernetes components...
	I0819 18:04:13.573929  390826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:04:13.828073  390826 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 18:04:13.844916  390826 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 18:04:13.845293  390826 kapi.go:59] client config for ha-086149: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/client.crt", KeyFile:"/home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/client.key", CAFile:"/home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f18d20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0819 18:04:13.845381  390826 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.249:8443
	I0819 18:04:13.845704  390826 node_ready.go:35] waiting up to 6m0s for node "ha-086149-m03" to be "Ready" ...
	I0819 18:04:13.845814  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:13.845826  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:13.845838  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:13.845850  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:13.849539  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:14.346905  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:14.346934  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:14.346947  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:14.346952  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:14.350340  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:14.846036  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:14.846068  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:14.846079  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:14.846084  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:14.849971  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:15.346541  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:15.346565  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:15.346574  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:15.346578  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:15.350495  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:15.846729  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:15.846753  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:15.846762  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:15.846767  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:15.850076  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:15.850601  390826 node_ready.go:53] node "ha-086149-m03" has status "Ready":"False"
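(The repeated GET requests above and below poll /api/v1/nodes/ha-086149-m03 roughly every half second until the node reports Ready. A minimal client-go sketch of such a wait loop follows; the helper name waitNodeReady and the hard-coded kubeconfig path are illustrative, not minikube's node_ready.go.)

// waitNodeReady is a sketch of the readiness polling logged above: fetch the
// node every 500ms and return once its Ready condition is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(ctx context.Context, c kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return context.DeadlineExceeded
}

func main() {
	// kubeconfig path taken from the log; the wait helper itself is the point.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19468-372744/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitNodeReady(context.Background(), client, "ha-086149-m03", 6*time.Minute))
}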
	I0819 18:04:16.346369  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:16.346397  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:16.346408  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:16.346414  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:16.349512  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:16.846583  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:16.846602  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:16.846611  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:16.846615  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:16.850015  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:17.346969  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:17.346998  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:17.347017  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:17.347026  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:17.350830  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:17.846710  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:17.846733  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:17.846742  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:17.846748  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:17.853356  390826 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0819 18:04:17.853948  390826 node_ready.go:53] node "ha-086149-m03" has status "Ready":"False"
	I0819 18:04:18.346914  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:18.346939  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:18.346948  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:18.346951  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:18.350629  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:18.846873  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:18.846901  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:18.846911  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:18.846916  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:18.850632  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:19.346761  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:19.346789  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:19.346803  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:19.346809  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:19.350515  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:19.846438  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:19.846462  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:19.846471  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:19.846475  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:19.850219  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:20.346791  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:20.346818  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:20.346827  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:20.346831  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:20.351232  390826 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 18:04:20.351910  390826 node_ready.go:53] node "ha-086149-m03" has status "Ready":"False"
	I0819 18:04:20.846718  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:20.846742  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:20.846751  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:20.846754  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:20.850708  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:21.346270  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:21.346306  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:21.346319  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:21.346325  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:21.350140  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:21.846014  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:21.846042  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:21.846055  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:21.846062  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:21.849821  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:22.346851  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:22.346874  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:22.346890  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:22.346896  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:22.350181  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:22.846211  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:22.846234  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:22.846244  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:22.846248  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:22.850284  390826 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 18:04:22.851170  390826 node_ready.go:53] node "ha-086149-m03" has status "Ready":"False"
	I0819 18:04:23.346555  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:23.346581  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:23.346591  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:23.346596  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:23.350737  390826 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 18:04:23.846961  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:23.846985  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:23.846993  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:23.846996  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:23.850329  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:24.346793  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:24.346823  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:24.346834  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:24.346840  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:24.350007  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:24.846921  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:24.846944  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:24.846952  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:24.846956  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:24.850374  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:25.346991  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:25.347016  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:25.347027  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:25.347034  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:25.350611  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:25.351307  390826 node_ready.go:53] node "ha-086149-m03" has status "Ready":"False"
	I0819 18:04:25.846064  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:25.846088  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:25.846096  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:25.846100  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:25.849560  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:26.346018  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:26.346042  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:26.346051  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:26.346056  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:26.349570  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:26.846165  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:26.846192  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:26.846201  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:26.846204  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:26.849609  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:27.346757  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:27.346784  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:27.346795  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:27.346801  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:27.350244  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:27.846196  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:27.846226  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:27.846238  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:27.846246  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:27.854340  390826 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0819 18:04:27.854975  390826 node_ready.go:53] node "ha-086149-m03" has status "Ready":"False"
	I0819 18:04:28.346127  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:28.346152  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:28.346161  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:28.346165  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:28.349629  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:28.846420  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:28.846448  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:28.846458  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:28.846464  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:28.849925  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:29.345989  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:29.346015  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:29.346023  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:29.346027  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:29.349422  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:29.846338  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:29.846361  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:29.846369  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:29.846373  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:29.849856  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:30.346240  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:30.346265  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:30.346276  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:30.346283  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:30.349830  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:30.350507  390826 node_ready.go:53] node "ha-086149-m03" has status "Ready":"False"
	I0819 18:04:30.846234  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:30.846260  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:30.846269  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:30.846274  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:30.849946  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:31.346467  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:31.346492  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:31.346501  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:31.346506  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:31.350232  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:31.846155  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:31.846181  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:31.846190  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:31.846195  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:31.849420  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:32.346432  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:32.346455  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:32.346464  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:32.346469  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:32.350283  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:32.351930  390826 node_ready.go:49] node "ha-086149-m03" has status "Ready":"True"
	I0819 18:04:32.351973  390826 node_ready.go:38] duration metric: took 18.506247273s for node "ha-086149-m03" to be "Ready" ...
	I0819 18:04:32.351987  390826 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 18:04:32.352088  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0819 18:04:32.352101  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:32.352112  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:32.352118  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:32.359889  390826 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0819 18:04:32.366650  390826 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-8fjpd" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:32.366736  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-8fjpd
	I0819 18:04:32.366744  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:32.366752  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:32.366755  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:32.369474  390826 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:04:32.369992  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149
	I0819 18:04:32.370007  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:32.370015  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:32.370018  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:32.372990  390826 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:04:32.373543  390826 pod_ready.go:93] pod "coredns-6f6b679f8f-8fjpd" in "kube-system" namespace has status "Ready":"True"
	I0819 18:04:32.373566  390826 pod_ready.go:82] duration metric: took 6.888361ms for pod "coredns-6f6b679f8f-8fjpd" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:32.373579  390826 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-p65cb" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:32.373647  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-p65cb
	I0819 18:04:32.373658  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:32.373667  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:32.373687  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:32.376325  390826 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:04:32.376857  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149
	I0819 18:04:32.376872  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:32.376880  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:32.376884  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:32.380110  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:32.381042  390826 pod_ready.go:93] pod "coredns-6f6b679f8f-p65cb" in "kube-system" namespace has status "Ready":"True"
	I0819 18:04:32.381060  390826 pod_ready.go:82] duration metric: took 7.473792ms for pod "coredns-6f6b679f8f-p65cb" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:32.381070  390826 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-086149" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:32.381114  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-086149
	I0819 18:04:32.381122  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:32.381140  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:32.381147  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:32.384359  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:32.385039  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149
	I0819 18:04:32.385054  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:32.385063  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:32.385070  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:32.387506  390826 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:04:32.388151  390826 pod_ready.go:93] pod "etcd-ha-086149" in "kube-system" namespace has status "Ready":"True"
	I0819 18:04:32.388168  390826 pod_ready.go:82] duration metric: took 7.092714ms for pod "etcd-ha-086149" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:32.388177  390826 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-086149-m02" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:32.388218  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-086149-m02
	I0819 18:04:32.388226  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:32.388233  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:32.388238  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:32.390613  390826 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:04:32.391213  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:04:32.391226  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:32.391232  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:32.391238  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:32.393364  390826 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:04:32.393931  390826 pod_ready.go:93] pod "etcd-ha-086149-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 18:04:32.393948  390826 pod_ready.go:82] duration metric: took 5.765365ms for pod "etcd-ha-086149-m02" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:32.393959  390826 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-086149-m03" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:32.546816  390826 request.go:632] Waited for 152.771522ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-086149-m03
	I0819 18:04:32.546893  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-086149-m03
	I0819 18:04:32.546903  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:32.546918  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:32.546928  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:32.550551  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:32.746678  390826 request.go:632] Waited for 195.290084ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:32.746738  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:32.746746  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:32.746764  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:32.746773  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:32.750195  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:32.750639  390826 pod_ready.go:93] pod "etcd-ha-086149-m03" in "kube-system" namespace has status "Ready":"True"
	I0819 18:04:32.750656  390826 pod_ready.go:82] duration metric: took 356.689273ms for pod "etcd-ha-086149-m03" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:32.750674  390826 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-086149" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:32.946849  390826 request.go:632] Waited for 196.085092ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-086149
	I0819 18:04:32.946918  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-086149
	I0819 18:04:32.946924  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:32.946931  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:32.946936  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:32.950468  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:33.147488  390826 request.go:632] Waited for 196.367007ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-086149
	I0819 18:04:33.147562  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149
	I0819 18:04:33.147567  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:33.147575  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:33.147581  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:33.150962  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:33.151882  390826 pod_ready.go:93] pod "kube-apiserver-ha-086149" in "kube-system" namespace has status "Ready":"True"
	I0819 18:04:33.151905  390826 pod_ready.go:82] duration metric: took 401.22217ms for pod "kube-apiserver-ha-086149" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:33.151917  390826 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-086149-m02" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:33.346707  390826 request.go:632] Waited for 194.702075ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-086149-m02
	I0819 18:04:33.346796  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-086149-m02
	I0819 18:04:33.346808  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:33.346817  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:33.346825  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:33.350430  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:33.546596  390826 request.go:632] Waited for 195.286829ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:04:33.546683  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:04:33.546692  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:33.546700  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:33.546705  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:33.551049  390826 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 18:04:33.551746  390826 pod_ready.go:93] pod "kube-apiserver-ha-086149-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 18:04:33.551777  390826 pod_ready.go:82] duration metric: took 399.852789ms for pod "kube-apiserver-ha-086149-m02" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:33.551791  390826 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-086149-m03" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:33.746717  390826 request.go:632] Waited for 194.821367ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-086149-m03
	I0819 18:04:33.746777  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-086149-m03
	I0819 18:04:33.746782  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:33.746789  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:33.746796  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:33.750604  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:33.947221  390826 request.go:632] Waited for 195.38286ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:33.947304  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:33.947315  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:33.947329  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:33.947341  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:33.950842  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:33.951897  390826 pod_ready.go:93] pod "kube-apiserver-ha-086149-m03" in "kube-system" namespace has status "Ready":"True"
	I0819 18:04:33.951917  390826 pod_ready.go:82] duration metric: took 400.118494ms for pod "kube-apiserver-ha-086149-m03" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:33.951927  390826 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-086149" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:34.147020  390826 request.go:632] Waited for 194.980048ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-086149
	I0819 18:04:34.147083  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-086149
	I0819 18:04:34.147090  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:34.147098  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:34.147102  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:34.150960  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:34.347360  390826 request.go:632] Waited for 195.328364ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-086149
	I0819 18:04:34.347446  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149
	I0819 18:04:34.347457  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:34.347470  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:34.347480  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:34.351092  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:34.351857  390826 pod_ready.go:93] pod "kube-controller-manager-ha-086149" in "kube-system" namespace has status "Ready":"True"
	I0819 18:04:34.351887  390826 pod_ready.go:82] duration metric: took 399.95211ms for pod "kube-controller-manager-ha-086149" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:34.351903  390826 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-086149-m02" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:34.547332  390826 request.go:632] Waited for 195.247162ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-086149-m02
	I0819 18:04:34.547414  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-086149-m02
	I0819 18:04:34.547426  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:34.547440  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:34.547448  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:34.550597  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:34.746892  390826 request.go:632] Waited for 195.376173ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:04:34.746979  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:04:34.746988  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:34.746997  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:34.747006  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:34.750140  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:34.750824  390826 pod_ready.go:93] pod "kube-controller-manager-ha-086149-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 18:04:34.750843  390826 pod_ready.go:82] duration metric: took 398.929687ms for pod "kube-controller-manager-ha-086149-m02" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:34.750859  390826 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-086149-m03" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:34.947372  390826 request.go:632] Waited for 196.431945ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-086149-m03
	I0819 18:04:34.947437  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-086149-m03
	I0819 18:04:34.947442  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:34.947450  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:34.947455  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:34.951173  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:35.146575  390826 request.go:632] Waited for 194.306794ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:35.146642  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:35.146650  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:35.146660  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:35.146669  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:35.149906  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:35.150538  390826 pod_ready.go:93] pod "kube-controller-manager-ha-086149-m03" in "kube-system" namespace has status "Ready":"True"
	I0819 18:04:35.150557  390826 pod_ready.go:82] duration metric: took 399.692281ms for pod "kube-controller-manager-ha-086149-m03" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:35.150568  390826 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8snb5" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:35.347236  390826 request.go:632] Waited for 196.586465ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8snb5
	I0819 18:04:35.347302  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8snb5
	I0819 18:04:35.347307  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:35.347316  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:35.347319  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:35.350883  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:35.547128  390826 request.go:632] Waited for 195.353155ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:35.547188  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:35.547193  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:35.547201  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:35.547207  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:35.550473  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:35.551110  390826 pod_ready.go:93] pod "kube-proxy-8snb5" in "kube-system" namespace has status "Ready":"True"
	I0819 18:04:35.551129  390826 pod_ready.go:82] duration metric: took 400.555696ms for pod "kube-proxy-8snb5" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:35.551141  390826 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fwkf2" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:35.747312  390826 request.go:632] Waited for 196.091883ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fwkf2
	I0819 18:04:35.747404  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fwkf2
	I0819 18:04:35.747410  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:35.747418  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:35.747427  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:35.751161  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:35.946854  390826 request.go:632] Waited for 194.274206ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-086149
	I0819 18:04:35.946924  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149
	I0819 18:04:35.946930  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:35.946940  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:35.946950  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:35.949959  390826 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:04:35.950784  390826 pod_ready.go:93] pod "kube-proxy-fwkf2" in "kube-system" namespace has status "Ready":"True"
	I0819 18:04:35.950803  390826 pod_ready.go:82] duration metric: took 399.650676ms for pod "kube-proxy-fwkf2" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:35.950814  390826 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vx94r" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:36.146946  390826 request.go:632] Waited for 196.043967ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vx94r
	I0819 18:04:36.147019  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vx94r
	I0819 18:04:36.147025  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:36.147033  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:36.147038  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:36.150726  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:36.346845  390826 request.go:632] Waited for 195.38793ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:04:36.346912  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:04:36.346918  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:36.346926  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:36.346930  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:36.350328  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:36.351045  390826 pod_ready.go:93] pod "kube-proxy-vx94r" in "kube-system" namespace has status "Ready":"True"
	I0819 18:04:36.351071  390826 pod_ready.go:82] duration metric: took 400.249518ms for pod "kube-proxy-vx94r" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:36.351085  390826 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-086149" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:36.547228  390826 request.go:632] Waited for 196.042508ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-086149
	I0819 18:04:36.547298  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-086149
	I0819 18:04:36.547303  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:36.547316  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:36.547320  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:36.551158  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:36.747250  390826 request.go:632] Waited for 195.383213ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-086149
	I0819 18:04:36.747325  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149
	I0819 18:04:36.747333  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:36.747342  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:36.747371  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:36.750310  390826 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:04:36.750994  390826 pod_ready.go:93] pod "kube-scheduler-ha-086149" in "kube-system" namespace has status "Ready":"True"
	I0819 18:04:36.751023  390826 pod_ready.go:82] duration metric: took 399.92967ms for pod "kube-scheduler-ha-086149" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:36.751039  390826 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-086149-m02" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:36.946962  390826 request.go:632] Waited for 195.825478ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-086149-m02
	I0819 18:04:36.947043  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-086149-m02
	I0819 18:04:36.947048  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:36.947056  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:36.947061  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:36.950479  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:37.146460  390826 request.go:632] Waited for 195.287394ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:04:37.146546  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m02
	I0819 18:04:37.146552  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:37.146559  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:37.146566  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:37.150208  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:37.151006  390826 pod_ready.go:93] pod "kube-scheduler-ha-086149-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 18:04:37.151027  390826 pod_ready.go:82] duration metric: took 399.979634ms for pod "kube-scheduler-ha-086149-m02" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:37.151037  390826 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-086149-m03" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:37.347103  390826 request.go:632] Waited for 195.969715ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-086149-m03
	I0819 18:04:37.347198  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-086149-m03
	I0819 18:04:37.347215  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:37.347228  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:37.347237  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:37.350608  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:37.547132  390826 request.go:632] Waited for 195.865595ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:37.547206  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-086149-m03
	I0819 18:04:37.547215  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:37.547232  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:37.547241  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:37.551223  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:37.551989  390826 pod_ready.go:93] pod "kube-scheduler-ha-086149-m03" in "kube-system" namespace has status "Ready":"True"
	I0819 18:04:37.552010  390826 pod_ready.go:82] duration metric: took 400.966575ms for pod "kube-scheduler-ha-086149-m03" in "kube-system" namespace to be "Ready" ...
	I0819 18:04:37.552022  390826 pod_ready.go:39] duration metric: took 5.200017437s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 18:04:37.552038  390826 api_server.go:52] waiting for apiserver process to appear ...
	I0819 18:04:37.552091  390826 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:04:37.573907  390826 api_server.go:72] duration metric: took 24.003963962s to wait for apiserver process to appear ...
	I0819 18:04:37.573952  390826 api_server.go:88] waiting for apiserver healthz status ...
	I0819 18:04:37.573979  390826 api_server.go:253] Checking apiserver healthz at https://192.168.39.249:8443/healthz ...
	I0819 18:04:37.578518  390826 api_server.go:279] https://192.168.39.249:8443/healthz returned 200:
	ok
	I0819 18:04:37.578596  390826 round_trippers.go:463] GET https://192.168.39.249:8443/version
	I0819 18:04:37.578605  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:37.578613  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:37.578619  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:37.579424  390826 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0819 18:04:37.579486  390826 api_server.go:141] control plane version: v1.31.0
	I0819 18:04:37.579499  390826 api_server.go:131] duration metric: took 5.540572ms to wait for apiserver health ...
	I0819 18:04:37.579507  390826 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 18:04:37.746950  390826 request.go:632] Waited for 167.353562ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0819 18:04:37.747044  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0819 18:04:37.747052  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:37.747064  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:37.747070  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:37.752732  390826 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 18:04:37.759988  390826 system_pods.go:59] 24 kube-system pods found
	I0819 18:04:37.760020  390826 system_pods.go:61] "coredns-6f6b679f8f-8fjpd" [4bedb900-107a-4f7e-aae7-391b18da4a26] Running
	I0819 18:04:37.760026  390826 system_pods.go:61] "coredns-6f6b679f8f-p65cb" [7f30449e-d4ea-4d6f-a63a-08551024bd04] Running
	I0819 18:04:37.760030  390826 system_pods.go:61] "etcd-ha-086149" [0dc3ab02-31e8-4110-accd-85d2e18db232] Running
	I0819 18:04:37.760033  390826 system_pods.go:61] "etcd-ha-086149-m02" [06fcadf6-a4b1-40c8-8ce8-bc1df1fad746] Running
	I0819 18:04:37.760036  390826 system_pods.go:61] "etcd-ha-086149-m03" [244fa866-cb01-4b01-b0a8-68081b70e0e7] Running
	I0819 18:04:37.760039  390826 system_pods.go:61] "kindnet-dgj9c" [142f260c-d74e-411f-ac87-f4398f573b94] Running
	I0819 18:04:37.760042  390826 system_pods.go:61] "kindnet-vb66s" [9322737a-5f8a-4d5a-a7d1-ba076bc8f2d8] Running
	I0819 18:04:37.760045  390826 system_pods.go:61] "kindnet-x87ch" [aa623766-8f51-4570-822c-c2efc1ce338c] Running
	I0819 18:04:37.760048  390826 system_pods.go:61] "kube-apiserver-ha-086149" [98466e03-c8b3-4d70-97b0-ba24afe776a9] Running
	I0819 18:04:37.760052  390826 system_pods.go:61] "kube-apiserver-ha-086149-m02" [afbc7c61-72ec-4571-9a5e-3d8afd08ae6b] Running
	I0819 18:04:37.760055  390826 system_pods.go:61] "kube-apiserver-ha-086149-m03" [1732b952-982b-4744-86a2-0b0bcad77b83] Running
	I0819 18:04:37.760058  390826 system_pods.go:61] "kube-controller-manager-ha-086149" [910295fd-3d2e-4390-b9cd-9e1169813375] Running
	I0819 18:04:37.760062  390826 system_pods.go:61] "kube-controller-manager-ha-086149-m02" [dad58fc3-85d8-444c-bfb8-3a74c5016f32] Running
	I0819 18:04:37.760065  390826 system_pods.go:61] "kube-controller-manager-ha-086149-m03" [3b251cc7-f532-47e4-9dd5-44d7bf8a51b6] Running
	I0819 18:04:37.760068  390826 system_pods.go:61] "kube-proxy-8snb5" [a79f5f3e-c2e0-4d5c-a603-623dab860fa5] Running
	I0819 18:04:37.760072  390826 system_pods.go:61] "kube-proxy-fwkf2" [001a3fe7-633c-44f8-9a8c-7401cec7af54] Running
	I0819 18:04:37.760075  390826 system_pods.go:61] "kube-proxy-vx94r" [8960702f-2f02-4e67-9d4f-02860491e5f2] Running
	I0819 18:04:37.760079  390826 system_pods.go:61] "kube-scheduler-ha-086149" [6d113319-d44e-4a5a-8e0a-f0a890e13e43] Running
	I0819 18:04:37.760083  390826 system_pods.go:61] "kube-scheduler-ha-086149-m02" [5d64ff86-a24d-4836-a7d7-ebb968bb39c8] Running
	I0819 18:04:37.760086  390826 system_pods.go:61] "kube-scheduler-ha-086149-m03" [fcd18473-942f-4ced-ae57-46ac80a0f60f] Running
	I0819 18:04:37.760088  390826 system_pods.go:61] "kube-vip-ha-086149" [25176ed4-e5b0-4e5e-9835-736c856d2643] Running
	I0819 18:04:37.760091  390826 system_pods.go:61] "kube-vip-ha-086149-m02" [8c6b400d-f73e-44b5-a31f-3607329360be] Running
	I0819 18:04:37.760094  390826 system_pods.go:61] "kube-vip-ha-086149-m03" [09c25237-cadd-43b1-95ab-212c2d47a20d] Running
	I0819 18:04:37.760097  390826 system_pods.go:61] "storage-provisioner" [c12159a8-5f84-4d19-aa54-7b56a9669f6c] Running
	I0819 18:04:37.760104  390826 system_pods.go:74] duration metric: took 180.589003ms to wait for pod list to return data ...
	I0819 18:04:37.760114  390826 default_sa.go:34] waiting for default service account to be created ...
	I0819 18:04:37.946508  390826 request.go:632] Waited for 186.293544ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/default/serviceaccounts
	I0819 18:04:37.946580  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/default/serviceaccounts
	I0819 18:04:37.946587  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:37.946598  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:37.946607  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:37.950571  390826 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:04:37.950729  390826 default_sa.go:45] found service account: "default"
	I0819 18:04:37.950746  390826 default_sa.go:55] duration metric: took 190.624862ms for default service account to be created ...
	I0819 18:04:37.950760  390826 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 18:04:38.147151  390826 request.go:632] Waited for 196.282924ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0819 18:04:38.147247  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0819 18:04:38.147259  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:38.147271  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:38.147283  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:38.154184  390826 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0819 18:04:38.163017  390826 system_pods.go:86] 24 kube-system pods found
	I0819 18:04:38.163051  390826 system_pods.go:89] "coredns-6f6b679f8f-8fjpd" [4bedb900-107a-4f7e-aae7-391b18da4a26] Running
	I0819 18:04:38.163057  390826 system_pods.go:89] "coredns-6f6b679f8f-p65cb" [7f30449e-d4ea-4d6f-a63a-08551024bd04] Running
	I0819 18:04:38.163062  390826 system_pods.go:89] "etcd-ha-086149" [0dc3ab02-31e8-4110-accd-85d2e18db232] Running
	I0819 18:04:38.163066  390826 system_pods.go:89] "etcd-ha-086149-m02" [06fcadf6-a4b1-40c8-8ce8-bc1df1fad746] Running
	I0819 18:04:38.163071  390826 system_pods.go:89] "etcd-ha-086149-m03" [244fa866-cb01-4b01-b0a8-68081b70e0e7] Running
	I0819 18:04:38.163075  390826 system_pods.go:89] "kindnet-dgj9c" [142f260c-d74e-411f-ac87-f4398f573b94] Running
	I0819 18:04:38.163079  390826 system_pods.go:89] "kindnet-vb66s" [9322737a-5f8a-4d5a-a7d1-ba076bc8f2d8] Running
	I0819 18:04:38.163083  390826 system_pods.go:89] "kindnet-x87ch" [aa623766-8f51-4570-822c-c2efc1ce338c] Running
	I0819 18:04:38.163089  390826 system_pods.go:89] "kube-apiserver-ha-086149" [98466e03-c8b3-4d70-97b0-ba24afe776a9] Running
	I0819 18:04:38.163094  390826 system_pods.go:89] "kube-apiserver-ha-086149-m02" [afbc7c61-72ec-4571-9a5e-3d8afd08ae6b] Running
	I0819 18:04:38.163100  390826 system_pods.go:89] "kube-apiserver-ha-086149-m03" [1732b952-982b-4744-86a2-0b0bcad77b83] Running
	I0819 18:04:38.163105  390826 system_pods.go:89] "kube-controller-manager-ha-086149" [910295fd-3d2e-4390-b9cd-9e1169813375] Running
	I0819 18:04:38.163110  390826 system_pods.go:89] "kube-controller-manager-ha-086149-m02" [dad58fc3-85d8-444c-bfb8-3a74c5016f32] Running
	I0819 18:04:38.163116  390826 system_pods.go:89] "kube-controller-manager-ha-086149-m03" [3b251cc7-f532-47e4-9dd5-44d7bf8a51b6] Running
	I0819 18:04:38.163126  390826 system_pods.go:89] "kube-proxy-8snb5" [a79f5f3e-c2e0-4d5c-a603-623dab860fa5] Running
	I0819 18:04:38.163130  390826 system_pods.go:89] "kube-proxy-fwkf2" [001a3fe7-633c-44f8-9a8c-7401cec7af54] Running
	I0819 18:04:38.163134  390826 system_pods.go:89] "kube-proxy-vx94r" [8960702f-2f02-4e67-9d4f-02860491e5f2] Running
	I0819 18:04:38.163137  390826 system_pods.go:89] "kube-scheduler-ha-086149" [6d113319-d44e-4a5a-8e0a-f0a890e13e43] Running
	I0819 18:04:38.163141  390826 system_pods.go:89] "kube-scheduler-ha-086149-m02" [5d64ff86-a24d-4836-a7d7-ebb968bb39c8] Running
	I0819 18:04:38.163144  390826 system_pods.go:89] "kube-scheduler-ha-086149-m03" [fcd18473-942f-4ced-ae57-46ac80a0f60f] Running
	I0819 18:04:38.163151  390826 system_pods.go:89] "kube-vip-ha-086149" [25176ed4-e5b0-4e5e-9835-736c856d2643] Running
	I0819 18:04:38.163156  390826 system_pods.go:89] "kube-vip-ha-086149-m02" [8c6b400d-f73e-44b5-a31f-3607329360be] Running
	I0819 18:04:38.163161  390826 system_pods.go:89] "kube-vip-ha-086149-m03" [09c25237-cadd-43b1-95ab-212c2d47a20d] Running
	I0819 18:04:38.163166  390826 system_pods.go:89] "storage-provisioner" [c12159a8-5f84-4d19-aa54-7b56a9669f6c] Running
	I0819 18:04:38.163176  390826 system_pods.go:126] duration metric: took 212.405865ms to wait for k8s-apps to be running ...
	I0819 18:04:38.163189  390826 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 18:04:38.163249  390826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:04:38.179201  390826 system_svc.go:56] duration metric: took 15.999867ms WaitForService to wait for kubelet
	I0819 18:04:38.179238  390826 kubeadm.go:582] duration metric: took 24.609302326s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 18:04:38.179260  390826 node_conditions.go:102] verifying NodePressure condition ...
	I0819 18:04:38.347453  390826 request.go:632] Waited for 168.074628ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes
	I0819 18:04:38.347523  390826 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes
	I0819 18:04:38.347528  390826 round_trippers.go:469] Request Headers:
	I0819 18:04:38.347536  390826 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:04:38.347542  390826 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:04:38.351853  390826 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 18:04:38.353202  390826 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 18:04:38.353234  390826 node_conditions.go:123] node cpu capacity is 2
	I0819 18:04:38.353250  390826 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 18:04:38.353255  390826 node_conditions.go:123] node cpu capacity is 2
	I0819 18:04:38.353261  390826 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 18:04:38.353265  390826 node_conditions.go:123] node cpu capacity is 2
	I0819 18:04:38.353271  390826 node_conditions.go:105] duration metric: took 174.004921ms to run NodePressure ...
	I0819 18:04:38.353284  390826 start.go:241] waiting for startup goroutines ...
	I0819 18:04:38.353313  390826 start.go:255] writing updated cluster config ...
	I0819 18:04:38.353807  390826 ssh_runner.go:195] Run: rm -f paused
	I0819 18:04:38.407159  390826 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 18:04:38.409063  390826 out.go:177] * Done! kubectl is now configured to use "ha-086149" cluster and "default" namespace by default
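
A note on the wait loop above: the "waiting for k8s-apps to be running" step simply lists the kube-system pods and checks that the listed pods report the Running phase, and the "Waited ... due to client-side throttling, not priority and fairness" lines come from client-go's default client-side rate limiter (roughly 5 QPS), not from the API server. A minimal client-go sketch of that check follows; it is illustrative only, assumes the standard kubeconfig location, and is not minikube's actual implementation in system_pods.go:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load the kubeconfig that "kubectl" would use (here assumed to carry the ha-086149 context).
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        // The default rest.Config QPS/Burst (5/10) is what produces the
        // "Waited ... due to client-side throttling" lines in the log above.
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            if p.Status.Phase != corev1.PodRunning {
                fmt.Printf("pod %s is %s, still waiting\n", p.Name, p.Status.Phase)
            }
        }
    }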
	
	
	==> CRI-O <==
	Aug 19 18:09:22 ha-086149 crio[686]: time="2024-08-19 18:09:22.825717819Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090962825683533,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ab15c468-620a-46ae-8559-d9b647b3a211 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:09:22 ha-086149 crio[686]: time="2024-08-19 18:09:22.826685632Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f3b78c8d-84a6-41d7-8e85-401c426edfd8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:09:22 ha-086149 crio[686]: time="2024-08-19 18:09:22.826755356Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f3b78c8d-84a6-41d7-8e85-401c426edfd8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:09:22 ha-086149 crio[686]: time="2024-08-19 18:09:22.827356479Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ef0b28473496e4ab21e3f86bc64eb662e5c22e59e4a56f80f7bdad009460c73d,PodSandboxId:0f784aeccda9e0bff51a30b97a310813be1e271fdaae54f30006645ed5ae31b1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724090682352148658,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fd2dw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f5e2f831-487f-4edb-b6c1-b391906a6d5b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4208b72f7684106eeabb79597e9a16912d86fddf552d810668e52ee86e4cacf,PodSandboxId:5b83e59b0dd3110115fa51715b6d8f6d29e006636ab031766095bcb6200ff245,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724090536333628873,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-p65cb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f30449e-d4ea-4d6f-a63a-08551024bd04,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86aec3b9357709107938f07e57e09bef332ea9baea288a18bb10389d5108084b,PodSandboxId:86507aaa25957ebc7ff023a8f042b236a729503785cd3163a2a44e79daf28a80,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724090536330177075,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8fjpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
4bedb900-107a-4f7e-aae7-391b18da4a26,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de3b095c19e3f3ff1bf0fb76700cc09513b591cda7c219c31dee7842602944b4,PodSandboxId:537bb09282b606b44a00c1c617ce2ce8f82082247274da7d8632728cdecd594d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1724090536201546220,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c12159a8-5f84-4d19-aa54-7b56a9669f6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66fd9c9b32e5e0294c89ebc2ee3c443fda85c40c3ad5b05d42357b4968e8d305,PodSandboxId:3c6e833618ab7965e295c1f82164c28a64e619a82a0a8a90542c16f004e32954,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724090524117954693,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vb66s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9322737a-5f8a-4d5a-a7d1-ba076bc8f2d8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb8cccc1568bbb207d2c7c285f3897a7a425cba60f4dfcf3e8daa8082fc38ef0,PodSandboxId:dc27fd8c8c4a6cec062f5420b6ed3489f5b075fb1eb4e02074e5505c76d238e5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172409052
0283716084,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwkf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 001a3fe7-633c-44f8-9a8c-7401cec7af54,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cbf110391a2708b365a6d117cd1facf1a5820add049c9338b5eaa12f02254e4,PodSandboxId:14b36b352300967c929247cec1ddcb31ac17615e8281918ab214b49a770c21a1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172409051219
3183830,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8cce7fff82cf979e3ad7d68f6f416e8,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5e746178ed6a3645979a5bd617a6d9f408bb3e6af232f31409c7e79a0c4f6b2,PodSandboxId:9b826611f7fb43dc5f6fb5c26f55533ebe177f1d584f77bd7a2a32978c1478e5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724090509187324635,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab6b0fe91f166a5c05b58933ead885f6,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426a12b48132d73e1b93e6a7fb5b3420868e384eb280274c6ee81ae6f6bcea12,PodSandboxId:4cd25796bc67e8c9b4a666188feb3addfa806bf372a40c47a0ed8a3e3576c9a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724090509150960473,Labels:map[string]string{io.k
ubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf0b1666b512c678d4309e6a2bd2773,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f729929f59edc9bd3c0ec7e99f4b984f94d6b6ec06edf83cf6dc3efba7a1fe5,PodSandboxId:d0637e1ac222cb0d4d6abc71c2af0485d7935e0b55308bdb6a1af649031fef39,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724090509066737802,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,
io.kubernetes.pod.name: kube-apiserver-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9269a2cf31966e0bbf30b6554fa311ee,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0e66231bf791048a9932068b5f28d8479613545885bea8e42cf9c79913ffccd,PodSandboxId:1f46f8e2ba79c3a9b9a7f9729c154fc9c495e280d0a9fac6dc4fdf837a2e0b73,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724090509024741947,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 465e756b61a05a6f1c4dfeba2adbdeeb,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f3b78c8d-84a6-41d7-8e85-401c426edfd8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:09:22 ha-086149 crio[686]: time="2024-08-19 18:09:22.874845693Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cbaa9179-89ac-4cd9-a993-7202606e0168 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:09:22 ha-086149 crio[686]: time="2024-08-19 18:09:22.874946004Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cbaa9179-89ac-4cd9-a993-7202606e0168 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:09:22 ha-086149 crio[686]: time="2024-08-19 18:09:22.878296299Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c695df94-f63b-483b-8189-e29bb6000cea name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:09:22 ha-086149 crio[686]: time="2024-08-19 18:09:22.878967093Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090962878933323,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c695df94-f63b-483b-8189-e29bb6000cea name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:09:22 ha-086149 crio[686]: time="2024-08-19 18:09:22.879664583Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=72e0538b-ceee-4e9c-8bd4-ae671a572403 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:09:22 ha-086149 crio[686]: time="2024-08-19 18:09:22.879805599Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=72e0538b-ceee-4e9c-8bd4-ae671a572403 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:09:22 ha-086149 crio[686]: time="2024-08-19 18:09:22.880255605Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ef0b28473496e4ab21e3f86bc64eb662e5c22e59e4a56f80f7bdad009460c73d,PodSandboxId:0f784aeccda9e0bff51a30b97a310813be1e271fdaae54f30006645ed5ae31b1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724090682352148658,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fd2dw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f5e2f831-487f-4edb-b6c1-b391906a6d5b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4208b72f7684106eeabb79597e9a16912d86fddf552d810668e52ee86e4cacf,PodSandboxId:5b83e59b0dd3110115fa51715b6d8f6d29e006636ab031766095bcb6200ff245,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724090536333628873,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-p65cb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f30449e-d4ea-4d6f-a63a-08551024bd04,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86aec3b9357709107938f07e57e09bef332ea9baea288a18bb10389d5108084b,PodSandboxId:86507aaa25957ebc7ff023a8f042b236a729503785cd3163a2a44e79daf28a80,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724090536330177075,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8fjpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
4bedb900-107a-4f7e-aae7-391b18da4a26,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de3b095c19e3f3ff1bf0fb76700cc09513b591cda7c219c31dee7842602944b4,PodSandboxId:537bb09282b606b44a00c1c617ce2ce8f82082247274da7d8632728cdecd594d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1724090536201546220,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c12159a8-5f84-4d19-aa54-7b56a9669f6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66fd9c9b32e5e0294c89ebc2ee3c443fda85c40c3ad5b05d42357b4968e8d305,PodSandboxId:3c6e833618ab7965e295c1f82164c28a64e619a82a0a8a90542c16f004e32954,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724090524117954693,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vb66s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9322737a-5f8a-4d5a-a7d1-ba076bc8f2d8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb8cccc1568bbb207d2c7c285f3897a7a425cba60f4dfcf3e8daa8082fc38ef0,PodSandboxId:dc27fd8c8c4a6cec062f5420b6ed3489f5b075fb1eb4e02074e5505c76d238e5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172409052
0283716084,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwkf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 001a3fe7-633c-44f8-9a8c-7401cec7af54,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cbf110391a2708b365a6d117cd1facf1a5820add049c9338b5eaa12f02254e4,PodSandboxId:14b36b352300967c929247cec1ddcb31ac17615e8281918ab214b49a770c21a1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172409051219
3183830,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8cce7fff82cf979e3ad7d68f6f416e8,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5e746178ed6a3645979a5bd617a6d9f408bb3e6af232f31409c7e79a0c4f6b2,PodSandboxId:9b826611f7fb43dc5f6fb5c26f55533ebe177f1d584f77bd7a2a32978c1478e5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724090509187324635,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab6b0fe91f166a5c05b58933ead885f6,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426a12b48132d73e1b93e6a7fb5b3420868e384eb280274c6ee81ae6f6bcea12,PodSandboxId:4cd25796bc67e8c9b4a666188feb3addfa806bf372a40c47a0ed8a3e3576c9a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724090509150960473,Labels:map[string]string{io.k
ubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf0b1666b512c678d4309e6a2bd2773,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f729929f59edc9bd3c0ec7e99f4b984f94d6b6ec06edf83cf6dc3efba7a1fe5,PodSandboxId:d0637e1ac222cb0d4d6abc71c2af0485d7935e0b55308bdb6a1af649031fef39,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724090509066737802,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,
io.kubernetes.pod.name: kube-apiserver-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9269a2cf31966e0bbf30b6554fa311ee,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0e66231bf791048a9932068b5f28d8479613545885bea8e42cf9c79913ffccd,PodSandboxId:1f46f8e2ba79c3a9b9a7f9729c154fc9c495e280d0a9fac6dc4fdf837a2e0b73,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724090509024741947,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 465e756b61a05a6f1c4dfeba2adbdeeb,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=72e0538b-ceee-4e9c-8bd4-ae671a572403 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:09:22 ha-086149 crio[686]: time="2024-08-19 18:09:22.926164256Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a65a6caa-61ed-4ecb-8f59-e872759627a0 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:09:22 ha-086149 crio[686]: time="2024-08-19 18:09:22.926293846Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a65a6caa-61ed-4ecb-8f59-e872759627a0 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:09:22 ha-086149 crio[686]: time="2024-08-19 18:09:22.932202526Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ef4dbcad-ae9f-4fef-8cbf-cbdb1e3d3619 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:09:22 ha-086149 crio[686]: time="2024-08-19 18:09:22.932698026Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090962932673489,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ef4dbcad-ae9f-4fef-8cbf-cbdb1e3d3619 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:09:22 ha-086149 crio[686]: time="2024-08-19 18:09:22.933883021Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4d9b6d70-099b-42a1-9ec8-188576e372d8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:09:22 ha-086149 crio[686]: time="2024-08-19 18:09:22.933938527Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4d9b6d70-099b-42a1-9ec8-188576e372d8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:09:22 ha-086149 crio[686]: time="2024-08-19 18:09:22.934226635Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ef0b28473496e4ab21e3f86bc64eb662e5c22e59e4a56f80f7bdad009460c73d,PodSandboxId:0f784aeccda9e0bff51a30b97a310813be1e271fdaae54f30006645ed5ae31b1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724090682352148658,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fd2dw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f5e2f831-487f-4edb-b6c1-b391906a6d5b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4208b72f7684106eeabb79597e9a16912d86fddf552d810668e52ee86e4cacf,PodSandboxId:5b83e59b0dd3110115fa51715b6d8f6d29e006636ab031766095bcb6200ff245,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724090536333628873,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-p65cb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f30449e-d4ea-4d6f-a63a-08551024bd04,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86aec3b9357709107938f07e57e09bef332ea9baea288a18bb10389d5108084b,PodSandboxId:86507aaa25957ebc7ff023a8f042b236a729503785cd3163a2a44e79daf28a80,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724090536330177075,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8fjpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
4bedb900-107a-4f7e-aae7-391b18da4a26,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de3b095c19e3f3ff1bf0fb76700cc09513b591cda7c219c31dee7842602944b4,PodSandboxId:537bb09282b606b44a00c1c617ce2ce8f82082247274da7d8632728cdecd594d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1724090536201546220,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c12159a8-5f84-4d19-aa54-7b56a9669f6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66fd9c9b32e5e0294c89ebc2ee3c443fda85c40c3ad5b05d42357b4968e8d305,PodSandboxId:3c6e833618ab7965e295c1f82164c28a64e619a82a0a8a90542c16f004e32954,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724090524117954693,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vb66s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9322737a-5f8a-4d5a-a7d1-ba076bc8f2d8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb8cccc1568bbb207d2c7c285f3897a7a425cba60f4dfcf3e8daa8082fc38ef0,PodSandboxId:dc27fd8c8c4a6cec062f5420b6ed3489f5b075fb1eb4e02074e5505c76d238e5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172409052
0283716084,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwkf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 001a3fe7-633c-44f8-9a8c-7401cec7af54,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cbf110391a2708b365a6d117cd1facf1a5820add049c9338b5eaa12f02254e4,PodSandboxId:14b36b352300967c929247cec1ddcb31ac17615e8281918ab214b49a770c21a1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172409051219
3183830,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8cce7fff82cf979e3ad7d68f6f416e8,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5e746178ed6a3645979a5bd617a6d9f408bb3e6af232f31409c7e79a0c4f6b2,PodSandboxId:9b826611f7fb43dc5f6fb5c26f55533ebe177f1d584f77bd7a2a32978c1478e5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724090509187324635,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab6b0fe91f166a5c05b58933ead885f6,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426a12b48132d73e1b93e6a7fb5b3420868e384eb280274c6ee81ae6f6bcea12,PodSandboxId:4cd25796bc67e8c9b4a666188feb3addfa806bf372a40c47a0ed8a3e3576c9a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724090509150960473,Labels:map[string]string{io.k
ubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf0b1666b512c678d4309e6a2bd2773,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f729929f59edc9bd3c0ec7e99f4b984f94d6b6ec06edf83cf6dc3efba7a1fe5,PodSandboxId:d0637e1ac222cb0d4d6abc71c2af0485d7935e0b55308bdb6a1af649031fef39,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724090509066737802,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,
io.kubernetes.pod.name: kube-apiserver-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9269a2cf31966e0bbf30b6554fa311ee,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0e66231bf791048a9932068b5f28d8479613545885bea8e42cf9c79913ffccd,PodSandboxId:1f46f8e2ba79c3a9b9a7f9729c154fc9c495e280d0a9fac6dc4fdf837a2e0b73,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724090509024741947,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 465e756b61a05a6f1c4dfeba2adbdeeb,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4d9b6d70-099b-42a1-9ec8-188576e372d8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:09:22 ha-086149 crio[686]: time="2024-08-19 18:09:22.973986451Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f1165084-1bea-452a-bd24-79ea2f2eb6ff name=/runtime.v1.RuntimeService/Version
	Aug 19 18:09:22 ha-086149 crio[686]: time="2024-08-19 18:09:22.974300167Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f1165084-1bea-452a-bd24-79ea2f2eb6ff name=/runtime.v1.RuntimeService/Version
	Aug 19 18:09:22 ha-086149 crio[686]: time="2024-08-19 18:09:22.975724302Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f73aabb8-a21f-4e43-91de-1c680b3804bd name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:09:22 ha-086149 crio[686]: time="2024-08-19 18:09:22.976240357Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090962976212028,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f73aabb8-a21f-4e43-91de-1c680b3804bd name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:09:22 ha-086149 crio[686]: time="2024-08-19 18:09:22.976841862Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=84c300f5-940e-45af-b678-f54e540f28c5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:09:22 ha-086149 crio[686]: time="2024-08-19 18:09:22.976904834Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=84c300f5-940e-45af-b678-f54e540f28c5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:09:22 ha-086149 crio[686]: time="2024-08-19 18:09:22.977202718Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ef0b28473496e4ab21e3f86bc64eb662e5c22e59e4a56f80f7bdad009460c73d,PodSandboxId:0f784aeccda9e0bff51a30b97a310813be1e271fdaae54f30006645ed5ae31b1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724090682352148658,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fd2dw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f5e2f831-487f-4edb-b6c1-b391906a6d5b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4208b72f7684106eeabb79597e9a16912d86fddf552d810668e52ee86e4cacf,PodSandboxId:5b83e59b0dd3110115fa51715b6d8f6d29e006636ab031766095bcb6200ff245,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724090536333628873,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-p65cb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f30449e-d4ea-4d6f-a63a-08551024bd04,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86aec3b9357709107938f07e57e09bef332ea9baea288a18bb10389d5108084b,PodSandboxId:86507aaa25957ebc7ff023a8f042b236a729503785cd3163a2a44e79daf28a80,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724090536330177075,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8fjpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
4bedb900-107a-4f7e-aae7-391b18da4a26,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de3b095c19e3f3ff1bf0fb76700cc09513b591cda7c219c31dee7842602944b4,PodSandboxId:537bb09282b606b44a00c1c617ce2ce8f82082247274da7d8632728cdecd594d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1724090536201546220,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c12159a8-5f84-4d19-aa54-7b56a9669f6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66fd9c9b32e5e0294c89ebc2ee3c443fda85c40c3ad5b05d42357b4968e8d305,PodSandboxId:3c6e833618ab7965e295c1f82164c28a64e619a82a0a8a90542c16f004e32954,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724090524117954693,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vb66s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9322737a-5f8a-4d5a-a7d1-ba076bc8f2d8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb8cccc1568bbb207d2c7c285f3897a7a425cba60f4dfcf3e8daa8082fc38ef0,PodSandboxId:dc27fd8c8c4a6cec062f5420b6ed3489f5b075fb1eb4e02074e5505c76d238e5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172409052
0283716084,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwkf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 001a3fe7-633c-44f8-9a8c-7401cec7af54,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cbf110391a2708b365a6d117cd1facf1a5820add049c9338b5eaa12f02254e4,PodSandboxId:14b36b352300967c929247cec1ddcb31ac17615e8281918ab214b49a770c21a1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172409051219
3183830,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8cce7fff82cf979e3ad7d68f6f416e8,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5e746178ed6a3645979a5bd617a6d9f408bb3e6af232f31409c7e79a0c4f6b2,PodSandboxId:9b826611f7fb43dc5f6fb5c26f55533ebe177f1d584f77bd7a2a32978c1478e5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724090509187324635,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab6b0fe91f166a5c05b58933ead885f6,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426a12b48132d73e1b93e6a7fb5b3420868e384eb280274c6ee81ae6f6bcea12,PodSandboxId:4cd25796bc67e8c9b4a666188feb3addfa806bf372a40c47a0ed8a3e3576c9a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724090509150960473,Labels:map[string]string{io.k
ubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf0b1666b512c678d4309e6a2bd2773,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f729929f59edc9bd3c0ec7e99f4b984f94d6b6ec06edf83cf6dc3efba7a1fe5,PodSandboxId:d0637e1ac222cb0d4d6abc71c2af0485d7935e0b55308bdb6a1af649031fef39,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724090509066737802,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,
io.kubernetes.pod.name: kube-apiserver-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9269a2cf31966e0bbf30b6554fa311ee,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0e66231bf791048a9932068b5f28d8479613545885bea8e42cf9c79913ffccd,PodSandboxId:1f46f8e2ba79c3a9b9a7f9729c154fc9c495e280d0a9fac6dc4fdf837a2e0b73,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724090509024741947,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 465e756b61a05a6f1c4dfeba2adbdeeb,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=84c300f5-940e-45af-b678-f54e540f28c5 name=/runtime.v1.RuntimeService/ListContainers
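
The CRI-O entries above are debug-level records of the kubelet's periodic Version, ImageFsInfo and ListContainers calls over the CRI socket; every container in the responses is CONTAINER_RUNNING, so nothing here points at a failure on this node. The same ListContainers call can be reproduced directly against the socket. A rough sketch using the k8s.io/cri-api client, assuming CRI-O's default socket path /var/run/crio/crio.sock:

    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Dial the CRI socket; the unix:// path assumes CRI-O's default location.
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        client := runtimeapi.NewRuntimeServiceClient(conn)
        // An empty filter returns the full container list, matching the
        // "No filters were applied" debug lines logged by CRI-O above.
        resp, err := client.ListContainers(context.TODO(), &runtimeapi.ListContainersRequest{})
        if err != nil {
            panic(err)
        }
        for _, c := range resp.Containers {
            fmt.Println(c.Metadata.Name, c.State)
        }
    }

(Running crictl ps -a on the node gives the command-line equivalent and is likely what backs the "==> container status <==" table below.)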
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ef0b28473496e       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   0f784aeccda9e       busybox-7dff88458-fd2dw
	d4208b72f7684       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   5b83e59b0dd31       coredns-6f6b679f8f-p65cb
	86aec3b935770       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   86507aaa25957       coredns-6f6b679f8f-8fjpd
	de3b095c19e3f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Running             storage-provisioner       0                   537bb09282b60       storage-provisioner
	66fd9c9b32e5e       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    7 minutes ago       Running             kindnet-cni               0                   3c6e833618ab7       kindnet-vb66s
	eb8cccc1568bb       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      7 minutes ago       Running             kube-proxy                0                   dc27fd8c8c4a6       kube-proxy-fwkf2
	0cbf110391a27       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   14b36b3523009       kube-vip-ha-086149
	f5e746178ed6a       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      7 minutes ago       Running             kube-controller-manager   0                   9b826611f7fb4       kube-controller-manager-ha-086149
	426a12b48132d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      7 minutes ago       Running             etcd                      0                   4cd25796bc67e       etcd-ha-086149
	2f729929f59ed       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      7 minutes ago       Running             kube-apiserver            0                   d0637e1ac222c       kube-apiserver-ha-086149
	d0e66231bf791       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      7 minutes ago       Running             kube-scheduler            0                   1f46f8e2ba79c       kube-scheduler-ha-086149
	
	
	==> coredns [86aec3b9357709107938f07e57e09bef332ea9baea288a18bb10389d5108084b] <==
	[INFO] 10.244.2.2:36864 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000187071s
	[INFO] 10.244.2.2:48106 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000150405s
	[INFO] 10.244.2.2:53329 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000136079s
	[INFO] 10.244.0.4:48191 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014988s
	[INFO] 10.244.0.4:47708 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000096718s
	[INFO] 10.244.0.4:42128 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000149115s
	[INFO] 10.244.0.4:49211 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000058729s
	[INFO] 10.244.0.4:41169 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000147844s
	[INFO] 10.244.1.2:55021 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105902s
	[INFO] 10.244.1.2:39523 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000197158s
	[INFO] 10.244.1.2:39402 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000068589s
	[INFO] 10.244.1.2:46940 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086232s
	[INFO] 10.244.2.2:59049 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177439s
	[INFO] 10.244.2.2:48370 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000103075s
	[INFO] 10.244.2.2:36161 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110997s
	[INFO] 10.244.2.2:44839 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079394s
	[INFO] 10.244.1.2:53636 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153191s
	[INFO] 10.244.1.2:46986 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00014037s
	[INFO] 10.244.1.2:39517 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000205565s
	[INFO] 10.244.2.2:34630 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000217644s
	[INFO] 10.244.2.2:48208 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000175515s
	[INFO] 10.244.2.2:42420 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000305788s
	[INFO] 10.244.0.4:49746 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000082325s
	[INFO] 10.244.0.4:48461 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000222115s
	[INFO] 10.244.1.2:58589 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000263104s
	
	
	==> coredns [d4208b72f7684106eeabb79597e9a16912d86fddf552d810668e52ee86e4cacf] <==
	[INFO] 10.244.1.2:46929 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000103504s
	[INFO] 10.244.1.2:59220 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000514964s
	[INFO] 10.244.1.2:46564 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001814543s
	[INFO] 10.244.2.2:59912 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139193s
	[INFO] 10.244.2.2:51495 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004077714s
	[INFO] 10.244.2.2:60503 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002804151s
	[INFO] 10.244.2.2:49027 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000124508s
	[INFO] 10.244.0.4:59229 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001769172s
	[INFO] 10.244.0.4:34487 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001315875s
	[INFO] 10.244.0.4:34657 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124575s
	[INFO] 10.244.1.2:49809 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001830693s
	[INFO] 10.244.1.2:60513 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001456039s
	[INFO] 10.244.1.2:58099 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000201903s
	[INFO] 10.244.1.2:36863 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108279s
	[INFO] 10.244.0.4:48767 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119232s
	[INFO] 10.244.0.4:35383 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00018722s
	[INFO] 10.244.0.4:58993 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000063721s
	[INFO] 10.244.0.4:55887 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000059646s
	[INFO] 10.244.1.2:45536 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124964s
	[INFO] 10.244.2.2:45976 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000160498s
	[INFO] 10.244.0.4:38315 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146686s
	[INFO] 10.244.0.4:36553 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000130807s
	[INFO] 10.244.1.2:46657 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00022076s
	[INFO] 10.244.1.2:44650 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000123411s
	[INFO] 10.244.1.2:46585 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000089999s
	
	
	==> describe nodes <==
	Name:               ha-086149
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-086149
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411
	                    minikube.k8s.io/name=ha-086149
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T18_01_56_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 18:01:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-086149
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 18:09:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 18:04:58 +0000   Mon, 19 Aug 2024 18:01:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 18:04:58 +0000   Mon, 19 Aug 2024 18:01:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 18:04:58 +0000   Mon, 19 Aug 2024 18:01:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 18:04:58 +0000   Mon, 19 Aug 2024 18:02:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.249
	  Hostname:    ha-086149
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f2adf13588c04842be48ba7ffa571365
	  System UUID:                f2adf135-88c0-4842-be48-ba7ffa571365
	  Boot ID:                    affd916c-f074-4dc0-bd43-4c71cd2f0b12
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-fd2dw              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 coredns-6f6b679f8f-8fjpd             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m24s
	  kube-system                 coredns-6f6b679f8f-p65cb             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m24s
	  kube-system                 etcd-ha-086149                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m28s
	  kube-system                 kindnet-vb66s                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m24s
	  kube-system                 kube-apiserver-ha-086149             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m29s
	  kube-system                 kube-controller-manager-ha-086149    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m28s
	  kube-system                 kube-proxy-fwkf2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m24s
	  kube-system                 kube-scheduler-ha-086149             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m28s
	  kube-system                 kube-vip-ha-086149                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m30s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m22s  kube-proxy       
	  Normal  Starting                 7m28s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m28s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m28s  kubelet          Node ha-086149 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m28s  kubelet          Node ha-086149 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m28s  kubelet          Node ha-086149 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m24s  node-controller  Node ha-086149 event: Registered Node ha-086149 in Controller
	  Normal  NodeReady                7m8s   kubelet          Node ha-086149 status is now: NodeReady
	  Normal  RegisteredNode           6m21s  node-controller  Node ha-086149 event: Registered Node ha-086149 in Controller
	  Normal  RegisteredNode           5m5s   node-controller  Node ha-086149 event: Registered Node ha-086149 in Controller
	
	
	Name:               ha-086149-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-086149-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411
	                    minikube.k8s.io/name=ha-086149
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T18_02_56_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 18:02:53 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-086149-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 18:05:47 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 19 Aug 2024 18:04:56 +0000   Mon, 19 Aug 2024 18:06:28 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 19 Aug 2024 18:04:56 +0000   Mon, 19 Aug 2024 18:06:28 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 19 Aug 2024 18:04:56 +0000   Mon, 19 Aug 2024 18:06:28 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 19 Aug 2024 18:04:56 +0000   Mon, 19 Aug 2024 18:06:28 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.167
	  Hostname:    ha-086149-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 db74a62099694214b3e6abfad40c4b33
	  System UUID:                db74a620-9969-4214-b3e6-abfad40c4b33
	  Boot ID:                    717bec9d-0b44-49c0-8d52-7d87d4c1f6a1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-vgcdh                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 etcd-ha-086149-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m28s
	  kube-system                 kindnet-dgj9c                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m30s
	  kube-system                 kube-apiserver-ha-086149-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 kube-controller-manager-ha-086149-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 kube-proxy-vx94r                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 kube-scheduler-ha-086149-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-vip-ha-086149-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m24s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m30s (x8 over 6m30s)  kubelet          Node ha-086149-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m30s (x8 over 6m30s)  kubelet          Node ha-086149-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m30s (x7 over 6m30s)  kubelet          Node ha-086149-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m29s                  node-controller  Node ha-086149-m02 event: Registered Node ha-086149-m02 in Controller
	  Normal  RegisteredNode           6m21s                  node-controller  Node ha-086149-m02 event: Registered Node ha-086149-m02 in Controller
	  Normal  RegisteredNode           5m5s                   node-controller  Node ha-086149-m02 event: Registered Node ha-086149-m02 in Controller
	  Normal  NodeNotReady             2m55s                  node-controller  Node ha-086149-m02 status is now: NodeNotReady
	
	
	Name:               ha-086149-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-086149-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411
	                    minikube.k8s.io/name=ha-086149
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T18_04_13_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 18:04:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-086149-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 18:09:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 18:05:11 +0000   Mon, 19 Aug 2024 18:04:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 18:05:11 +0000   Mon, 19 Aug 2024 18:04:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 18:05:11 +0000   Mon, 19 Aug 2024 18:04:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 18:05:11 +0000   Mon, 19 Aug 2024 18:04:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.121
	  Hostname:    ha-086149-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8eb7138e4a844547bcac8ac690757488
	  System UUID:                8eb7138e-4a84-4547-bcac-8ac690757488
	  Boot ID:                    3282c69f-1237-46cf-afad-b3a07c2459cf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7t5wq                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 etcd-ha-086149-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m12s
	  kube-system                 kindnet-x87ch                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m13s
	  kube-system                 kube-apiserver-ha-086149-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 kube-controller-manager-ha-086149-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 kube-proxy-8snb5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-scheduler-ha-086149-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-vip-ha-086149-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m8s                   kube-proxy       
	  Normal  NodeAllocatableEnforced  5m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m13s (x8 over 5m14s)  kubelet          Node ha-086149-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m13s (x8 over 5m14s)  kubelet          Node ha-086149-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m13s (x7 over 5m14s)  kubelet          Node ha-086149-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m11s                  node-controller  Node ha-086149-m03 event: Registered Node ha-086149-m03 in Controller
	  Normal  RegisteredNode           5m9s                   node-controller  Node ha-086149-m03 event: Registered Node ha-086149-m03 in Controller
	  Normal  RegisteredNode           5m5s                   node-controller  Node ha-086149-m03 event: Registered Node ha-086149-m03 in Controller
	
	
	Name:               ha-086149-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-086149-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411
	                    minikube.k8s.io/name=ha-086149
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T18_05_16_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 18:05:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-086149-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 18:09:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 18:05:46 +0000   Mon, 19 Aug 2024 18:05:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 18:05:46 +0000   Mon, 19 Aug 2024 18:05:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 18:05:46 +0000   Mon, 19 Aug 2024 18:05:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 18:05:46 +0000   Mon, 19 Aug 2024 18:05:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.173
	  Hostname:    ha-086149-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e1e9d0d713474980a7c895cb88752846
	  System UUID:                e1e9d0d7-1347-4980-a7c8-95cb88752846
	  Boot ID:                    5dee1daa-7e00-4357-ab41-d48951f73e60
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-gvr65       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m8s
	  kube-system                 kube-proxy-9t8vw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m2s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  4m8s (x2 over 4m8s)  kubelet          Node ha-086149-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m8s (x2 over 4m8s)  kubelet          Node ha-086149-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m8s (x2 over 4m8s)  kubelet          Node ha-086149-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m6s                 node-controller  Node ha-086149-m04 event: Registered Node ha-086149-m04 in Controller
	  Normal  RegisteredNode           4m5s                 node-controller  Node ha-086149-m04 event: Registered Node ha-086149-m04 in Controller
	  Normal  RegisteredNode           4m4s                 node-controller  Node ha-086149-m04 event: Registered Node ha-086149-m04 in Controller
	  Normal  NodeReady                3m47s                kubelet          Node ha-086149-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug19 18:01] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050961] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040140] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.785825] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.527631] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.633566] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.178691] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.057166] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065842] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.172283] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.148890] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.254962] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +4.015563] systemd-fstab-generator[771]: Ignoring "noauto" option for root device
	[  +4.054508] systemd-fstab-generator[906]: Ignoring "noauto" option for root device
	[  +0.063854] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.951467] systemd-fstab-generator[1326]: Ignoring "noauto" option for root device
	[  +0.096986] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.046961] kauditd_printk_skb: 21 callbacks suppressed
	[Aug19 18:02] kauditd_printk_skb: 37 callbacks suppressed
	[ +54.874778] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [426a12b48132d73e1b93e6a7fb5b3420868e384eb280274c6ee81ae6f6bcea12] <==
	{"level":"warn","ts":"2024-08-19T18:09:23.031242Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T18:09:23.046364Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T18:09:23.130989Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T18:09:23.146803Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T18:09:23.167251Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T18:09:23.230262Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T18:09:23.237264Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T18:09:23.241409Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T18:09:23.273471Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T18:09:23.277017Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T18:09:23.284688Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T18:09:23.291356Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T18:09:23.297715Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T18:09:23.300732Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T18:09:23.305178Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T18:09:23.312819Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T18:09:23.318750Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T18:09:23.325933Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T18:09:23.329357Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T18:09:23.332667Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T18:09:23.335740Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T18:09:23.341622Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T18:09:23.346795Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T18:09:23.347731Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T18:09:23.353699Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:09:23 up 8 min,  0 users,  load average: 0.20, 0.20, 0.10
	Linux ha-086149 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [66fd9c9b32e5e0294c89ebc2ee3c443fda85c40c3ad5b05d42357b4968e8d305] <==
	I0819 18:08:45.262911       1 main.go:322] Node ha-086149-m04 has CIDR [10.244.3.0/24] 
	I0819 18:08:55.262628       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0819 18:08:55.262739       1 main.go:299] handling current node
	I0819 18:08:55.262772       1 main.go:295] Handling node with IPs: map[192.168.39.167:{}]
	I0819 18:08:55.262792       1 main.go:322] Node ha-086149-m02 has CIDR [10.244.1.0/24] 
	I0819 18:08:55.262951       1 main.go:295] Handling node with IPs: map[192.168.39.121:{}]
	I0819 18:08:55.263351       1 main.go:322] Node ha-086149-m03 has CIDR [10.244.2.0/24] 
	I0819 18:08:55.263598       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0819 18:08:55.263651       1 main.go:322] Node ha-086149-m04 has CIDR [10.244.3.0/24] 
	I0819 18:09:05.253273       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0819 18:09:05.253421       1 main.go:299] handling current node
	I0819 18:09:05.253520       1 main.go:295] Handling node with IPs: map[192.168.39.167:{}]
	I0819 18:09:05.253566       1 main.go:322] Node ha-086149-m02 has CIDR [10.244.1.0/24] 
	I0819 18:09:05.253789       1 main.go:295] Handling node with IPs: map[192.168.39.121:{}]
	I0819 18:09:05.253844       1 main.go:322] Node ha-086149-m03 has CIDR [10.244.2.0/24] 
	I0819 18:09:05.253963       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0819 18:09:05.254008       1 main.go:322] Node ha-086149-m04 has CIDR [10.244.3.0/24] 
	I0819 18:09:15.262245       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0819 18:09:15.262484       1 main.go:299] handling current node
	I0819 18:09:15.262564       1 main.go:295] Handling node with IPs: map[192.168.39.167:{}]
	I0819 18:09:15.262603       1 main.go:322] Node ha-086149-m02 has CIDR [10.244.1.0/24] 
	I0819 18:09:15.262835       1 main.go:295] Handling node with IPs: map[192.168.39.121:{}]
	I0819 18:09:15.262874       1 main.go:322] Node ha-086149-m03 has CIDR [10.244.2.0/24] 
	I0819 18:09:15.263027       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0819 18:09:15.263057       1 main.go:322] Node ha-086149-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [2f729929f59edc9bd3c0ec7e99f4b984f94d6b6ec06edf83cf6dc3efba7a1fe5] <==
	I0819 18:01:54.114421       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0819 18:01:55.357825       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0819 18:01:55.375126       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0819 18:01:55.391682       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0819 18:01:59.566453       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0819 18:01:59.635965       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0819 18:02:54.657315       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 7.711µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0819 18:02:54.657529       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="8.713µs" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E0819 18:02:54.657674       1 wrap.go:53] "Timeout or abort while handling" logger="UnhandledError" method="POST" URI="/api/v1/namespaces/kube-system/events" auditID="ec643271-d886-4350-b64a-766e1fc4aac6"
	E0819 18:04:43.672292       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33690: use of closed network connection
	E0819 18:04:43.862868       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33702: use of closed network connection
	E0819 18:04:44.053642       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33720: use of closed network connection
	E0819 18:04:44.256373       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33748: use of closed network connection
	E0819 18:04:44.435625       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33778: use of closed network connection
	E0819 18:04:44.622757       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33798: use of closed network connection
	E0819 18:04:44.807275       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33818: use of closed network connection
	E0819 18:04:44.990252       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33842: use of closed network connection
	E0819 18:04:45.184405       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33870: use of closed network connection
	E0819 18:04:45.500574       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33894: use of closed network connection
	E0819 18:04:45.689892       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33914: use of closed network connection
	E0819 18:04:45.870462       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33918: use of closed network connection
	E0819 18:04:46.059994       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33928: use of closed network connection
	E0819 18:04:46.259491       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33946: use of closed network connection
	E0819 18:04:46.442269       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33962: use of closed network connection
	W0819 18:06:03.904029       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.121 192.168.39.249]
	
	
	==> kube-controller-manager [f5e746178ed6a3645979a5bd617a6d9f408bb3e6af232f31409c7e79a0c4f6b2] <==
	I0819 18:05:15.883996       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-086149-m04" podCIDRs=["10.244.3.0/24"]
	I0819 18:05:15.884215       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m04"
	I0819 18:05:15.884406       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m04"
	I0819 18:05:15.895063       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m04"
	I0819 18:05:16.210617       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m04"
	I0819 18:05:16.612403       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m04"
	I0819 18:05:17.233376       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m04"
	I0819 18:05:18.647419       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m04"
	I0819 18:05:18.700575       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m04"
	I0819 18:05:19.116237       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-086149-m04"
	I0819 18:05:19.117622       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m04"
	I0819 18:05:19.198317       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m04"
	I0819 18:05:26.055961       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m04"
	I0819 18:05:36.586361       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-086149-m04"
	I0819 18:05:36.586548       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m04"
	I0819 18:05:36.601512       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m04"
	I0819 18:05:37.235419       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m04"
	I0819 18:05:46.781675       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m04"
	I0819 18:06:28.676790       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m02"
	I0819 18:06:28.676967       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-086149-m04"
	I0819 18:06:28.701434       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m02"
	I0819 18:06:28.792920       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.179919ms"
	I0819 18:06:28.793219       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="48.676µs"
	I0819 18:06:29.186490       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m02"
	I0819 18:06:33.901862       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m02"
	
	
	==> kube-proxy [eb8cccc1568bbb207d2c7c285f3897a7a425cba60f4dfcf3e8daa8082fc38ef0] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 18:02:00.704338       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 18:02:00.716483       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.249"]
	E0819 18:02:00.716614       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 18:02:00.779410       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 18:02:00.779529       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 18:02:00.779616       1 server_linux.go:169] "Using iptables Proxier"
	I0819 18:02:00.785947       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 18:02:00.786306       1 server.go:483] "Version info" version="v1.31.0"
	I0819 18:02:00.786337       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 18:02:00.787880       1 config.go:197] "Starting service config controller"
	I0819 18:02:00.787929       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 18:02:00.787952       1 config.go:104] "Starting endpoint slice config controller"
	I0819 18:02:00.787959       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 18:02:00.792516       1 config.go:326] "Starting node config controller"
	I0819 18:02:00.792546       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 18:02:00.888032       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 18:02:00.888046       1 shared_informer.go:320] Caches are synced for service config
	I0819 18:02:00.892575       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d0e66231bf791048a9932068b5f28d8479613545885bea8e42cf9c79913ffccd] <==
	E0819 18:01:53.080178       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0819 18:01:53.109516       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 18:01:53.109646       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0819 18:01:53.176564       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 18:01:53.177436       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:01:53.432036       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 18:01:53.432205       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:01:53.436293       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0819 18:01:53.436338       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:01:53.438806       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 18:01:53.438845       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 18:01:53.498849       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 18:01:53.498955       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0819 18:01:55.179206       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 18:04:39.265941       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="6d2582b5-3fba-47da-8195-8e19e60aa593" pod="default/busybox-7dff88458-7t5wq" assumedNode="ha-086149-m03" currentNode="ha-086149-m02"
	E0819 18:04:39.285591       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-7t5wq\": pod busybox-7dff88458-7t5wq is already assigned to node \"ha-086149-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-7t5wq" node="ha-086149-m02"
	E0819 18:04:39.285704       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 6d2582b5-3fba-47da-8195-8e19e60aa593(default/busybox-7dff88458-7t5wq) was assumed on ha-086149-m02 but assigned to ha-086149-m03" pod="default/busybox-7dff88458-7t5wq"
	E0819 18:04:39.285739       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-7t5wq\": pod busybox-7dff88458-7t5wq is already assigned to node \"ha-086149-m03\"" pod="default/busybox-7dff88458-7t5wq"
	I0819 18:04:39.285788       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-7t5wq" node="ha-086149-m03"
	E0819 18:04:39.322665       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-fd2dw\": pod busybox-7dff88458-fd2dw is already assigned to node \"ha-086149\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-fd2dw" node="ha-086149"
	E0819 18:04:39.322837       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod f5e2f831-487f-4edb-b6c1-b391906a6d5b(default/busybox-7dff88458-fd2dw) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-fd2dw"
	E0819 18:04:39.322857       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-fd2dw\": pod busybox-7dff88458-fd2dw is already assigned to node \"ha-086149\"" pod="default/busybox-7dff88458-fd2dw"
	I0819 18:04:39.322879       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-fd2dw" node="ha-086149"
	E0819 18:04:39.328354       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-vgcdh\": pod busybox-7dff88458-vgcdh is already assigned to node \"ha-086149-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-vgcdh" node="ha-086149-m02"
	E0819 18:04:39.328444       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-vgcdh\": pod busybox-7dff88458-vgcdh is already assigned to node \"ha-086149-m02\"" pod="default/busybox-7dff88458-vgcdh"
	
	
	==> kubelet <==
	Aug 19 18:07:55 ha-086149 kubelet[1333]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 18:07:55 ha-086149 kubelet[1333]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 18:07:55 ha-086149 kubelet[1333]: E0819 18:07:55.433545    1333 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090875433297701,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:07:55 ha-086149 kubelet[1333]: E0819 18:07:55.433588    1333 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090875433297701,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:08:05 ha-086149 kubelet[1333]: E0819 18:08:05.436021    1333 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090885435512201,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:08:05 ha-086149 kubelet[1333]: E0819 18:08:05.436420    1333 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090885435512201,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:08:15 ha-086149 kubelet[1333]: E0819 18:08:15.438228    1333 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090895437746904,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:08:15 ha-086149 kubelet[1333]: E0819 18:08:15.438622    1333 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090895437746904,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:08:25 ha-086149 kubelet[1333]: E0819 18:08:25.440789    1333 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090905440475013,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:08:25 ha-086149 kubelet[1333]: E0819 18:08:25.440813    1333 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090905440475013,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:08:35 ha-086149 kubelet[1333]: E0819 18:08:35.442619    1333 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090915441995492,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:08:35 ha-086149 kubelet[1333]: E0819 18:08:35.442902    1333 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090915441995492,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:08:45 ha-086149 kubelet[1333]: E0819 18:08:45.445194    1333 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090925444727426,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:08:45 ha-086149 kubelet[1333]: E0819 18:08:45.445539    1333 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090925444727426,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:08:55 ha-086149 kubelet[1333]: E0819 18:08:55.295937    1333 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 18:08:55 ha-086149 kubelet[1333]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 18:08:55 ha-086149 kubelet[1333]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 18:08:55 ha-086149 kubelet[1333]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 18:08:55 ha-086149 kubelet[1333]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 18:08:55 ha-086149 kubelet[1333]: E0819 18:08:55.447505    1333 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090935446988091,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:08:55 ha-086149 kubelet[1333]: E0819 18:08:55.447529    1333 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090935446988091,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:09:05 ha-086149 kubelet[1333]: E0819 18:09:05.449577    1333 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090945449059953,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:09:05 ha-086149 kubelet[1333]: E0819 18:09:05.449610    1333 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090945449059953,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:09:15 ha-086149 kubelet[1333]: E0819 18:09:15.452033    1333 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090955451620506,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:09:15 ha-086149 kubelet[1333]: E0819 18:09:15.452569    1333 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090955451620506,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-086149 -n ha-086149
helpers_test.go:261: (dbg) Run:  kubectl --context ha-086149 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (61.78s)
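The post-mortem logs above show three recurring patterns: kube-proxy cannot clean up nftables rules ("Operation not supported") and falls back to the iptables proxier, the scheduler reports "already assigned to node" binding conflicts while the busybox pods settle onto nodes, and the kubelet eviction manager repeatedly fails to read image filesystem stats from CRI-O. A minimal sketch of commands that could be used to inspect each of these by hand, assuming the nft and crictl binaries are present in the guest image (the commands are standard, but the report does not confirm they were run here):

    # Check whether the guest kernel exposes nftables; "Operation not supported" here matches the kube-proxy cleanup errors.
    out/minikube-linux-amd64 -p ha-086149 ssh "sudo nft list tables"
    # Query CRI-O for the ImageFsInfo data the eviction manager complains about.
    out/minikube-linux-amd64 -p ha-086149 ssh "sudo crictl imagefsinfo"
    # Confirm where the busybox pods landed after the scheduler's binding races.
    kubectl --context ha-086149 get pods -n default -o wide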

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (412.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-086149 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-086149 -v=7 --alsologtostderr
E0819 18:10:24.365099  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/functional-499773/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:10:52.068761  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/functional-499773/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-086149 -v=7 --alsologtostderr: exit status 82 (2m1.924418538s)

                                                
                                                
-- stdout --
	* Stopping node "ha-086149-m04"  ...
	* Stopping node "ha-086149-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 18:09:24.863185  396645 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:09:24.863311  396645 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:09:24.863320  396645 out.go:358] Setting ErrFile to fd 2...
	I0819 18:09:24.863324  396645 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:09:24.863553  396645 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-372744/.minikube/bin
	I0819 18:09:24.863843  396645 out.go:352] Setting JSON to false
	I0819 18:09:24.863932  396645 mustload.go:65] Loading cluster: ha-086149
	I0819 18:09:24.864386  396645 config.go:182] Loaded profile config "ha-086149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:09:24.864506  396645 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/config.json ...
	I0819 18:09:24.864745  396645 mustload.go:65] Loading cluster: ha-086149
	I0819 18:09:24.864894  396645 config.go:182] Loaded profile config "ha-086149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:09:24.864921  396645 stop.go:39] StopHost: ha-086149-m04
	I0819 18:09:24.865385  396645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:09:24.865434  396645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:09:24.880603  396645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40519
	I0819 18:09:24.881159  396645 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:09:24.881802  396645 main.go:141] libmachine: Using API Version  1
	I0819 18:09:24.881832  396645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:09:24.882206  396645 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:09:24.884637  396645 out.go:177] * Stopping node "ha-086149-m04"  ...
	I0819 18:09:24.886367  396645 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0819 18:09:24.886405  396645 main.go:141] libmachine: (ha-086149-m04) Calling .DriverName
	I0819 18:09:24.886632  396645 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0819 18:09:24.886656  396645 main.go:141] libmachine: (ha-086149-m04) Calling .GetSSHHostname
	I0819 18:09:24.889515  396645 main.go:141] libmachine: (ha-086149-m04) DBG | domain ha-086149-m04 has defined MAC address 52:54:00:03:a4:7a in network mk-ha-086149
	I0819 18:09:24.889972  396645 main.go:141] libmachine: (ha-086149-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a4:7a", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:05:01 +0000 UTC Type:0 Mac:52:54:00:03:a4:7a Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-086149-m04 Clientid:01:52:54:00:03:a4:7a}
	I0819 18:09:24.890005  396645 main.go:141] libmachine: (ha-086149-m04) DBG | domain ha-086149-m04 has defined IP address 192.168.39.173 and MAC address 52:54:00:03:a4:7a in network mk-ha-086149
	I0819 18:09:24.890157  396645 main.go:141] libmachine: (ha-086149-m04) Calling .GetSSHPort
	I0819 18:09:24.890366  396645 main.go:141] libmachine: (ha-086149-m04) Calling .GetSSHKeyPath
	I0819 18:09:24.890539  396645 main.go:141] libmachine: (ha-086149-m04) Calling .GetSSHUsername
	I0819 18:09:24.890673  396645 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m04/id_rsa Username:docker}
	I0819 18:09:24.978223  396645 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0819 18:09:25.031945  396645 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0819 18:09:25.085790  396645 main.go:141] libmachine: Stopping "ha-086149-m04"...
	I0819 18:09:25.085841  396645 main.go:141] libmachine: (ha-086149-m04) Calling .GetState
	I0819 18:09:25.087292  396645 main.go:141] libmachine: (ha-086149-m04) Calling .Stop
	I0819 18:09:25.090688  396645 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 0/120
	I0819 18:09:26.309073  396645 main.go:141] libmachine: (ha-086149-m04) Calling .GetState
	I0819 18:09:26.310342  396645 main.go:141] libmachine: Machine "ha-086149-m04" was stopped.
	I0819 18:09:26.310364  396645 stop.go:75] duration metric: took 1.423998877s to stop
	I0819 18:09:26.310386  396645 stop.go:39] StopHost: ha-086149-m03
	I0819 18:09:26.310702  396645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:09:26.310750  396645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:09:26.326947  396645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40201
	I0819 18:09:26.327440  396645 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:09:26.327954  396645 main.go:141] libmachine: Using API Version  1
	I0819 18:09:26.327977  396645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:09:26.328371  396645 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:09:26.330640  396645 out.go:177] * Stopping node "ha-086149-m03"  ...
	I0819 18:09:26.332060  396645 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0819 18:09:26.332093  396645 main.go:141] libmachine: (ha-086149-m03) Calling .DriverName
	I0819 18:09:26.332394  396645 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0819 18:09:26.332425  396645 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHHostname
	I0819 18:09:26.335551  396645 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:09:26.336079  396645 main.go:141] libmachine: (ha-086149-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:29:16", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:03:35 +0000 UTC Type:0 Mac:52:54:00:dc:29:16 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-086149-m03 Clientid:01:52:54:00:dc:29:16}
	I0819 18:09:26.336113  396645 main.go:141] libmachine: (ha-086149-m03) DBG | domain ha-086149-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:dc:29:16 in network mk-ha-086149
	I0819 18:09:26.336253  396645 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHPort
	I0819 18:09:26.336427  396645 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHKeyPath
	I0819 18:09:26.336573  396645 main.go:141] libmachine: (ha-086149-m03) Calling .GetSSHUsername
	I0819 18:09:26.336746  396645 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m03/id_rsa Username:docker}
	I0819 18:09:26.420198  396645 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0819 18:09:26.474447  396645 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0819 18:09:26.533536  396645 main.go:141] libmachine: Stopping "ha-086149-m03"...
	I0819 18:09:26.533573  396645 main.go:141] libmachine: (ha-086149-m03) Calling .GetState
	I0819 18:09:26.535526  396645 main.go:141] libmachine: (ha-086149-m03) Calling .Stop
	I0819 18:09:26.539560  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 0/120
	I0819 18:09:27.541121  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 1/120
	I0819 18:09:28.542505  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 2/120
	I0819 18:09:29.543835  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 3/120
	I0819 18:09:30.545099  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 4/120
	I0819 18:09:31.547095  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 5/120
	I0819 18:09:32.548724  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 6/120
	I0819 18:09:33.550488  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 7/120
	I0819 18:09:34.551911  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 8/120
	I0819 18:09:35.554360  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 9/120
	I0819 18:09:36.556351  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 10/120
	I0819 18:09:37.558212  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 11/120
	I0819 18:09:38.559518  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 12/120
	I0819 18:09:39.561175  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 13/120
	I0819 18:09:40.562652  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 14/120
	I0819 18:09:41.564628  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 15/120
	I0819 18:09:42.566415  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 16/120
	I0819 18:09:43.567919  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 17/120
	I0819 18:09:44.569572  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 18/120
	I0819 18:09:45.570903  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 19/120
	I0819 18:09:46.573027  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 20/120
	I0819 18:09:47.574786  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 21/120
	I0819 18:09:48.576680  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 22/120
	I0819 18:09:49.578418  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 23/120
	I0819 18:09:50.580050  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 24/120
	I0819 18:09:51.582174  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 25/120
	I0819 18:09:52.583518  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 26/120
	I0819 18:09:53.585141  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 27/120
	I0819 18:09:54.586831  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 28/120
	I0819 18:09:55.588531  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 29/120
	I0819 18:09:56.590458  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 30/120
	I0819 18:09:57.591913  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 31/120
	I0819 18:09:58.593422  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 32/120
	I0819 18:09:59.594790  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 33/120
	I0819 18:10:00.596191  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 34/120
	I0819 18:10:01.598056  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 35/120
	I0819 18:10:02.599408  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 36/120
	I0819 18:10:03.601665  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 37/120
	I0819 18:10:04.603145  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 38/120
	I0819 18:10:05.604611  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 39/120
	I0819 18:10:06.606800  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 40/120
	I0819 18:10:07.608279  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 41/120
	I0819 18:10:08.610073  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 42/120
	I0819 18:10:09.611709  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 43/120
	I0819 18:10:10.613280  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 44/120
	I0819 18:10:11.614789  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 45/120
	I0819 18:10:12.616318  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 46/120
	I0819 18:10:13.617798  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 47/120
	I0819 18:10:14.619174  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 48/120
	I0819 18:10:15.620529  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 49/120
	I0819 18:10:16.622344  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 50/120
	I0819 18:10:17.623768  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 51/120
	I0819 18:10:18.625229  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 52/120
	I0819 18:10:19.626517  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 53/120
	I0819 18:10:20.627908  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 54/120
	I0819 18:10:21.629233  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 55/120
	I0819 18:10:22.630545  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 56/120
	I0819 18:10:23.632007  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 57/120
	I0819 18:10:24.633349  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 58/120
	I0819 18:10:25.634713  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 59/120
	I0819 18:10:26.636668  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 60/120
	I0819 18:10:27.637977  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 61/120
	I0819 18:10:28.639351  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 62/120
	I0819 18:10:29.640861  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 63/120
	I0819 18:10:30.642307  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 64/120
	I0819 18:10:31.644206  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 65/120
	I0819 18:10:32.646268  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 66/120
	I0819 18:10:33.647801  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 67/120
	I0819 18:10:34.649386  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 68/120
	I0819 18:10:35.650869  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 69/120
	I0819 18:10:36.652734  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 70/120
	I0819 18:10:37.654215  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 71/120
	I0819 18:10:38.655640  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 72/120
	I0819 18:10:39.657105  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 73/120
	I0819 18:10:40.658596  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 74/120
	I0819 18:10:41.660527  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 75/120
	I0819 18:10:42.662210  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 76/120
	I0819 18:10:43.663506  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 77/120
	I0819 18:10:44.664914  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 78/120
	I0819 18:10:45.667221  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 79/120
	I0819 18:10:46.669130  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 80/120
	I0819 18:10:47.670396  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 81/120
	I0819 18:10:48.671693  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 82/120
	I0819 18:10:49.672898  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 83/120
	I0819 18:10:50.674218  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 84/120
	I0819 18:10:51.676092  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 85/120
	I0819 18:10:52.677582  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 86/120
	I0819 18:10:53.679002  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 87/120
	I0819 18:10:54.680282  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 88/120
	I0819 18:10:55.681821  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 89/120
	I0819 18:10:56.683615  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 90/120
	I0819 18:10:57.685050  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 91/120
	I0819 18:10:58.686306  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 92/120
	I0819 18:10:59.687761  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 93/120
	I0819 18:11:00.689378  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 94/120
	I0819 18:11:01.691279  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 95/120
	I0819 18:11:02.693529  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 96/120
	I0819 18:11:03.694953  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 97/120
	I0819 18:11:04.696398  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 98/120
	I0819 18:11:05.698317  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 99/120
	I0819 18:11:06.700189  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 100/120
	I0819 18:11:07.701720  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 101/120
	I0819 18:11:08.703474  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 102/120
	I0819 18:11:09.704974  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 103/120
	I0819 18:11:10.706819  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 104/120
	I0819 18:11:11.708795  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 105/120
	I0819 18:11:12.710229  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 106/120
	I0819 18:11:13.711855  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 107/120
	I0819 18:11:14.713287  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 108/120
	I0819 18:11:15.715081  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 109/120
	I0819 18:11:16.717207  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 110/120
	I0819 18:11:17.718715  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 111/120
	I0819 18:11:18.720036  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 112/120
	I0819 18:11:19.721490  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 113/120
	I0819 18:11:20.722896  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 114/120
	I0819 18:11:21.724861  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 115/120
	I0819 18:11:22.726568  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 116/120
	I0819 18:11:23.728023  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 117/120
	I0819 18:11:24.730246  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 118/120
	I0819 18:11:25.731593  396645 main.go:141] libmachine: (ha-086149-m03) Waiting for machine to stop 119/120
	I0819 18:11:26.732582  396645 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0819 18:11:26.732675  396645 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0819 18:11:26.734945  396645 out.go:201] 
	W0819 18:11:26.736352  396645 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0819 18:11:26.736383  396645 out.go:270] * 
	* 
	W0819 18:11:26.739897  396645 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 18:11:26.741305  396645 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-086149 -v=7 --alsologtostderr" : exit status 82
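Exit status 82 is minikube's GUEST_STOP_TIMEOUT: node ha-086149-m03 never shut down within the 120 polling attempts shown above, while ha-086149-m04 stopped in about a second. When this happens with the kvm2 driver, the backing libvirt domain can be inspected and, if necessary, powered off on the host. A minimal sketch, assuming the libvirt domain name matches the node name as the DBG lines above suggest (this is a manual workaround, not something the test performs):

    # List the libvirt domains for the profile and their power state.
    sudo virsh list --all
    # Hard power-off the stuck control-plane VM if a graceful stop never completes.
    sudo virsh destroy ha-086149-m03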
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-086149 --wait=true -v=7 --alsologtostderr
E0819 18:12:10.115452  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:13:33.181449  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:15:24.366099  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/functional-499773/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-086149 --wait=true -v=7 --alsologtostderr: (4m47.925461201s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-086149
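The start --wait=true above rebuilt the cluster in under five minutes, so the recorded failure traces back to the earlier stop timeout (ha_test.go:464) rather than the restart itself. A minimal sketch of how the restarted cluster could be verified by hand, using standard minikube and kubectl commands that are not part of the test flow:

    # Overall profile health as minikube sees it, per node.
    out/minikube-linux-amd64 -p ha-086149 status
    # Confirm the three control-plane nodes and the worker are Ready again.
    kubectl --context ha-086149 get nodes -o wide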
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-086149 -n ha-086149
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-086149 logs -n 25: (2.08661632s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-086149 cp ha-086149-m03:/home/docker/cp-test.txt                              | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149-m02:/home/docker/cp-test_ha-086149-m03_ha-086149-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-086149 ssh -n                                                                 | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-086149 ssh -n ha-086149-m02 sudo cat                                          | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | /home/docker/cp-test_ha-086149-m03_ha-086149-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-086149 cp ha-086149-m03:/home/docker/cp-test.txt                              | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149-m04:/home/docker/cp-test_ha-086149-m03_ha-086149-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-086149 ssh -n                                                                 | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-086149 ssh -n ha-086149-m04 sudo cat                                          | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | /home/docker/cp-test_ha-086149-m03_ha-086149-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-086149 cp testdata/cp-test.txt                                                | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-086149 ssh -n                                                                 | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-086149 cp ha-086149-m04:/home/docker/cp-test.txt                              | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3465103634/001/cp-test_ha-086149-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-086149 ssh -n                                                                 | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-086149 cp ha-086149-m04:/home/docker/cp-test.txt                              | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149:/home/docker/cp-test_ha-086149-m04_ha-086149.txt                       |           |         |         |                     |                     |
	| ssh     | ha-086149 ssh -n                                                                 | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-086149 ssh -n ha-086149 sudo cat                                              | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | /home/docker/cp-test_ha-086149-m04_ha-086149.txt                                 |           |         |         |                     |                     |
	| cp      | ha-086149 cp ha-086149-m04:/home/docker/cp-test.txt                              | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149-m02:/home/docker/cp-test_ha-086149-m04_ha-086149-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-086149 ssh -n                                                                 | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-086149 ssh -n ha-086149-m02 sudo cat                                          | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | /home/docker/cp-test_ha-086149-m04_ha-086149-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-086149 cp ha-086149-m04:/home/docker/cp-test.txt                              | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149-m03:/home/docker/cp-test_ha-086149-m04_ha-086149-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-086149 ssh -n                                                                 | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-086149 ssh -n ha-086149-m03 sudo cat                                          | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | /home/docker/cp-test_ha-086149-m04_ha-086149-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-086149 node stop m02 -v=7                                                     | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-086149 node start m02 -v=7                                                    | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:08 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-086149 -v=7                                                           | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:09 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-086149 -v=7                                                                | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:09 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-086149 --wait=true -v=7                                                    | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:16 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-086149                                                                | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:16 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 18:11:26
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 18:11:26.790163  397087 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:11:26.790285  397087 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:11:26.790294  397087 out.go:358] Setting ErrFile to fd 2...
	I0819 18:11:26.790299  397087 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:11:26.790509  397087 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-372744/.minikube/bin
	I0819 18:11:26.791095  397087 out.go:352] Setting JSON to false
	I0819 18:11:26.792211  397087 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":6830,"bootTime":1724084257,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 18:11:26.792279  397087 start.go:139] virtualization: kvm guest
	I0819 18:11:26.794666  397087 out.go:177] * [ha-086149] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 18:11:26.796373  397087 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 18:11:26.796412  397087 notify.go:220] Checking for updates...
	I0819 18:11:26.799215  397087 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 18:11:26.800518  397087 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 18:11:26.801734  397087 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 18:11:26.802834  397087 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 18:11:26.803999  397087 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 18:11:26.805744  397087 config.go:182] Loaded profile config "ha-086149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:11:26.805842  397087 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 18:11:26.806227  397087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:11:26.806287  397087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:11:26.821836  397087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43047
	I0819 18:11:26.822281  397087 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:11:26.822831  397087 main.go:141] libmachine: Using API Version  1
	I0819 18:11:26.822851  397087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:11:26.823230  397087 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:11:26.823448  397087 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:11:26.859039  397087 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 18:11:26.860288  397087 start.go:297] selected driver: kvm2
	I0819 18:11:26.860313  397087 start.go:901] validating driver "kvm2" against &{Name:ha-086149 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.0 ClusterName:ha-086149 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.167 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.121 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.173 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false e
fk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:11:26.860510  397087 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 18:11:26.860860  397087 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:11:26.860955  397087 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19468-372744/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 18:11:26.876215  397087 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 18:11:26.876931  397087 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 18:11:26.876975  397087 cni.go:84] Creating CNI manager for ""
	I0819 18:11:26.876984  397087 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0819 18:11:26.877047  397087 start.go:340] cluster config:
	{Name:ha-086149 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-086149 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.167 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.121 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.173 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-ti
ller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
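
The cluster config dumped above describes an HA profile: three control-plane nodes (192.168.39.249, 192.168.39.167, 192.168.39.121), one worker (m04, 192.168.39.173), and the API-server VIP 192.168.39.254 fronting port 8443. A minimal way to confirm that topology from the host once the restart finishes, assuming the kubeconfig context carries the profile name (illustrative commands, not part of this run):

    # list the four nodes and their roles (sketch)
    kubectl --context ha-086149 get nodes -o wide
    # the reported control-plane endpoint should be the VIP
    kubectl --context ha-086149 cluster-info
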
	I0819 18:11:26.877195  397087 iso.go:125] acquiring lock: {Name:mk4c0ac1c3202b1a296739df622960e7a0bd8566 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:11:26.880367  397087 out.go:177] * Starting "ha-086149" primary control-plane node in "ha-086149" cluster
	I0819 18:11:26.881677  397087 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 18:11:26.881723  397087 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 18:11:26.881734  397087 cache.go:56] Caching tarball of preloaded images
	I0819 18:11:26.881818  397087 preload.go:172] Found /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 18:11:26.881830  397087 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 18:11:26.881964  397087 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/config.json ...
	I0819 18:11:26.882188  397087 start.go:360] acquireMachinesLock for ha-086149: {Name:mk24ba67a747357e9ce40f1e460d2bb0bc59cc75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 18:11:26.882247  397087 start.go:364] duration metric: took 37.695µs to acquireMachinesLock for "ha-086149"
	I0819 18:11:26.882268  397087 start.go:96] Skipping create...Using existing machine configuration
	I0819 18:11:26.882285  397087 fix.go:54] fixHost starting: 
	I0819 18:11:26.882566  397087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:11:26.882619  397087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:11:26.897044  397087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46039
	I0819 18:11:26.897553  397087 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:11:26.898124  397087 main.go:141] libmachine: Using API Version  1
	I0819 18:11:26.898162  397087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:11:26.898472  397087 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:11:26.898657  397087 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:11:26.898848  397087 main.go:141] libmachine: (ha-086149) Calling .GetState
	I0819 18:11:26.900433  397087 fix.go:112] recreateIfNeeded on ha-086149: state=Running err=<nil>
	W0819 18:11:26.900453  397087 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 18:11:26.902377  397087 out.go:177] * Updating the running kvm2 "ha-086149" VM ...
	I0819 18:11:26.903765  397087 machine.go:93] provisionDockerMachine start ...
	I0819 18:11:26.903790  397087 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:11:26.904074  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:11:26.906649  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:11:26.907111  397087 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:11:26.907140  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:11:26.907303  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:11:26.907480  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:11:26.907634  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:11:26.907769  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:11:26.907932  397087 main.go:141] libmachine: Using SSH client type: native
	I0819 18:11:26.908147  397087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0819 18:11:26.908159  397087 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 18:11:27.013130  397087 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-086149
	
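provisionDockerMachine drives the guest entirely over SSH with the per-machine key shown further down in this log. The same probe can be reproduced by hand; a sketch assuming the default buildroot user docker (not part of this run):

    # via the minikube wrapper
    minikube ssh -p ha-086149 "hostname"
    # or directly with the machine key from the log
    ssh -i /home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/id_rsa docker@192.168.39.249 hostname
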
	I0819 18:11:27.013172  397087 main.go:141] libmachine: (ha-086149) Calling .GetMachineName
	I0819 18:11:27.013473  397087 buildroot.go:166] provisioning hostname "ha-086149"
	I0819 18:11:27.013504  397087 main.go:141] libmachine: (ha-086149) Calling .GetMachineName
	I0819 18:11:27.013719  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:11:27.016426  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:11:27.016835  397087 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:11:27.016863  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:11:27.017018  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:11:27.017231  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:11:27.017381  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:11:27.017515  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:11:27.017672  397087 main.go:141] libmachine: Using SSH client type: native
	I0819 18:11:27.017906  397087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0819 18:11:27.017923  397087 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-086149 && echo "ha-086149" | sudo tee /etc/hostname
	I0819 18:11:27.141797  397087 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-086149
	
	I0819 18:11:27.141826  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:11:27.144440  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:11:27.144771  397087 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:11:27.144804  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:11:27.145009  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:11:27.145202  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:11:27.145361  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:11:27.145536  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:11:27.145701  397087 main.go:141] libmachine: Using SSH client type: native
	I0819 18:11:27.145879  397087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0819 18:11:27.145895  397087 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-086149' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-086149/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-086149' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 18:11:27.257185  397087 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 18:11:27.257235  397087 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19468-372744/.minikube CaCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19468-372744/.minikube}
	I0819 18:11:27.257274  397087 buildroot.go:174] setting up certificates
	I0819 18:11:27.257283  397087 provision.go:84] configureAuth start
	I0819 18:11:27.257296  397087 main.go:141] libmachine: (ha-086149) Calling .GetMachineName
	I0819 18:11:27.257578  397087 main.go:141] libmachine: (ha-086149) Calling .GetIP
	I0819 18:11:27.260335  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:11:27.260693  397087 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:11:27.260718  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:11:27.260865  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:11:27.263806  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:11:27.264249  397087 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:11:27.264279  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:11:27.264389  397087 provision.go:143] copyHostCerts
	I0819 18:11:27.264425  397087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 18:11:27.264504  397087 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem, removing ...
	I0819 18:11:27.264524  397087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 18:11:27.264609  397087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem (1082 bytes)
	I0819 18:11:27.264740  397087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 18:11:27.264771  397087 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem, removing ...
	I0819 18:11:27.264778  397087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 18:11:27.264827  397087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem (1123 bytes)
	I0819 18:11:27.264907  397087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 18:11:27.264925  397087 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem, removing ...
	I0819 18:11:27.264932  397087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 18:11:27.264957  397087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem (1675 bytes)
	I0819 18:11:27.265023  397087 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem org=jenkins.ha-086149 san=[127.0.0.1 192.168.39.249 ha-086149 localhost minikube]
	I0819 18:11:27.390873  397087 provision.go:177] copyRemoteCerts
	I0819 18:11:27.390944  397087 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 18:11:27.390970  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:11:27.393739  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:11:27.394178  397087 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:11:27.394216  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:11:27.394341  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:11:27.394539  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:11:27.394735  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:11:27.394832  397087 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/id_rsa Username:docker}
	I0819 18:11:27.478663  397087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 18:11:27.478751  397087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 18:11:27.505660  397087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 18:11:27.505762  397087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0819 18:11:27.533045  397087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 18:11:27.533128  397087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 18:11:27.564294  397087 provision.go:87] duration metric: took 306.994273ms to configureAuth
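
configureAuth regenerated the machine server certificate (SANs listed at 18:11:27.265023) and copied ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A quick hedged check that the three files landed (illustrative, not part of this run):

    minikube ssh -p ha-086149 "sudo ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem"
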
	I0819 18:11:27.564324  397087 buildroot.go:189] setting minikube options for container-runtime
	I0819 18:11:27.564601  397087 config.go:182] Loaded profile config "ha-086149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:11:27.564711  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:11:27.567394  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:11:27.567789  397087 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:11:27.567818  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:11:27.567989  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:11:27.568204  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:11:27.568381  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:11:27.568533  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:11:27.568694  397087 main.go:141] libmachine: Using SSH client type: native
	I0819 18:11:27.568911  397087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0819 18:11:27.568938  397087 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 18:12:58.395407  397087 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 18:12:58.395460  397087 machine.go:96] duration metric: took 1m31.491678222s to provisionDockerMachine
	I0819 18:12:58.395481  397087 start.go:293] postStartSetup for "ha-086149" (driver="kvm2")
	I0819 18:12:58.395496  397087 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 18:12:58.395525  397087 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:12:58.395908  397087 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 18:12:58.395946  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:12:58.399108  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:12:58.399509  397087 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:12:58.399538  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:12:58.399716  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:12:58.399897  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:12:58.400179  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:12:58.400335  397087 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/id_rsa Username:docker}
	I0819 18:12:58.483619  397087 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 18:12:58.487929  397087 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 18:12:58.487955  397087 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/addons for local assets ...
	I0819 18:12:58.488013  397087 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/files for local assets ...
	I0819 18:12:58.488091  397087 filesync.go:149] local asset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> 3800092.pem in /etc/ssl/certs
	I0819 18:12:58.488107  397087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> /etc/ssl/certs/3800092.pem
	I0819 18:12:58.488191  397087 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 18:12:58.497553  397087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 18:12:58.522327  397087 start.go:296] duration metric: took 126.830228ms for postStartSetup
	I0819 18:12:58.522380  397087 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:12:58.522680  397087 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0819 18:12:58.522722  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:12:58.525614  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:12:58.525952  397087 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:12:58.525977  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:12:58.526186  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:12:58.526376  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:12:58.526538  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:12:58.526687  397087 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/id_rsa Username:docker}
	W0819 18:12:58.606786  397087 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0819 18:12:58.606817  397087 fix.go:56] duration metric: took 1m31.724542331s for fixHost
	I0819 18:12:58.606841  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:12:58.609477  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:12:58.609879  397087 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:12:58.609905  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:12:58.610052  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:12:58.610266  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:12:58.610412  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:12:58.610556  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:12:58.610697  397087 main.go:141] libmachine: Using SSH client type: native
	I0819 18:12:58.610881  397087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0819 18:12:58.610892  397087 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 18:12:58.712750  397087 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724091178.663822557
	
	I0819 18:12:58.712775  397087 fix.go:216] guest clock: 1724091178.663822557
	I0819 18:12:58.712782  397087 fix.go:229] Guest: 2024-08-19 18:12:58.663822557 +0000 UTC Remote: 2024-08-19 18:12:58.606825553 +0000 UTC m=+91.854126584 (delta=56.997004ms)
	I0819 18:12:58.712802  397087 fix.go:200] guest clock delta is within tolerance: 56.997004ms
	I0819 18:12:58.712807  397087 start.go:83] releasing machines lock for "ha-086149", held for 1m31.830548944s
	I0819 18:12:58.712825  397087 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:12:58.713130  397087 main.go:141] libmachine: (ha-086149) Calling .GetIP
	I0819 18:12:58.715596  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:12:58.715988  397087 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:12:58.716015  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:12:58.716186  397087 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:12:58.716784  397087 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:12:58.716968  397087 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:12:58.717063  397087 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 18:12:58.717134  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:12:58.717199  397087 ssh_runner.go:195] Run: cat /version.json
	I0819 18:12:58.717219  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:12:58.719832  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:12:58.720111  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:12:58.720146  397087 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:12:58.720164  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:12:58.720278  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:12:58.720538  397087 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:12:58.720540  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:12:58.720563  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:12:58.720715  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:12:58.720739  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:12:58.720883  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:12:58.720908  397087 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/id_rsa Username:docker}
	I0819 18:12:58.721013  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:12:58.721220  397087 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/id_rsa Username:docker}
	I0819 18:12:58.827300  397087 ssh_runner.go:195] Run: systemctl --version
	I0819 18:12:58.833614  397087 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 18:12:59.001910  397087 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 18:12:59.011040  397087 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 18:12:59.011116  397087 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 18:12:59.020702  397087 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
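
The find invocation above is logged with its shell metacharacters unescaped. Typed at an interactive shell, the equivalent disable step needs the parentheses and globs quoted; a sketch of the same command (functionally equivalent re-quoting, relying on GNU find substituting {} inside the sh -c string):

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf "%p, " -exec sh -c 'sudo mv {} {}.mk_disabled' \;
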
	I0819 18:12:59.020736  397087 start.go:495] detecting cgroup driver to use...
	I0819 18:12:59.020803  397087 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 18:12:59.036530  397087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 18:12:59.050394  397087 docker.go:217] disabling cri-docker service (if available) ...
	I0819 18:12:59.050475  397087 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 18:12:59.063866  397087 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 18:12:59.076972  397087 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 18:12:59.230890  397087 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 18:12:59.380358  397087 docker.go:233] disabling docker service ...
	I0819 18:12:59.380448  397087 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 18:12:59.396879  397087 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 18:12:59.411168  397087 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 18:12:59.560874  397087 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 18:12:59.707454  397087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 18:12:59.721622  397087 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 18:12:59.740982  397087 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 18:12:59.741039  397087 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:12:59.751763  397087 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 18:12:59.751862  397087 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:12:59.762338  397087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:12:59.772603  397087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:12:59.782855  397087 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 18:12:59.793221  397087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:12:59.803640  397087 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:12:59.815181  397087 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:12:59.825280  397087 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 18:12:59.834950  397087 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 18:12:59.844552  397087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:12:59.986845  397087 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 18:13:05.808418  397087 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.821526153s)
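
Taken together, the edits at 18:12:59 point crictl at the CRI-O socket and leave the 02-crio.conf drop-in with (at least) the settings below before the restart. This is reconstructed from the sed/grep commands above, with section placement assumed from the upstream cri-o config layout rather than read back from the guest:

    # /etc/crictl.yaml
    runtime-endpoint: unix:///var/run/crio/crio.sock

    # /etc/crio/crio.conf.d/02-crio.conf (relevant keys)
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
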
	I0819 18:13:05.808456  397087 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 18:13:05.808515  397087 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 18:13:05.813721  397087 start.go:563] Will wait 60s for crictl version
	I0819 18:13:05.813792  397087 ssh_runner.go:195] Run: which crictl
	I0819 18:13:05.818030  397087 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 18:13:05.855021  397087 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 18:13:05.855114  397087 ssh_runner.go:195] Run: crio --version
	I0819 18:13:05.883731  397087 ssh_runner.go:195] Run: crio --version
	I0819 18:13:05.915398  397087 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 18:13:05.916896  397087 main.go:141] libmachine: (ha-086149) Calling .GetIP
	I0819 18:13:05.919751  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:13:05.920125  397087 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:13:05.920154  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:13:05.920388  397087 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 18:13:05.925474  397087 kubeadm.go:883] updating cluster {Name:ha-086149 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:ha-086149 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.167 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.121 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.173 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fr
eshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mo
untGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 18:13:05.925636  397087 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 18:13:05.925689  397087 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 18:13:05.971864  397087 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 18:13:05.971892  397087 crio.go:433] Images already preloaded, skipping extraction
	I0819 18:13:05.971984  397087 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 18:13:06.018045  397087 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 18:13:06.018077  397087 cache_images.go:84] Images are preloaded, skipping loading
	I0819 18:13:06.018093  397087 kubeadm.go:934] updating node { 192.168.39.249 8443 v1.31.0 crio true true} ...
	I0819 18:13:06.018218  397087 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-086149 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.249
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-086149 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 18:13:06.018305  397087 ssh_runner.go:195] Run: crio config
	I0819 18:13:06.069464  397087 cni.go:84] Creating CNI manager for ""
	I0819 18:13:06.069488  397087 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0819 18:13:06.069502  397087 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 18:13:06.069524  397087 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.249 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-086149 NodeName:ha-086149 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.249"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.249 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 18:13:06.069658  397087 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.249
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-086149"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.249
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.249"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
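The kubeadm config rendered above is only staged at this point; the scp step below writes it to /var/tmp/minikube/kubeadm.yaml.new (2153 bytes). One hedged way to see how it differs from whatever is already on the node (the un-suffixed path is an assumption, it does not appear in this log):

    minikube ssh -p ha-086149 "sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new"
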
	I0819 18:13:06.069681  397087 kube-vip.go:115] generating kube-vip config ...
	I0819 18:13:06.069733  397087 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 18:13:06.081760  397087 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 18:13:06.081881  397087 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
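
This static pod manifest is what keeps the API-server VIP 192.168.39.254 pinned to whichever control-plane node holds the plndr-cp-lock lease and load-balances port 8443 across the three control planes. A hedged liveness check once the node is back up (run from the host on the libvirt network; not part of this run):

    # the elected leader should carry the VIP on eth0
    minikube ssh -p ha-086149 "ip addr show eth0" | grep 192.168.39.254
    # the apiserver should answer on the VIP (TLS verification skipped)
    curl -sk https://192.168.39.254:8443/version
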
	I0819 18:13:06.081937  397087 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 18:13:06.092095  397087 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 18:13:06.092161  397087 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0819 18:13:06.101668  397087 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0819 18:13:06.119159  397087 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 18:13:06.135928  397087 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0819 18:13:06.152409  397087 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0819 18:13:06.168938  397087 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0819 18:13:06.173820  397087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:13:06.325226  397087 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 18:13:06.339965  397087 certs.go:68] Setting up /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149 for IP: 192.168.39.249
	I0819 18:13:06.339996  397087 certs.go:194] generating shared ca certs ...
	I0819 18:13:06.340020  397087 certs.go:226] acquiring lock for ca certs: {Name:mk639e03f593e0bccac045f6e9f5ba3b96cc81e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:13:06.340217  397087 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key
	I0819 18:13:06.340299  397087 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key
	I0819 18:13:06.340318  397087 certs.go:256] generating profile certs ...
	I0819 18:13:06.340424  397087 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/client.key
	I0819 18:13:06.340461  397087 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key.0421d9d8
	I0819 18:13:06.340482  397087 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt.0421d9d8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.249 192.168.39.167 192.168.39.121 192.168.39.254]
	I0819 18:13:06.530153  397087 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt.0421d9d8 ...
	I0819 18:13:06.530189  397087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt.0421d9d8: {Name:mk99868fe2b76b367216e96c32af4ec27110846d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:13:06.530368  397087 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key.0421d9d8 ...
	I0819 18:13:06.530382  397087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key.0421d9d8: {Name:mk2c0f96ce4c77a08f0c0939f37c4fbbed2e333d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:13:06.530454  397087 certs.go:381] copying /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt.0421d9d8 -> /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt
	I0819 18:13:06.530641  397087 certs.go:385] copying /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key.0421d9d8 -> /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key
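
The regenerated apiserver certificate covers the in-cluster service IPs, loopback, the VIP and every control-plane IP (SAN list at 18:13:06.340482). Once the file is copied to /var/lib/minikube/certs/apiserver.crt further down, the SANs can be confirmed with openssl; a sketch (not part of this run):

    minikube ssh -p ha-086149 "sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text" | grep -A1 'Subject Alternative Name'
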
	I0819 18:13:06.530778  397087 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/proxy-client.key
	I0819 18:13:06.530809  397087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 18:13:06.530826  397087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 18:13:06.530841  397087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 18:13:06.530853  397087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 18:13:06.530866  397087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 18:13:06.530883  397087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 18:13:06.530901  397087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 18:13:06.530911  397087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 18:13:06.530956  397087 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem (1338 bytes)
	W0819 18:13:06.530986  397087 certs.go:480] ignoring /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009_empty.pem, impossibly tiny 0 bytes
	I0819 18:13:06.530995  397087 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 18:13:06.531019  397087 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem (1082 bytes)
	I0819 18:13:06.531041  397087 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem (1123 bytes)
	I0819 18:13:06.531062  397087 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem (1675 bytes)
	I0819 18:13:06.531098  397087 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 18:13:06.531124  397087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> /usr/share/ca-certificates/3800092.pem
	I0819 18:13:06.531139  397087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:13:06.531152  397087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem -> /usr/share/ca-certificates/380009.pem
	I0819 18:13:06.531741  397087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 18:13:06.558053  397087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 18:13:06.583054  397087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 18:13:06.607651  397087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 18:13:06.634144  397087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0819 18:13:06.658012  397087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 18:13:06.682158  397087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 18:13:06.706631  397087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 18:13:06.730952  397087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /usr/share/ca-certificates/3800092.pem (1708 bytes)
	I0819 18:13:06.755061  397087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 18:13:06.779260  397087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem --> /usr/share/ca-certificates/380009.pem (1338 bytes)
	I0819 18:13:06.805310  397087 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 18:13:06.832120  397087 ssh_runner.go:195] Run: openssl version
	I0819 18:13:06.845693  397087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/380009.pem && ln -fs /usr/share/ca-certificates/380009.pem /etc/ssl/certs/380009.pem"
	I0819 18:13:06.864029  397087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/380009.pem
	I0819 18:13:06.876056  397087 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:56 /usr/share/ca-certificates/380009.pem
	I0819 18:13:06.876116  397087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/380009.pem
	I0819 18:13:06.883958  397087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/380009.pem /etc/ssl/certs/51391683.0"
	I0819 18:13:06.912663  397087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3800092.pem && ln -fs /usr/share/ca-certificates/3800092.pem /etc/ssl/certs/3800092.pem"
	I0819 18:13:06.938548  397087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3800092.pem
	I0819 18:13:06.945514  397087 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:56 /usr/share/ca-certificates/3800092.pem
	I0819 18:13:06.945576  397087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3800092.pem
	I0819 18:13:06.953177  397087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3800092.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 18:13:06.972664  397087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 18:13:06.986664  397087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:13:06.991565  397087 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:13:06.991624  397087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:13:06.997458  397087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
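	(Annotation, not part of the captured log.) The three cycles above install each PEM into the guest trust store: hash the cert with openssl, then create an idempotent /etc/ssl/certs/<hash>.0 symlink. A small Go sketch of that "hash then symlink" pattern follows; it is an assumption for illustration (it shells out to an openssl binary on PATH and uses local paths), not the minikube implementation.

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkByHash(pemPath, certDir string) error {
	// `openssl x509 -hash -noout` prints the subject hash that names the
	// /etc/ssl/certs/<hash>.0 symlink OpenSSL-based clients look up.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certDir, strings.TrimSpace(string(out))+".0")
	if _, err := os.Lstat(link); err == nil {
		// Symlink already present, mirroring the `test -L ... || ln -fs ...` guard in the log.
		return nil
	}
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("linked")
}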
	I0819 18:13:07.008703  397087 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 18:13:07.018016  397087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 18:13:07.024635  397087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 18:13:07.030997  397087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 18:13:07.037077  397087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 18:13:07.043463  397087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 18:13:07.049164  397087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
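	(Annotation, not part of the captured log.) Each `openssl x509 -checkend 86400` probe above succeeds only if the cert is still valid 24 hours from now. A minimal Go equivalent using crypto/x509 is sketched below, assuming the same on-disk path; it is illustrative only, not the code the test runs.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// validFor reports whether the PEM cert at path is still valid d from now,
// matching what `openssl x509 -checkend <seconds>` verifies.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	// Same 86400-second window the log checks for the apiserver, etcd and front-proxy certs.
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("valid for 24h:", ok)
}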
	I0819 18:13:07.055094  397087 kubeadm.go:392] StartCluster: {Name:ha-086149 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-086149 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.167 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.121 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.173 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:13:07.055231  397087 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 18:13:07.055287  397087 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 18:13:07.115004  397087 cri.go:89] found id: "ebc9aecd46ff7854b20e6b5fd38c6125d892096e8032a7c50445c7130f92158f"
	I0819 18:13:07.115030  397087 cri.go:89] found id: "6dd571ecf979fd6b33d2d3a930406edcad4fc4673aef14b144d3919400614448"
	I0819 18:13:07.115034  397087 cri.go:89] found id: "ccb73fc4640a2d71e367fe2751278531cdb9da26a96f1e3f5450f2dd052cef48"
	I0819 18:13:07.115036  397087 cri.go:89] found id: "d4208b72f7684106eeabb79597e9a16912d86fddf552d810668e52ee86e4cacf"
	I0819 18:13:07.115039  397087 cri.go:89] found id: "86aec3b9357709107938f07e57e09bef332ea9baea288a18bb10389d5108084b"
	I0819 18:13:07.115042  397087 cri.go:89] found id: "de3b095c19e3f3ff1bf0fb76700cc09513b591cda7c219c31dee7842602944b4"
	I0819 18:13:07.115045  397087 cri.go:89] found id: "66fd9c9b32e5e0294c89ebc2ee3c443fda85c40c3ad5b05d42357b4968e8d305"
	I0819 18:13:07.115047  397087 cri.go:89] found id: "eb8cccc1568bbb207d2c7c285f3897a7a425cba60f4dfcf3e8daa8082fc38ef0"
	I0819 18:13:07.115050  397087 cri.go:89] found id: "0cbf110391a2708b365a6d117cd1facf1a5820add049c9338b5eaa12f02254e4"
	I0819 18:13:07.115056  397087 cri.go:89] found id: "f5e746178ed6a3645979a5bd617a6d9f408bb3e6af232f31409c7e79a0c4f6b2"
	I0819 18:13:07.115060  397087 cri.go:89] found id: "426a12b48132d73e1b93e6a7fb5b3420868e384eb280274c6ee81ae6f6bcea12"
	I0819 18:13:07.115062  397087 cri.go:89] found id: "2f729929f59edc9bd3c0ec7e99f4b984f94d6b6ec06edf83cf6dc3efba7a1fe5"
	I0819 18:13:07.115065  397087 cri.go:89] found id: "d0e66231bf791048a9932068b5f28d8479613545885bea8e42cf9c79913ffccd"
	I0819 18:13:07.115067  397087 cri.go:89] found id: ""
	I0819 18:13:07.115111  397087 ssh_runner.go:195] Run: sudo runc list -f json
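	(Annotation, not part of the captured log.) The kube-system container IDs listed above come from the crictl invocation two lines earlier. A rough Go sketch of collecting them the same way is below; shelling out to crictl under sudo is an assumption of the example, not how minikube's cri.go is necessarily structured.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// kubeSystemContainerIDs runs the same crictl command seen in the log and
// splits its quiet output (one container ID per line) into a slice.
func kubeSystemContainerIDs() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := kubeSystemContainerIDs()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}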
	
	
	==> CRI-O <==
	Aug 19 18:16:15 ha-086149 crio[3620]: time="2024-08-19 18:16:15.605300765Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=0a13ecd6-a3a7-4b1b-a0bf-f00edfe305f7 name=/runtime.v1.RuntimeService/Status
	Aug 19 18:16:15 ha-086149 crio[3620]: time="2024-08-19 18:16:15.605138493Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=484a2ab7-853f-4a2d-8645-c5703046cb9c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:16:15 ha-086149 crio[3620]: time="2024-08-19 18:16:15.606668468Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091375606648711,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=484a2ab7-853f-4a2d-8645-c5703046cb9c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:16:15 ha-086149 crio[3620]: time="2024-08-19 18:16:15.607522102Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=84bf38d7-8dad-499e-aaa6-d93591c465f0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:16:15 ha-086149 crio[3620]: time="2024-08-19 18:16:15.607716028Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=84bf38d7-8dad-499e-aaa6-d93591c465f0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:16:15 ha-086149 crio[3620]: time="2024-08-19 18:16:15.608205541Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7fc48458ae307ff361499fde833d54b89f7ed1cc124b8e2e4c5e623d5b59f5cf,PodSandboxId:4a10374978122541ed15e7f43ce8d30cc1d0cd85f051271ab88948bcb2c57a79,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724091270295934817,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c12159a8-5f84-4d19-aa54-7b56a9669f6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b110ed1c7e4de28f673ba115eee8636180545973d22374de2fefcc11c697539,PodSandboxId:834b78e6f8c8ae9b6949554d2864db66ec486375169d8a29f441745a6c13a6c7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724091234291807173,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab6b0fe91f166a5c05b58933ead885f6,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eb0bee9a15dccc2d82fb1b3ac35c0edda4dfaf7f15f58e06a340bf55e8f26ab,PodSandboxId:54b9254dbd54ee93a0df9ad92074a813df205bbacf4bf2950d47a6955ebf62e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724091232293434047,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9269a2cf31966e0bbf30b6554fa311ee,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01d3428fa47f18423ed50d84a758bf632905445652b3088c079a7522697a5d53,PodSandboxId:4a10374978122541ed15e7f43ce8d30cc1d0cd85f051271ab88948bcb2c57a79,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724091227289314593,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c12159a8-5f84-4d19-aa54-7b56a9669f6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31b792a184ef5bf6c6881a58559ac54b18794b0d2bdb0f213f9015a19c994ff0,PodSandboxId:5fdc0c51659ed4b89dc97e11cbfa487bd8403121cd95beb25e6735b3c83aa363,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724091226319223025,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fd2dw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f5e2f831-487f-4edb-b6c1-b391906a6d5b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9e6f6fd570fa5f5a880753efc7f1228506acc92db408df6bf6ed9b5f34cfe93,PodSandboxId:c78d69ffa13ba1619670b2ce62d5e954ea916933dc86eecc61164297731d3363,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724091207063154781,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71315ae10c82422e3efaca00d9b232cb,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2deb18dfc60e51eddc31befa21ffc0090ce2abc67b4511b62104ca5e8342f60,PodSandboxId:c7bab8d9969d8f2b85807cc8e16e713161cd1c353dfcfd272f167836b340da0c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724091193216378555,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vb66s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9322737a-5f8a-4d5a-a7d1-ba076bc8f2d8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:7421b967684844bf1fe8f4abc52f1cd8635544a588cbdb2b910b55bf74594619,PodSandboxId:d4370b1d6fcb3ede7b9a41e432046068b76ec99429ad0424f03c801cdfedc7c1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724091193106196622,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwkf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 001a3fe7-633c-44f8-9a8c-7401cec7af54,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62c3f84d9
e207c67a86a14e215f591225047843d0d3d8ff01470104c28ec3372,PodSandboxId:47c6aecb02b827d90fa98d560860dcb29069184c64ee4755d4c1f590c6ad5989,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091193019658925,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-p65cb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f30449e-d4ea-4d6f-a63a-08551024bd04,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a1b7fec3f151c3ebd32ce721f81861e00daf06da360b6bad7a4c99a4b3c71d5,PodSandboxId:a8bc21a4e7d10603f5f44f0819a6baf14dda6ea43bd4a34b0756f711804ae455,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724091192823979246,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf0b1666b512c678d4309e6a2bd2773,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea2f2cfbcacac8b9d0f716fc5bf8be816dac486447f26b5969f1d79a9031f7ca,PodSandboxId:834b78e6f8c8ae9b6949554d2864db66ec486375169d8a29f441745a6c13a6c7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724091192904907760,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab6b0fe91f166a5c05b58933ead885f6,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4760d8a0d8843fa04600f76c7a9e2b2ba5c4212e748492168d8c00d31ea0d515,PodSandboxId:15a37a0b36621b359e14b4e497dcf9a8bee8c5d328dee2de0d16ab4c727f8823,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724091192778643283,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 465e756b61a05a6f1c4dfeba2adbdeeb,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dbebbcf5b28297a583e961cdeb22de8d630ca8836e6f0ffcca3c4fe28b9a104,PodSandboxId:54b9254dbd54ee93a0df9ad92074a813df205bbacf4bf2950d47a6955ebf62e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724091192765986526,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9269a2cf31966e0bbf30b6554fa311ee,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1560513c5f2f2c855e651ca853a37a183c2c92361d4db5001d39a783f9bf1dec,PodSandboxId:2a5484a378f88550bea42ad9cd40a477fefe20e700d36531a230194dd27918f7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091186996673333,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8fjpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bedb900-107a-4f7e-aae7-391b18da4a26,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef0b28473496e4ab21e3f86bc64eb662e5c22e59e4a56f80f7bdad009460c73d,PodSandboxId:0f784aeccda9e0bff51a30b97a310813be1e271fdaae54f30006645ed5ae31b1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724090682352298631,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fd2dw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f5e2f831-487f-4edb-b6c1-b391906a6d5b,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4208b72f7684106eeabb79597e9a16912d86fddf552d810668e52ee86e4cacf,PodSandboxId:5b83e59b0dd3110115fa51715b6d8f6d29e006636ab031766095bcb6200ff245,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724090536333833063,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-p65cb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f30449e-d4ea-4d6f-a63a-08551024bd04,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86aec3b9357709107938f07e57e09bef332ea9baea288a18bb10389d5108084b,PodSandboxId:86507aaa25957ebc7ff023a8f042b236a729503785cd3163a2a44e79daf28a80,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724090536330243526,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-8fjpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bedb900-107a-4f7e-aae7-391b18da4a26,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66fd9c9b32e5e0294c89ebc2ee3c443fda85c40c3ad5b05d42357b4968e8d305,PodSandboxId:3c6e833618ab7965e295c1f82164c28a64e619a82a0a8a90542c16f004e32954,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724090524118032785,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vb66s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9322737a-5f8a-4d5a-a7d1-ba076bc8f2d8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb8cccc1568bbb207d2c7c285f3897a7a425cba60f4dfcf3e8daa8082fc38ef0,PodSandboxId:dc27fd8c8c4a6cec062f5420b6ed3489f5b075fb1eb4e02074e5505c76d238e5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724090520283730684,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwkf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 001a3fe7-633c-44f8-9a8c-7401cec7af54,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426a12b48132d73e1b93e6a7fb5b3420868e384eb280274c6ee81ae6f6bcea12,PodSandboxId:4cd25796bc67e8c9b4a666188feb3addfa806bf372a40c47a0ed8a3e3576c9a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724090509151262837,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf0b1666b512c678d4309e6a2bd2773,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0e66231bf791048a9932068b5f28d8479613545885bea8e42cf9c79913ffccd,PodSandboxId:1f46f8e2ba79c3a9b9a7f9729c154fc9c495e280d0a9fac6dc4fdf837a2e0b73,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1724090509024852872,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 465e756b61a05a6f1c4dfeba2adbdeeb,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=84bf38d7-8dad-499e-aaa6-d93591c465f0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:16:15 ha-086149 crio[3620]: time="2024-08-19 18:16:15.658006219Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=72959d3a-df3e-4e00-8106-f8c1ddd11d86 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:16:15 ha-086149 crio[3620]: time="2024-08-19 18:16:15.658133886Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=72959d3a-df3e-4e00-8106-f8c1ddd11d86 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:16:15 ha-086149 crio[3620]: time="2024-08-19 18:16:15.659408851Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cca691a6-06ce-4b9b-b99e-f6bd2f01ba6c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:16:15 ha-086149 crio[3620]: time="2024-08-19 18:16:15.659921833Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091375659894748,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cca691a6-06ce-4b9b-b99e-f6bd2f01ba6c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:16:15 ha-086149 crio[3620]: time="2024-08-19 18:16:15.660455935Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6f9e215f-51f0-4c01-8a73-e7e465aa11bb name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:16:15 ha-086149 crio[3620]: time="2024-08-19 18:16:15.660529908Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6f9e215f-51f0-4c01-8a73-e7e465aa11bb name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:16:15 ha-086149 crio[3620]: time="2024-08-19 18:16:15.660946817Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7fc48458ae307ff361499fde833d54b89f7ed1cc124b8e2e4c5e623d5b59f5cf,PodSandboxId:4a10374978122541ed15e7f43ce8d30cc1d0cd85f051271ab88948bcb2c57a79,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724091270295934817,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c12159a8-5f84-4d19-aa54-7b56a9669f6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b110ed1c7e4de28f673ba115eee8636180545973d22374de2fefcc11c697539,PodSandboxId:834b78e6f8c8ae9b6949554d2864db66ec486375169d8a29f441745a6c13a6c7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724091234291807173,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab6b0fe91f166a5c05b58933ead885f6,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eb0bee9a15dccc2d82fb1b3ac35c0edda4dfaf7f15f58e06a340bf55e8f26ab,PodSandboxId:54b9254dbd54ee93a0df9ad92074a813df205bbacf4bf2950d47a6955ebf62e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724091232293434047,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9269a2cf31966e0bbf30b6554fa311ee,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01d3428fa47f18423ed50d84a758bf632905445652b3088c079a7522697a5d53,PodSandboxId:4a10374978122541ed15e7f43ce8d30cc1d0cd85f051271ab88948bcb2c57a79,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724091227289314593,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c12159a8-5f84-4d19-aa54-7b56a9669f6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31b792a184ef5bf6c6881a58559ac54b18794b0d2bdb0f213f9015a19c994ff0,PodSandboxId:5fdc0c51659ed4b89dc97e11cbfa487bd8403121cd95beb25e6735b3c83aa363,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724091226319223025,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fd2dw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f5e2f831-487f-4edb-b6c1-b391906a6d5b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9e6f6fd570fa5f5a880753efc7f1228506acc92db408df6bf6ed9b5f34cfe93,PodSandboxId:c78d69ffa13ba1619670b2ce62d5e954ea916933dc86eecc61164297731d3363,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724091207063154781,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71315ae10c82422e3efaca00d9b232cb,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2deb18dfc60e51eddc31befa21ffc0090ce2abc67b4511b62104ca5e8342f60,PodSandboxId:c7bab8d9969d8f2b85807cc8e16e713161cd1c353dfcfd272f167836b340da0c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724091193216378555,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vb66s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9322737a-5f8a-4d5a-a7d1-ba076bc8f2d8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:7421b967684844bf1fe8f4abc52f1cd8635544a588cbdb2b910b55bf74594619,PodSandboxId:d4370b1d6fcb3ede7b9a41e432046068b76ec99429ad0424f03c801cdfedc7c1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724091193106196622,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwkf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 001a3fe7-633c-44f8-9a8c-7401cec7af54,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62c3f84d9
e207c67a86a14e215f591225047843d0d3d8ff01470104c28ec3372,PodSandboxId:47c6aecb02b827d90fa98d560860dcb29069184c64ee4755d4c1f590c6ad5989,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091193019658925,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-p65cb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f30449e-d4ea-4d6f-a63a-08551024bd04,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a1b7fec3f151c3ebd32ce721f81861e00daf06da360b6bad7a4c99a4b3c71d5,PodSandboxId:a8bc21a4e7d10603f5f44f0819a6baf14dda6ea43bd4a34b0756f711804ae455,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724091192823979246,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf0b1666b512c678d4309e6a2bd2773,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea2f2cfbcacac8b9d0f716fc5bf8be816dac486447f26b5969f1d79a9031f7ca,PodSandboxId:834b78e6f8c8ae9b6949554d2864db66ec486375169d8a29f441745a6c13a6c7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724091192904907760,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab6b0fe91f166a5c05b58933ead885f6,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4760d8a0d8843fa04600f76c7a9e2b2ba5c4212e748492168d8c00d31ea0d515,PodSandboxId:15a37a0b36621b359e14b4e497dcf9a8bee8c5d328dee2de0d16ab4c727f8823,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724091192778643283,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 465e756b61a05a6f1c4dfeba2adbdeeb,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dbebbcf5b28297a583e961cdeb22de8d630ca8836e6f0ffcca3c4fe28b9a104,PodSandboxId:54b9254dbd54ee93a0df9ad92074a813df205bbacf4bf2950d47a6955ebf62e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724091192765986526,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9269a2cf31966e0bbf30b6554fa311ee,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1560513c5f2f2c855e651ca853a37a183c2c92361d4db5001d39a783f9bf1dec,PodSandboxId:2a5484a378f88550bea42ad9cd40a477fefe20e700d36531a230194dd27918f7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091186996673333,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8fjpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bedb900-107a-4f7e-aae7-391b18da4a26,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef0b28473496e4ab21e3f86bc64eb662e5c22e59e4a56f80f7bdad009460c73d,PodSandboxId:0f784aeccda9e0bff51a30b97a310813be1e271fdaae54f30006645ed5ae31b1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724090682352298631,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fd2dw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f5e2f831-487f-4edb-b6c1-b391906a6d5b,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4208b72f7684106eeabb79597e9a16912d86fddf552d810668e52ee86e4cacf,PodSandboxId:5b83e59b0dd3110115fa51715b6d8f6d29e006636ab031766095bcb6200ff245,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724090536333833063,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-p65cb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f30449e-d4ea-4d6f-a63a-08551024bd04,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86aec3b9357709107938f07e57e09bef332ea9baea288a18bb10389d5108084b,PodSandboxId:86507aaa25957ebc7ff023a8f042b236a729503785cd3163a2a44e79daf28a80,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724090536330243526,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-8fjpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bedb900-107a-4f7e-aae7-391b18da4a26,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66fd9c9b32e5e0294c89ebc2ee3c443fda85c40c3ad5b05d42357b4968e8d305,PodSandboxId:3c6e833618ab7965e295c1f82164c28a64e619a82a0a8a90542c16f004e32954,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724090524118032785,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vb66s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9322737a-5f8a-4d5a-a7d1-ba076bc8f2d8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb8cccc1568bbb207d2c7c285f3897a7a425cba60f4dfcf3e8daa8082fc38ef0,PodSandboxId:dc27fd8c8c4a6cec062f5420b6ed3489f5b075fb1eb4e02074e5505c76d238e5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724090520283730684,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwkf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 001a3fe7-633c-44f8-9a8c-7401cec7af54,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426a12b48132d73e1b93e6a7fb5b3420868e384eb280274c6ee81ae6f6bcea12,PodSandboxId:4cd25796bc67e8c9b4a666188feb3addfa806bf372a40c47a0ed8a3e3576c9a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724090509151262837,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf0b1666b512c678d4309e6a2bd2773,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0e66231bf791048a9932068b5f28d8479613545885bea8e42cf9c79913ffccd,PodSandboxId:1f46f8e2ba79c3a9b9a7f9729c154fc9c495e280d0a9fac6dc4fdf837a2e0b73,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1724090509024852872,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 465e756b61a05a6f1c4dfeba2adbdeeb,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6f9e215f-51f0-4c01-8a73-e7e465aa11bb name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:16:15 ha-086149 crio[3620]: time="2024-08-19 18:16:15.693491032Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=7d3dcc6a-73ca-4ac2-bdf4-0854ad16587d name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 19 18:16:15 ha-086149 crio[3620]: time="2024-08-19 18:16:15.693866059Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:5fdc0c51659ed4b89dc97e11cbfa487bd8403121cd95beb25e6735b3c83aa363,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-fd2dw,Uid:f5e2f831-487f-4edb-b6c1-b391906a6d5b,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724091226174393886,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-fd2dw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f5e2f831-487f-4edb-b6c1-b391906a6d5b,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T18:04:39.305897898Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c78d69ffa13ba1619670b2ce62d5e954ea916933dc86eecc61164297731d3363,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-086149,Uid:71315ae10c82422e3efaca00d9b232cb,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1724091206950252087,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71315ae10c82422e3efaca00d9b232cb,},Annotations:map[string]string{kubernetes.io/config.hash: 71315ae10c82422e3efaca00d9b232cb,kubernetes.io/config.seen: 2024-08-19T18:13:06.120897728Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:47c6aecb02b827d90fa98d560860dcb29069184c64ee4755d4c1f590c6ad5989,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-p65cb,Uid:7f30449e-d4ea-4d6f-a63a-08551024bd04,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724091192507559671,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-p65cb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f30449e-d4ea-4d6f-a63a-08551024bd04,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08
-19T18:02:15.716637556Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4a10374978122541ed15e7f43ce8d30cc1d0cd85f051271ab88948bcb2c57a79,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:c12159a8-5f84-4d19-aa54-7b56a9669f6c,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724091192454482968,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c12159a8-5f84-4d19-aa54-7b56a9669f6c,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":
\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-19T18:02:15.714649096Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:834b78e6f8c8ae9b6949554d2864db66ec486375169d8a29f441745a6c13a6c7,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-086149,Uid:ab6b0fe91f166a5c05b58933ead885f6,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724091192451168797,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab6b0fe91f166a5c05b58933ead885f6,tier: control-plane,},Annotations:map[string]string{ku
bernetes.io/config.hash: ab6b0fe91f166a5c05b58933ead885f6,kubernetes.io/config.seen: 2024-08-19T18:01:55.259302866Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c7bab8d9969d8f2b85807cc8e16e713161cd1c353dfcfd272f167836b340da0c,Metadata:&PodSandboxMetadata{Name:kindnet-vb66s,Uid:9322737a-5f8a-4d5a-a7d1-ba076bc8f2d8,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724091192448244848,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-vb66s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9322737a-5f8a-4d5a-a7d1-ba076bc8f2d8,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T18:01:59.638862448Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d4370b1d6fcb3ede7b9a41e432046068b76ec99429ad0424f03c801cdfedc7c1,Metadata:&PodSandboxMetadata{Name:kube-proxy-fwkf2,Uid:001a3fe7-633c-44f8-9
a8c-7401cec7af54,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724091192442412326,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-fwkf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 001a3fe7-633c-44f8-9a8c-7401cec7af54,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T18:01:59.621803005Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:15a37a0b36621b359e14b4e497dcf9a8bee8c5d328dee2de0d16ab4c727f8823,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-086149,Uid:465e756b61a05a6f1c4dfeba2adbdeeb,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724091192432677280,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 465e756b61a05a6f1c4dfeba2adbdeeb,tier
: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 465e756b61a05a6f1c4dfeba2adbdeeb,kubernetes.io/config.seen: 2024-08-19T18:01:55.259304197Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:54b9254dbd54ee93a0df9ad92074a813df205bbacf4bf2950d47a6955ebf62e1,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-086149,Uid:9269a2cf31966e0bbf30b6554fa311ee,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724091192414005450,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9269a2cf31966e0bbf30b6554fa311ee,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.249:8443,kubernetes.io/config.hash: 9269a2cf31966e0bbf30b6554fa311ee,kubernetes.io/config.seen: 2024-08-19T18:01:55.259301356Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&Pod
Sandbox{Id:a8bc21a4e7d10603f5f44f0819a6baf14dda6ea43bd4a34b0756f711804ae455,Metadata:&PodSandboxMetadata{Name:etcd-ha-086149,Uid:fcf0b1666b512c678d4309e6a2bd2773,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724091192412270165,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf0b1666b512c678d4309e6a2bd2773,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.249:2379,kubernetes.io/config.hash: fcf0b1666b512c678d4309e6a2bd2773,kubernetes.io/config.seen: 2024-08-19T18:01:55.259297293Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2a5484a378f88550bea42ad9cd40a477fefe20e700d36531a230194dd27918f7,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-8fjpd,Uid:4bedb900-107a-4f7e-aae7-391b18da4a26,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724091186848649465,L
abels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-8fjpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bedb900-107a-4f7e-aae7-391b18da4a26,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T18:02:15.706597552Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0f784aeccda9e0bff51a30b97a310813be1e271fdaae54f30006645ed5ae31b1,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-fd2dw,Uid:f5e2f831-487f-4edb-b6c1-b391906a6d5b,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724090679628413901,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-fd2dw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f5e2f831-487f-4edb-b6c1-b391906a6d5b,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T18:04:39.305897898Z,kubernetes.io/config.source
: api,},RuntimeHandler:,},&PodSandbox{Id:5b83e59b0dd3110115fa51715b6d8f6d29e006636ab031766095bcb6200ff245,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-p65cb,Uid:7f30449e-d4ea-4d6f-a63a-08551024bd04,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724090536030019332,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-p65cb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f30449e-d4ea-4d6f-a63a-08551024bd04,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T18:02:15.716637556Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:86507aaa25957ebc7ff023a8f042b236a729503785cd3163a2a44e79daf28a80,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-8fjpd,Uid:4bedb900-107a-4f7e-aae7-391b18da4a26,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724090536013411994,Labels:map[string]string{io.kubernetes.container.name: POD,io.ku
bernetes.pod.name: coredns-6f6b679f8f-8fjpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bedb900-107a-4f7e-aae7-391b18da4a26,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T18:02:15.706597552Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3c6e833618ab7965e295c1f82164c28a64e619a82a0a8a90542c16f004e32954,Metadata:&PodSandboxMetadata{Name:kindnet-vb66s,Uid:9322737a-5f8a-4d5a-a7d1-ba076bc8f2d8,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724090519956136506,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-vb66s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9322737a-5f8a-4d5a-a7d1-ba076bc8f2d8,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T18:01:59.638862448Z,kubernetes.io/config.source: api,},Runt
imeHandler:,},&PodSandbox{Id:dc27fd8c8c4a6cec062f5420b6ed3489f5b075fb1eb4e02074e5505c76d238e5,Metadata:&PodSandboxMetadata{Name:kube-proxy-fwkf2,Uid:001a3fe7-633c-44f8-9a8c-7401cec7af54,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724090519931585463,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-fwkf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 001a3fe7-633c-44f8-9a8c-7401cec7af54,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T18:01:59.621803005Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1f46f8e2ba79c3a9b9a7f9729c154fc9c495e280d0a9fac6dc4fdf837a2e0b73,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-086149,Uid:465e756b61a05a6f1c4dfeba2adbdeeb,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724090508883461385,Labels:map[string]string{component: kube-scheduler,io.kubern
etes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 465e756b61a05a6f1c4dfeba2adbdeeb,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 465e756b61a05a6f1c4dfeba2adbdeeb,kubernetes.io/config.seen: 2024-08-19T18:01:48.386787659Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4cd25796bc67e8c9b4a666188feb3addfa806bf372a40c47a0ed8a3e3576c9a2,Metadata:&PodSandboxMetadata{Name:etcd-ha-086149,Uid:fcf0b1666b512c678d4309e6a2bd2773,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724090508860276959,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf0b1666b512c678d4309e6a2bd2773,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.249:2379,kubernetes.io/config.hash: fcf0b166
6b512c678d4309e6a2bd2773,kubernetes.io/config.seen: 2024-08-19T18:01:48.386784095Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=7d3dcc6a-73ca-4ac2-bdf4-0854ad16587d name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 19 18:16:15 ha-086149 crio[3620]: time="2024-08-19 18:16:15.695556384Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5a05cc19-5e94-4834-b97f-3c9320a7da6c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:16:15 ha-086149 crio[3620]: time="2024-08-19 18:16:15.695661420Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5a05cc19-5e94-4834-b97f-3c9320a7da6c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:16:15 ha-086149 crio[3620]: time="2024-08-19 18:16:15.696072850Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7fc48458ae307ff361499fde833d54b89f7ed1cc124b8e2e4c5e623d5b59f5cf,PodSandboxId:4a10374978122541ed15e7f43ce8d30cc1d0cd85f051271ab88948bcb2c57a79,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724091270295934817,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c12159a8-5f84-4d19-aa54-7b56a9669f6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b110ed1c7e4de28f673ba115eee8636180545973d22374de2fefcc11c697539,PodSandboxId:834b78e6f8c8ae9b6949554d2864db66ec486375169d8a29f441745a6c13a6c7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724091234291807173,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab6b0fe91f166a5c05b58933ead885f6,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eb0bee9a15dccc2d82fb1b3ac35c0edda4dfaf7f15f58e06a340bf55e8f26ab,PodSandboxId:54b9254dbd54ee93a0df9ad92074a813df205bbacf4bf2950d47a6955ebf62e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724091232293434047,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9269a2cf31966e0bbf30b6554fa311ee,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01d3428fa47f18423ed50d84a758bf632905445652b3088c079a7522697a5d53,PodSandboxId:4a10374978122541ed15e7f43ce8d30cc1d0cd85f051271ab88948bcb2c57a79,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724091227289314593,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c12159a8-5f84-4d19-aa54-7b56a9669f6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31b792a184ef5bf6c6881a58559ac54b18794b0d2bdb0f213f9015a19c994ff0,PodSandboxId:5fdc0c51659ed4b89dc97e11cbfa487bd8403121cd95beb25e6735b3c83aa363,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724091226319223025,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fd2dw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f5e2f831-487f-4edb-b6c1-b391906a6d5b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9e6f6fd570fa5f5a880753efc7f1228506acc92db408df6bf6ed9b5f34cfe93,PodSandboxId:c78d69ffa13ba1619670b2ce62d5e954ea916933dc86eecc61164297731d3363,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724091207063154781,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71315ae10c82422e3efaca00d9b232cb,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2deb18dfc60e51eddc31befa21ffc0090ce2abc67b4511b62104ca5e8342f60,PodSandboxId:c7bab8d9969d8f2b85807cc8e16e713161cd1c353dfcfd272f167836b340da0c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724091193216378555,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vb66s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9322737a-5f8a-4d5a-a7d1-ba076bc8f2d8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:7421b967684844bf1fe8f4abc52f1cd8635544a588cbdb2b910b55bf74594619,PodSandboxId:d4370b1d6fcb3ede7b9a41e432046068b76ec99429ad0424f03c801cdfedc7c1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724091193106196622,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwkf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 001a3fe7-633c-44f8-9a8c-7401cec7af54,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62c3f84d9
e207c67a86a14e215f591225047843d0d3d8ff01470104c28ec3372,PodSandboxId:47c6aecb02b827d90fa98d560860dcb29069184c64ee4755d4c1f590c6ad5989,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091193019658925,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-p65cb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f30449e-d4ea-4d6f-a63a-08551024bd04,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a1b7fec3f151c3ebd32ce721f81861e00daf06da360b6bad7a4c99a4b3c71d5,PodSandboxId:a8bc21a4e7d10603f5f44f0819a6baf14dda6ea43bd4a34b0756f711804ae455,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724091192823979246,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf0b1666b512c678d4309e6a2bd2773,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea2f2cfbcacac8b9d0f716fc5bf8be816dac486447f26b5969f1d79a9031f7ca,PodSandboxId:834b78e6f8c8ae9b6949554d2864db66ec486375169d8a29f441745a6c13a6c7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724091192904907760,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab6b0fe91f166a5c05b58933ead885f6,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4760d8a0d8843fa04600f76c7a9e2b2ba5c4212e748492168d8c00d31ea0d515,PodSandboxId:15a37a0b36621b359e14b4e497dcf9a8bee8c5d328dee2de0d16ab4c727f8823,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724091192778643283,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 465e756b61a05a6f1c4dfeba2adbdeeb,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dbebbcf5b28297a583e961cdeb22de8d630ca8836e6f0ffcca3c4fe28b9a104,PodSandboxId:54b9254dbd54ee93a0df9ad92074a813df205bbacf4bf2950d47a6955ebf62e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724091192765986526,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9269a2cf31966e0bbf30b6554fa311ee,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1560513c5f2f2c855e651ca853a37a183c2c92361d4db5001d39a783f9bf1dec,PodSandboxId:2a5484a378f88550bea42ad9cd40a477fefe20e700d36531a230194dd27918f7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091186996673333,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8fjpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bedb900-107a-4f7e-aae7-391b18da4a26,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef0b28473496e4ab21e3f86bc64eb662e5c22e59e4a56f80f7bdad009460c73d,PodSandboxId:0f784aeccda9e0bff51a30b97a310813be1e271fdaae54f30006645ed5ae31b1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724090682352298631,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fd2dw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f5e2f831-487f-4edb-b6c1-b391906a6d5b,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4208b72f7684106eeabb79597e9a16912d86fddf552d810668e52ee86e4cacf,PodSandboxId:5b83e59b0dd3110115fa51715b6d8f6d29e006636ab031766095bcb6200ff245,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724090536333833063,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-p65cb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f30449e-d4ea-4d6f-a63a-08551024bd04,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86aec3b9357709107938f07e57e09bef332ea9baea288a18bb10389d5108084b,PodSandboxId:86507aaa25957ebc7ff023a8f042b236a729503785cd3163a2a44e79daf28a80,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724090536330243526,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-8fjpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bedb900-107a-4f7e-aae7-391b18da4a26,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66fd9c9b32e5e0294c89ebc2ee3c443fda85c40c3ad5b05d42357b4968e8d305,PodSandboxId:3c6e833618ab7965e295c1f82164c28a64e619a82a0a8a90542c16f004e32954,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724090524118032785,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vb66s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9322737a-5f8a-4d5a-a7d1-ba076bc8f2d8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb8cccc1568bbb207d2c7c285f3897a7a425cba60f4dfcf3e8daa8082fc38ef0,PodSandboxId:dc27fd8c8c4a6cec062f5420b6ed3489f5b075fb1eb4e02074e5505c76d238e5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724090520283730684,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwkf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 001a3fe7-633c-44f8-9a8c-7401cec7af54,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426a12b48132d73e1b93e6a7fb5b3420868e384eb280274c6ee81ae6f6bcea12,PodSandboxId:4cd25796bc67e8c9b4a666188feb3addfa806bf372a40c47a0ed8a3e3576c9a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724090509151262837,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf0b1666b512c678d4309e6a2bd2773,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0e66231bf791048a9932068b5f28d8479613545885bea8e42cf9c79913ffccd,PodSandboxId:1f46f8e2ba79c3a9b9a7f9729c154fc9c495e280d0a9fac6dc4fdf837a2e0b73,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1724090509024852872,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 465e756b61a05a6f1c4dfeba2adbdeeb,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5a05cc19-5e94-4834-b97f-3c9320a7da6c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:16:15 ha-086149 crio[3620]: time="2024-08-19 18:16:15.712962534Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6bf7335b-37df-4e22-b17e-fb8087df11d9 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:16:15 ha-086149 crio[3620]: time="2024-08-19 18:16:15.713035383Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6bf7335b-37df-4e22-b17e-fb8087df11d9 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:16:15 ha-086149 crio[3620]: time="2024-08-19 18:16:15.714452158Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d5ab543c-4581-4e97-8199-f16af9183f00 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:16:15 ha-086149 crio[3620]: time="2024-08-19 18:16:15.714888600Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091375714866223,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d5ab543c-4581-4e97-8199-f16af9183f00 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:16:15 ha-086149 crio[3620]: time="2024-08-19 18:16:15.715662972Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8e5ef29f-0e2b-4fc8-bc70-24f830bf5ace name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:16:15 ha-086149 crio[3620]: time="2024-08-19 18:16:15.715724789Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8e5ef29f-0e2b-4fc8-bc70-24f830bf5ace name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:16:15 ha-086149 crio[3620]: time="2024-08-19 18:16:15.716438174Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7fc48458ae307ff361499fde833d54b89f7ed1cc124b8e2e4c5e623d5b59f5cf,PodSandboxId:4a10374978122541ed15e7f43ce8d30cc1d0cd85f051271ab88948bcb2c57a79,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724091270295934817,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c12159a8-5f84-4d19-aa54-7b56a9669f6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b110ed1c7e4de28f673ba115eee8636180545973d22374de2fefcc11c697539,PodSandboxId:834b78e6f8c8ae9b6949554d2864db66ec486375169d8a29f441745a6c13a6c7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724091234291807173,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab6b0fe91f166a5c05b58933ead885f6,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eb0bee9a15dccc2d82fb1b3ac35c0edda4dfaf7f15f58e06a340bf55e8f26ab,PodSandboxId:54b9254dbd54ee93a0df9ad92074a813df205bbacf4bf2950d47a6955ebf62e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724091232293434047,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9269a2cf31966e0bbf30b6554fa311ee,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01d3428fa47f18423ed50d84a758bf632905445652b3088c079a7522697a5d53,PodSandboxId:4a10374978122541ed15e7f43ce8d30cc1d0cd85f051271ab88948bcb2c57a79,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724091227289314593,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c12159a8-5f84-4d19-aa54-7b56a9669f6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31b792a184ef5bf6c6881a58559ac54b18794b0d2bdb0f213f9015a19c994ff0,PodSandboxId:5fdc0c51659ed4b89dc97e11cbfa487bd8403121cd95beb25e6735b3c83aa363,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724091226319223025,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fd2dw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f5e2f831-487f-4edb-b6c1-b391906a6d5b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9e6f6fd570fa5f5a880753efc7f1228506acc92db408df6bf6ed9b5f34cfe93,PodSandboxId:c78d69ffa13ba1619670b2ce62d5e954ea916933dc86eecc61164297731d3363,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724091207063154781,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71315ae10c82422e3efaca00d9b232cb,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2deb18dfc60e51eddc31befa21ffc0090ce2abc67b4511b62104ca5e8342f60,PodSandboxId:c7bab8d9969d8f2b85807cc8e16e713161cd1c353dfcfd272f167836b340da0c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724091193216378555,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vb66s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9322737a-5f8a-4d5a-a7d1-ba076bc8f2d8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:7421b967684844bf1fe8f4abc52f1cd8635544a588cbdb2b910b55bf74594619,PodSandboxId:d4370b1d6fcb3ede7b9a41e432046068b76ec99429ad0424f03c801cdfedc7c1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724091193106196622,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwkf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 001a3fe7-633c-44f8-9a8c-7401cec7af54,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62c3f84d9
e207c67a86a14e215f591225047843d0d3d8ff01470104c28ec3372,PodSandboxId:47c6aecb02b827d90fa98d560860dcb29069184c64ee4755d4c1f590c6ad5989,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091193019658925,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-p65cb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f30449e-d4ea-4d6f-a63a-08551024bd04,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a1b7fec3f151c3ebd32ce721f81861e00daf06da360b6bad7a4c99a4b3c71d5,PodSandboxId:a8bc21a4e7d10603f5f44f0819a6baf14dda6ea43bd4a34b0756f711804ae455,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724091192823979246,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf0b1666b512c678d4309e6a2bd2773,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea2f2cfbcacac8b9d0f716fc5bf8be816dac486447f26b5969f1d79a9031f7ca,PodSandboxId:834b78e6f8c8ae9b6949554d2864db66ec486375169d8a29f441745a6c13a6c7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724091192904907760,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab6b0fe91f166a5c05b58933ead885f6,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4760d8a0d8843fa04600f76c7a9e2b2ba5c4212e748492168d8c00d31ea0d515,PodSandboxId:15a37a0b36621b359e14b4e497dcf9a8bee8c5d328dee2de0d16ab4c727f8823,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724091192778643283,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 465e756b61a05a6f1c4dfeba2adbdeeb,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dbebbcf5b28297a583e961cdeb22de8d630ca8836e6f0ffcca3c4fe28b9a104,PodSandboxId:54b9254dbd54ee93a0df9ad92074a813df205bbacf4bf2950d47a6955ebf62e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724091192765986526,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9269a2cf31966e0bbf30b6554fa311ee,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1560513c5f2f2c855e651ca853a37a183c2c92361d4db5001d39a783f9bf1dec,PodSandboxId:2a5484a378f88550bea42ad9cd40a477fefe20e700d36531a230194dd27918f7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091186996673333,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8fjpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bedb900-107a-4f7e-aae7-391b18da4a26,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef0b28473496e4ab21e3f86bc64eb662e5c22e59e4a56f80f7bdad009460c73d,PodSandboxId:0f784aeccda9e0bff51a30b97a310813be1e271fdaae54f30006645ed5ae31b1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724090682352298631,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fd2dw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f5e2f831-487f-4edb-b6c1-b391906a6d5b,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4208b72f7684106eeabb79597e9a16912d86fddf552d810668e52ee86e4cacf,PodSandboxId:5b83e59b0dd3110115fa51715b6d8f6d29e006636ab031766095bcb6200ff245,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724090536333833063,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-p65cb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f30449e-d4ea-4d6f-a63a-08551024bd04,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86aec3b9357709107938f07e57e09bef332ea9baea288a18bb10389d5108084b,PodSandboxId:86507aaa25957ebc7ff023a8f042b236a729503785cd3163a2a44e79daf28a80,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724090536330243526,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-8fjpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bedb900-107a-4f7e-aae7-391b18da4a26,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66fd9c9b32e5e0294c89ebc2ee3c443fda85c40c3ad5b05d42357b4968e8d305,PodSandboxId:3c6e833618ab7965e295c1f82164c28a64e619a82a0a8a90542c16f004e32954,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724090524118032785,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vb66s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9322737a-5f8a-4d5a-a7d1-ba076bc8f2d8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb8cccc1568bbb207d2c7c285f3897a7a425cba60f4dfcf3e8daa8082fc38ef0,PodSandboxId:dc27fd8c8c4a6cec062f5420b6ed3489f5b075fb1eb4e02074e5505c76d238e5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724090520283730684,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwkf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 001a3fe7-633c-44f8-9a8c-7401cec7af54,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426a12b48132d73e1b93e6a7fb5b3420868e384eb280274c6ee81ae6f6bcea12,PodSandboxId:4cd25796bc67e8c9b4a666188feb3addfa806bf372a40c47a0ed8a3e3576c9a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724090509151262837,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf0b1666b512c678d4309e6a2bd2773,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0e66231bf791048a9932068b5f28d8479613545885bea8e42cf9c79913ffccd,PodSandboxId:1f46f8e2ba79c3a9b9a7f9729c154fc9c495e280d0a9fac6dc4fdf837a2e0b73,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1724090509024852872,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 465e756b61a05a6f1c4dfeba2adbdeeb,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8e5ef29f-0e2b-4fc8-bc70-24f830bf5ace name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	7fc48458ae307       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   4a10374978122       storage-provisioner
	0b110ed1c7e4d       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      2 minutes ago        Running             kube-controller-manager   2                   834b78e6f8c8a       kube-controller-manager-ha-086149
	8eb0bee9a15dc       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      2 minutes ago        Running             kube-apiserver            3                   54b9254dbd54e       kube-apiserver-ha-086149
	01d3428fa47f1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       3                   4a10374978122       storage-provisioner
	31b792a184ef5       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago        Running             busybox                   1                   5fdc0c51659ed       busybox-7dff88458-fd2dw
	a9e6f6fd570fa       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   c78d69ffa13ba       kube-vip-ha-086149
	c2deb18dfc60e       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      3 minutes ago        Running             kindnet-cni               1                   c7bab8d9969d8       kindnet-vb66s
	7421b96768484       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      3 minutes ago        Running             kube-proxy                1                   d4370b1d6fcb3       kube-proxy-fwkf2
	62c3f84d9e207       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago        Running             coredns                   1                   47c6aecb02b82       coredns-6f6b679f8f-p65cb
	ea2f2cfbcacac       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      3 minutes ago        Exited              kube-controller-manager   1                   834b78e6f8c8a       kube-controller-manager-ha-086149
	8a1b7fec3f151       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      3 minutes ago        Running             etcd                      1                   a8bc21a4e7d10       etcd-ha-086149
	4760d8a0d8843       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      3 minutes ago        Running             kube-scheduler            1                   15a37a0b36621       kube-scheduler-ha-086149
	3dbebbcf5b282       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      3 minutes ago        Exited              kube-apiserver            2                   54b9254dbd54e       kube-apiserver-ha-086149
	1560513c5f2f2       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago        Running             coredns                   1                   2a5484a378f88       coredns-6f6b679f8f-8fjpd
	ef0b28473496e       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   11 minutes ago       Exited              busybox                   0                   0f784aeccda9e       busybox-7dff88458-fd2dw
	d4208b72f7684       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   5b83e59b0dd31       coredns-6f6b679f8f-p65cb
	86aec3b935770       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   86507aaa25957       coredns-6f6b679f8f-8fjpd
	66fd9c9b32e5e       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    14 minutes ago       Exited              kindnet-cni               0                   3c6e833618ab7       kindnet-vb66s
	eb8cccc1568bb       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      14 minutes ago       Exited              kube-proxy                0                   dc27fd8c8c4a6       kube-proxy-fwkf2
	426a12b48132d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      14 minutes ago       Exited              etcd                      0                   4cd25796bc67e       etcd-ha-086149
	d0e66231bf791       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      14 minutes ago       Exited              kube-scheduler            0                   1f46f8e2ba79c       kube-scheduler-ha-086149
	
	
	==> coredns [1560513c5f2f2c855e651ca853a37a183c2c92361d4db5001d39a783f9bf1dec] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[390561824]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 18:13:21.023) (total time: 10001ms):
	Trace[390561824]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (18:13:31.025)
	Trace[390561824]: [10.001691472s] [10.001691472s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:45622->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:45622->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:45632->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:45632->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [62c3f84d9e207c67a86a14e215f591225047843d0d3d8ff01470104c28ec3372] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:45636->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:45636->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:42590->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1605571601]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 18:13:24.787) (total time: 12203ms):
	Trace[1605571601]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:42590->10.96.0.1:443: read: connection reset by peer 12203ms (18:13:36.990)
	Trace[1605571601]: [12.20334978s] [12.20334978s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:42590->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [86aec3b9357709107938f07e57e09bef332ea9baea288a18bb10389d5108084b] <==
	[INFO] 10.244.2.2:53329 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000136079s
	[INFO] 10.244.0.4:48191 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014988s
	[INFO] 10.244.0.4:47708 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000096718s
	[INFO] 10.244.0.4:42128 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000149115s
	[INFO] 10.244.0.4:49211 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000058729s
	[INFO] 10.244.0.4:41169 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000147844s
	[INFO] 10.244.1.2:55021 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105902s
	[INFO] 10.244.1.2:39523 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000197158s
	[INFO] 10.244.1.2:39402 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000068589s
	[INFO] 10.244.1.2:46940 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086232s
	[INFO] 10.244.2.2:59049 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177439s
	[INFO] 10.244.2.2:48370 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000103075s
	[INFO] 10.244.2.2:36161 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110997s
	[INFO] 10.244.2.2:44839 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079394s
	[INFO] 10.244.1.2:53636 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153191s
	[INFO] 10.244.1.2:46986 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00014037s
	[INFO] 10.244.1.2:39517 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000205565s
	[INFO] 10.244.2.2:34630 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000217644s
	[INFO] 10.244.2.2:48208 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000175515s
	[INFO] 10.244.2.2:42420 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000305788s
	[INFO] 10.244.0.4:49746 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000082325s
	[INFO] 10.244.0.4:48461 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000222115s
	[INFO] 10.244.1.2:58589 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000263104s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d4208b72f7684106eeabb79597e9a16912d86fddf552d810668e52ee86e4cacf] <==
	[INFO] 10.244.2.2:60503 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002804151s
	[INFO] 10.244.2.2:49027 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000124508s
	[INFO] 10.244.0.4:59229 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001769172s
	[INFO] 10.244.0.4:34487 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001315875s
	[INFO] 10.244.0.4:34657 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124575s
	[INFO] 10.244.1.2:49809 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001830693s
	[INFO] 10.244.1.2:60513 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001456039s
	[INFO] 10.244.1.2:58099 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000201903s
	[INFO] 10.244.1.2:36863 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108279s
	[INFO] 10.244.0.4:48767 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119232s
	[INFO] 10.244.0.4:35383 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00018722s
	[INFO] 10.244.0.4:58993 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000063721s
	[INFO] 10.244.0.4:55887 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000059646s
	[INFO] 10.244.1.2:45536 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124964s
	[INFO] 10.244.2.2:45976 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000160498s
	[INFO] 10.244.0.4:38315 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146686s
	[INFO] 10.244.0.4:36553 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000130807s
	[INFO] 10.244.1.2:46657 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00022076s
	[INFO] 10.244.1.2:44650 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000123411s
	[INFO] 10.244.1.2:46585 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000089999s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-086149
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-086149
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411
	                    minikube.k8s.io/name=ha-086149
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T18_01_56_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 18:01:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-086149
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 18:16:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 18:13:54 +0000   Mon, 19 Aug 2024 18:01:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 18:13:54 +0000   Mon, 19 Aug 2024 18:01:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 18:13:54 +0000   Mon, 19 Aug 2024 18:01:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 18:13:54 +0000   Mon, 19 Aug 2024 18:02:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.249
	  Hostname:    ha-086149
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f2adf13588c04842be48ba7ffa571365
	  System UUID:                f2adf135-88c0-4842-be48-ba7ffa571365
	  Boot ID:                    affd916c-f074-4dc0-bd43-4c71cd2f0b12
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-fd2dw              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-6f6b679f8f-8fjpd             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-6f6b679f8f-p65cb             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-ha-086149                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-vb66s                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-086149             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-086149    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-fwkf2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-086149             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-086149                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m21s                  kube-proxy       
	  Normal   Starting                 14m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  14m                    kubelet          Node ha-086149 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     14m                    kubelet          Node ha-086149 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    14m                    kubelet          Node ha-086149 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           14m                    node-controller  Node ha-086149 event: Registered Node ha-086149 in Controller
	  Normal   NodeReady                14m                    kubelet          Node ha-086149 status is now: NodeReady
	  Normal   RegisteredNode           13m                    node-controller  Node ha-086149 event: Registered Node ha-086149 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-086149 event: Registered Node ha-086149 in Controller
	  Warning  ContainerGCFailed        3m21s (x2 over 4m21s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             3m16s (x3 over 4m6s)   kubelet          Node ha-086149 status is now: NodeNotReady
	  Normal   RegisteredNode           2m24s                  node-controller  Node ha-086149 event: Registered Node ha-086149 in Controller
	  Normal   RegisteredNode           2m19s                  node-controller  Node ha-086149 event: Registered Node ha-086149 in Controller
	  Normal   RegisteredNode           37s                    node-controller  Node ha-086149 event: Registered Node ha-086149 in Controller
	
	
	Name:               ha-086149-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-086149-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411
	                    minikube.k8s.io/name=ha-086149
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T18_02_56_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 18:02:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-086149-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 18:16:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 18:14:38 +0000   Mon, 19 Aug 2024 18:13:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 18:14:38 +0000   Mon, 19 Aug 2024 18:13:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 18:14:38 +0000   Mon, 19 Aug 2024 18:13:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 18:14:38 +0000   Mon, 19 Aug 2024 18:13:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.167
	  Hostname:    ha-086149-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 db74a62099694214b3e6abfad40c4b33
	  System UUID:                db74a620-9969-4214-b3e6-abfad40c4b33
	  Boot ID:                    caf8e9a8-08b4-4ee9-b12f-02973afc1d5b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-vgcdh                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-086149-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-dgj9c                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-086149-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-086149-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-vx94r                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-086149-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-086149-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m16s                  kube-proxy       
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)      kubelet          Node ha-086149-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-086149-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)      kubelet          Node ha-086149-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                    node-controller  Node ha-086149-m02 event: Registered Node ha-086149-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-086149-m02 event: Registered Node ha-086149-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-086149-m02 event: Registered Node ha-086149-m02 in Controller
	  Normal  NodeNotReady             9m48s                  node-controller  Node ha-086149-m02 status is now: NodeNotReady
	  Normal  Starting                 2m47s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m47s (x8 over 2m47s)  kubelet          Node ha-086149-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m47s (x8 over 2m47s)  kubelet          Node ha-086149-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m47s (x7 over 2m47s)  kubelet          Node ha-086149-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m24s                  node-controller  Node ha-086149-m02 event: Registered Node ha-086149-m02 in Controller
	  Normal  RegisteredNode           2m19s                  node-controller  Node ha-086149-m02 event: Registered Node ha-086149-m02 in Controller
	  Normal  RegisteredNode           38s                    node-controller  Node ha-086149-m02 event: Registered Node ha-086149-m02 in Controller
	
	
	Name:               ha-086149-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-086149-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411
	                    minikube.k8s.io/name=ha-086149
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T18_04_13_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 18:04:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-086149-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 18:16:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 18:15:50 +0000   Mon, 19 Aug 2024 18:15:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 18:15:50 +0000   Mon, 19 Aug 2024 18:15:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 18:15:50 +0000   Mon, 19 Aug 2024 18:15:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 18:15:50 +0000   Mon, 19 Aug 2024 18:15:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.121
	  Hostname:    ha-086149-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8eb7138e4a844547bcac8ac690757488
	  System UUID:                8eb7138e-4a84-4547-bcac-8ac690757488
	  Boot ID:                    1400ea4b-86d5-4d48-bc78-4af5b4ffe01a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7t5wq                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-086149-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-x87ch                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-086149-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-086149-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-8snb5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-086149-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-086149-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 39s                kube-proxy       
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node ha-086149-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node ha-086149-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node ha-086149-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node ha-086149-m03 event: Registered Node ha-086149-m03 in Controller
	  Normal   RegisteredNode           12m                node-controller  Node ha-086149-m03 event: Registered Node ha-086149-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-086149-m03 event: Registered Node ha-086149-m03 in Controller
	  Normal   RegisteredNode           2m24s              node-controller  Node ha-086149-m03 event: Registered Node ha-086149-m03 in Controller
	  Normal   RegisteredNode           2m19s              node-controller  Node ha-086149-m03 event: Registered Node ha-086149-m03 in Controller
	  Normal   NodeNotReady             104s               node-controller  Node ha-086149-m03 status is now: NodeNotReady
	  Normal   Starting                 57s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  57s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeNotReady             57s                kubelet          Node ha-086149-m03 status is now: NodeNotReady
	  Warning  Rebooted                 56s (x2 over 57s)  kubelet          Node ha-086149-m03 has been rebooted, boot id: 1400ea4b-86d5-4d48-bc78-4af5b4ffe01a
	  Normal   NodeHasSufficientMemory  56s (x3 over 57s)  kubelet          Node ha-086149-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    56s (x3 over 57s)  kubelet          Node ha-086149-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     56s (x3 over 57s)  kubelet          Node ha-086149-m03 status is now: NodeHasSufficientPID
	  Normal   NodeReady                56s                kubelet          Node ha-086149-m03 status is now: NodeReady
	  Normal   RegisteredNode           38s                node-controller  Node ha-086149-m03 event: Registered Node ha-086149-m03 in Controller
	
	
	Name:               ha-086149-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-086149-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411
	                    minikube.k8s.io/name=ha-086149
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T18_05_16_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 18:05:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-086149-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 18:16:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 18:16:07 +0000   Mon, 19 Aug 2024 18:16:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 18:16:07 +0000   Mon, 19 Aug 2024 18:16:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 18:16:07 +0000   Mon, 19 Aug 2024 18:16:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 18:16:07 +0000   Mon, 19 Aug 2024 18:16:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.173
	  Hostname:    ha-086149-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e1e9d0d713474980a7c895cb88752846
	  System UUID:                e1e9d0d7-1347-4980-a7c8-95cb88752846
	  Boot ID:                    09f37e7f-8da3-4260-ba70-0b5b1342b6fc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-gvr65       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-proxy-9t8vw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x2 over 11m)  kubelet          Node ha-086149-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x2 over 11m)  kubelet          Node ha-086149-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x2 over 11m)  kubelet          Node ha-086149-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-086149-m04 event: Registered Node ha-086149-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-086149-m04 event: Registered Node ha-086149-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-086149-m04 event: Registered Node ha-086149-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-086149-m04 status is now: NodeReady
	  Normal   RegisteredNode           2m24s              node-controller  Node ha-086149-m04 event: Registered Node ha-086149-m04 in Controller
	  Normal   RegisteredNode           2m19s              node-controller  Node ha-086149-m04 event: Registered Node ha-086149-m04 in Controller
	  Normal   NodeNotReady             104s               node-controller  Node ha-086149-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           38s                node-controller  Node ha-086149-m04 event: Registered Node ha-086149-m04 in Controller
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 9s                 kubelet          Node ha-086149-m04 has been rebooted, boot id: 09f37e7f-8da3-4260-ba70-0b5b1342b6fc
	  Normal   NodeHasSufficientMemory  9s (x2 over 9s)    kubelet          Node ha-086149-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x2 over 9s)    kubelet          Node ha-086149-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x2 over 9s)    kubelet          Node ha-086149-m04 status is now: NodeHasSufficientPID
	  Normal   NodeReady                9s                 kubelet          Node ha-086149-m04 status is now: NodeReady
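
The Conditions and Events tables above are the `kubectl describe node` view of ha-086149-m04 recovering after its reboot. As a hedged aside (not part of the test output), the same conditions can be read programmatically with client-go; the kubeconfig path below is a placeholder and the snippet is illustrative only.

// Illustrative sketch: read a node's conditions with client-go,
// mirroring the "Conditions" table above. The kubeconfig path is a
// placeholder, not a value from this report.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Fetch the node and print each condition (type, status, reason).
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "ha-086149-m04", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
}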
	
	
	==> dmesg <==
	[  +9.178691] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.057166] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065842] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.172283] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.148890] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.254962] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +4.015563] systemd-fstab-generator[771]: Ignoring "noauto" option for root device
	[  +4.054508] systemd-fstab-generator[906]: Ignoring "noauto" option for root device
	[  +0.063854] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.951467] systemd-fstab-generator[1326]: Ignoring "noauto" option for root device
	[  +0.096986] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.046961] kauditd_printk_skb: 21 callbacks suppressed
	[Aug19 18:02] kauditd_printk_skb: 37 callbacks suppressed
	[ +54.874778] kauditd_printk_skb: 26 callbacks suppressed
	[Aug19 18:12] systemd-fstab-generator[3539]: Ignoring "noauto" option for root device
	[  +0.152453] systemd-fstab-generator[3551]: Ignoring "noauto" option for root device
	[  +0.178007] systemd-fstab-generator[3565]: Ignoring "noauto" option for root device
	[  +0.154410] systemd-fstab-generator[3577]: Ignoring "noauto" option for root device
	[  +0.275147] systemd-fstab-generator[3605]: Ignoring "noauto" option for root device
	[Aug19 18:13] systemd-fstab-generator[3707]: Ignoring "noauto" option for root device
	[  +0.091829] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.154917] kauditd_printk_skb: 22 callbacks suppressed
	[  +6.555350] kauditd_printk_skb: 75 callbacks suppressed
	[ +32.934513] kauditd_printk_skb: 5 callbacks suppressed
	[  +8.279321] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [426a12b48132d73e1b93e6a7fb5b3420868e384eb280274c6ee81ae6f6bcea12] <==
	2024/08/19 18:11:27 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-19T18:11:27.737563Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15368412145618819627,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-19T18:11:27.763672Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.249:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T18:11:27.763729Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.249:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-19T18:11:27.763789Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"318ee90c3446d547","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-19T18:11:27.763948Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"d67143b3afdcc30"}
	{"level":"info","ts":"2024-08-19T18:11:27.763988Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"d67143b3afdcc30"}
	{"level":"info","ts":"2024-08-19T18:11:27.764030Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"d67143b3afdcc30"}
	{"level":"info","ts":"2024-08-19T18:11:27.764178Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30"}
	{"level":"info","ts":"2024-08-19T18:11:27.764234Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30"}
	{"level":"info","ts":"2024-08-19T18:11:27.764266Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30"}
	{"level":"info","ts":"2024-08-19T18:11:27.764277Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"d67143b3afdcc30"}
	{"level":"info","ts":"2024-08-19T18:11:27.764282Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"15d98aedf6fb70a2"}
	{"level":"info","ts":"2024-08-19T18:11:27.764291Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"15d98aedf6fb70a2"}
	{"level":"info","ts":"2024-08-19T18:11:27.764327Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"15d98aedf6fb70a2"}
	{"level":"info","ts":"2024-08-19T18:11:27.764412Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"318ee90c3446d547","remote-peer-id":"15d98aedf6fb70a2"}
	{"level":"info","ts":"2024-08-19T18:11:27.764456Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"318ee90c3446d547","remote-peer-id":"15d98aedf6fb70a2"}
	{"level":"info","ts":"2024-08-19T18:11:27.764502Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"318ee90c3446d547","remote-peer-id":"15d98aedf6fb70a2"}
	{"level":"info","ts":"2024-08-19T18:11:27.764514Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"15d98aedf6fb70a2"}
	{"level":"info","ts":"2024-08-19T18:11:27.767384Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.249:2380"}
	{"level":"warn","ts":"2024-08-19T18:11:27.767499Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"9.032148929s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-08-19T18:11:27.767544Z","caller":"traceutil/trace.go:171","msg":"trace[1303976489] range","detail":"{range_begin:; range_end:; }","duration":"9.032205059s","start":"2024-08-19T18:11:18.735326Z","end":"2024-08-19T18:11:27.767531Z","steps":["trace[1303976489] 'agreement among raft nodes before linearized reading'  (duration: 9.032147733s)"],"step_count":1}
	{"level":"error","ts":"2024-08-19T18:11:27.767591Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: server stopped\n[+]data_corruption ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-08-19T18:11:27.767772Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.249:2380"}
	{"level":"info","ts":"2024-08-19T18:11:27.767804Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-086149","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.249:2380"],"advertise-client-urls":["https://192.168.39.249:2379"]}
	
	
	==> etcd [8a1b7fec3f151c3ebd32ce721f81861e00daf06da360b6bad7a4c99a4b3c71d5] <==
	{"level":"warn","ts":"2024-08-19T18:15:15.659504Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"15d98aedf6fb70a2","error":"Get \"https://192.168.39.121:2380/version\": dial tcp 192.168.39.121:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T18:15:18.830770Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"15d98aedf6fb70a2","rtt":"0s","error":"dial tcp 192.168.39.121:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T18:15:18.835463Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"15d98aedf6fb70a2","rtt":"0s","error":"dial tcp 192.168.39.121:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T18:15:19.667052Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.121:2380/version","remote-member-id":"15d98aedf6fb70a2","error":"Get \"https://192.168.39.121:2380/version\": dial tcp 192.168.39.121:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T18:15:19.667997Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"15d98aedf6fb70a2","error":"Get \"https://192.168.39.121:2380/version\": dial tcp 192.168.39.121:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T18:15:23.670437Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.121:2380/version","remote-member-id":"15d98aedf6fb70a2","error":"Get \"https://192.168.39.121:2380/version\": dial tcp 192.168.39.121:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T18:15:23.670487Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"15d98aedf6fb70a2","error":"Get \"https://192.168.39.121:2380/version\": dial tcp 192.168.39.121:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T18:15:23.831324Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"15d98aedf6fb70a2","rtt":"0s","error":"dial tcp 192.168.39.121:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T18:15:23.835906Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"15d98aedf6fb70a2","rtt":"0s","error":"dial tcp 192.168.39.121:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T18:15:27.672030Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.121:2380/version","remote-member-id":"15d98aedf6fb70a2","error":"Get \"https://192.168.39.121:2380/version\": dial tcp 192.168.39.121:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T18:15:27.672139Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"15d98aedf6fb70a2","error":"Get \"https://192.168.39.121:2380/version\": dial tcp 192.168.39.121:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T18:15:28.831715Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"15d98aedf6fb70a2","rtt":"0s","error":"dial tcp 192.168.39.121:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T18:15:28.837182Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"15d98aedf6fb70a2","rtt":"0s","error":"dial tcp 192.168.39.121:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-19T18:15:29.677842Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"15d98aedf6fb70a2"}
	{"level":"info","ts":"2024-08-19T18:15:29.677901Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"318ee90c3446d547","remote-peer-id":"15d98aedf6fb70a2"}
	{"level":"info","ts":"2024-08-19T18:15:29.677960Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"318ee90c3446d547","remote-peer-id":"15d98aedf6fb70a2"}
	{"level":"info","ts":"2024-08-19T18:15:29.691646Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"318ee90c3446d547","to":"15d98aedf6fb70a2","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-19T18:15:29.691697Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"318ee90c3446d547","remote-peer-id":"15d98aedf6fb70a2"}
	{"level":"info","ts":"2024-08-19T18:15:29.703852Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"318ee90c3446d547","to":"15d98aedf6fb70a2","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-19T18:15:29.703908Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"318ee90c3446d547","remote-peer-id":"15d98aedf6fb70a2"}
	{"level":"info","ts":"2024-08-19T18:15:31.036989Z","caller":"traceutil/trace.go:171","msg":"trace[1006721948] transaction","detail":"{read_only:false; response_revision:2494; number_of_response:1; }","duration":"107.568279ms","start":"2024-08-19T18:15:30.929394Z","end":"2024-08-19T18:15:31.036962Z","steps":["trace[1006721948] 'process raft request'  (duration: 107.424392ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T18:15:35.491345Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.426574ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T18:15:35.491437Z","caller":"traceutil/trace.go:171","msg":"trace[111029861] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2511; }","duration":"137.55958ms","start":"2024-08-19T18:15:35.353859Z","end":"2024-08-19T18:15:35.491418Z","steps":["trace[111029861] 'agreement among raft nodes before linearized reading'  (duration: 74.578777ms)","trace[111029861] 'range keys from in-memory index tree'  (duration: 62.831484ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T18:16:11.041700Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.556582ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-8snb5\" ","response":"range_response_count:1 size:4870"}
	{"level":"info","ts":"2024-08-19T18:16:11.041897Z","caller":"traceutil/trace.go:171","msg":"trace[521001956] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-8snb5; range_end:; response_count:1; response_revision:2652; }","duration":"101.818476ms","start":"2024-08-19T18:16:10.940057Z","end":"2024-08-19T18:16:11.041876Z","steps":["trace[521001956] 'agreement among raft nodes before linearized reading'  (duration: 99.332111ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:16:16 up 14 min,  0 users,  load average: 0.27, 0.50, 0.30
	Linux ha-086149 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [66fd9c9b32e5e0294c89ebc2ee3c443fda85c40c3ad5b05d42357b4968e8d305] <==
	I0819 18:10:55.256725       1 main.go:322] Node ha-086149-m04 has CIDR [10.244.3.0/24] 
	I0819 18:11:05.253880       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0819 18:11:05.254010       1 main.go:322] Node ha-086149-m04 has CIDR [10.244.3.0/24] 
	I0819 18:11:05.254352       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0819 18:11:05.254496       1 main.go:299] handling current node
	I0819 18:11:05.254549       1 main.go:295] Handling node with IPs: map[192.168.39.167:{}]
	I0819 18:11:05.254571       1 main.go:322] Node ha-086149-m02 has CIDR [10.244.1.0/24] 
	I0819 18:11:05.254680       1 main.go:295] Handling node with IPs: map[192.168.39.121:{}]
	I0819 18:11:05.254702       1 main.go:322] Node ha-086149-m03 has CIDR [10.244.2.0/24] 
	I0819 18:11:15.261601       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0819 18:11:15.261806       1 main.go:299] handling current node
	I0819 18:11:15.261851       1 main.go:295] Handling node with IPs: map[192.168.39.167:{}]
	I0819 18:11:15.261870       1 main.go:322] Node ha-086149-m02 has CIDR [10.244.1.0/24] 
	I0819 18:11:15.262042       1 main.go:295] Handling node with IPs: map[192.168.39.121:{}]
	I0819 18:11:15.262067       1 main.go:322] Node ha-086149-m03 has CIDR [10.244.2.0/24] 
	I0819 18:11:15.262273       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0819 18:11:15.262306       1 main.go:322] Node ha-086149-m04 has CIDR [10.244.3.0/24] 
	I0819 18:11:25.253256       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0819 18:11:25.253371       1 main.go:299] handling current node
	I0819 18:11:25.253427       1 main.go:295] Handling node with IPs: map[192.168.39.167:{}]
	I0819 18:11:25.253446       1 main.go:322] Node ha-086149-m02 has CIDR [10.244.1.0/24] 
	I0819 18:11:25.253659       1 main.go:295] Handling node with IPs: map[192.168.39.121:{}]
	I0819 18:11:25.253683       1 main.go:322] Node ha-086149-m03 has CIDR [10.244.2.0/24] 
	I0819 18:11:25.253744       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0819 18:11:25.253762       1 main.go:322] Node ha-086149-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [c2deb18dfc60e51eddc31befa21ffc0090ce2abc67b4511b62104ca5e8342f60] <==
	I0819 18:15:44.454334       1 main.go:299] handling current node
	I0819 18:15:54.449653       1 main.go:295] Handling node with IPs: map[192.168.39.167:{}]
	I0819 18:15:54.449858       1 main.go:322] Node ha-086149-m02 has CIDR [10.244.1.0/24] 
	I0819 18:15:54.450220       1 main.go:295] Handling node with IPs: map[192.168.39.121:{}]
	I0819 18:15:54.450275       1 main.go:322] Node ha-086149-m03 has CIDR [10.244.2.0/24] 
	I0819 18:15:54.450374       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0819 18:15:54.450394       1 main.go:322] Node ha-086149-m04 has CIDR [10.244.3.0/24] 
	I0819 18:15:54.450476       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0819 18:15:54.450496       1 main.go:299] handling current node
	I0819 18:16:04.454274       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0819 18:16:04.454318       1 main.go:322] Node ha-086149-m04 has CIDR [10.244.3.0/24] 
	I0819 18:16:04.454467       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0819 18:16:04.454493       1 main.go:299] handling current node
	I0819 18:16:04.454505       1 main.go:295] Handling node with IPs: map[192.168.39.167:{}]
	I0819 18:16:04.454509       1 main.go:322] Node ha-086149-m02 has CIDR [10.244.1.0/24] 
	I0819 18:16:04.454565       1 main.go:295] Handling node with IPs: map[192.168.39.121:{}]
	I0819 18:16:04.454584       1 main.go:322] Node ha-086149-m03 has CIDR [10.244.2.0/24] 
	I0819 18:16:14.450230       1 main.go:295] Handling node with IPs: map[192.168.39.121:{}]
	I0819 18:16:14.450283       1 main.go:322] Node ha-086149-m03 has CIDR [10.244.2.0/24] 
	I0819 18:16:14.450519       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0819 18:16:14.450554       1 main.go:322] Node ha-086149-m04 has CIDR [10.244.3.0/24] 
	I0819 18:16:14.450642       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0819 18:16:14.450671       1 main.go:299] handling current node
	I0819 18:16:14.450685       1 main.go:295] Handling node with IPs: map[192.168.39.167:{}]
	I0819 18:16:14.450712       1 main.go:322] Node ha-086149-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [3dbebbcf5b28297a583e961cdeb22de8d630ca8836e6f0ffcca3c4fe28b9a104] <==
	I0819 18:13:13.608871       1 options.go:228] external host was not specified, using 192.168.39.249
	I0819 18:13:13.613712       1 server.go:142] Version: v1.31.0
	I0819 18:13:13.619756       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 18:13:14.538281       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0819 18:13:14.541682       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 18:13:14.546515       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0819 18:13:14.546607       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0819 18:13:14.546859       1 instance.go:232] Using reconciler: lease
	W0819 18:13:34.534239       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0819 18:13:34.534414       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0819 18:13:34.548163       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0819 18:13:34.548314       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [8eb0bee9a15dccc2d82fb1b3ac35c0edda4dfaf7f15f58e06a340bf55e8f26ab] <==
	I0819 18:13:54.134903       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I0819 18:13:54.134935       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0819 18:13:54.211004       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0819 18:13:54.211057       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0819 18:13:54.212805       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0819 18:13:54.213000       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0819 18:13:54.213127       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0819 18:13:54.218385       1 shared_informer.go:320] Caches are synced for configmaps
	I0819 18:13:54.218453       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0819 18:13:54.218460       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0819 18:13:54.220682       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0819 18:13:54.224192       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 18:13:54.224226       1 policy_source.go:224] refreshing policies
	W0819 18:13:54.227449       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.121 192.168.39.167]
	I0819 18:13:54.228676       1 controller.go:615] quota admission added evaluator for: endpoints
	I0819 18:13:54.237009       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0819 18:13:54.237056       1 aggregator.go:171] initial CRD sync complete...
	I0819 18:13:54.237075       1 autoregister_controller.go:144] Starting autoregister controller
	I0819 18:13:54.237121       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0819 18:13:54.237128       1 cache.go:39] Caches are synced for autoregister controller
	I0819 18:13:54.238876       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0819 18:13:54.241874       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0819 18:13:54.305856       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0819 18:13:55.146014       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0819 18:13:55.755504       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.121 192.168.39.167 192.168.39.249]
	
	
	==> kube-controller-manager [0b110ed1c7e4de28f673ba115eee8636180545973d22374de2fefcc11c697539] <==
	I0819 18:14:32.107891       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m03"
	I0819 18:14:32.132636       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="25.668701ms"
	I0819 18:14:32.132771       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="64.006µs"
	I0819 18:14:32.809287       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m03"
	I0819 18:14:37.367797       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m03"
	I0819 18:14:38.477554       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m02"
	I0819 18:14:42.887446       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m04"
	I0819 18:14:46.061325       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-2dwgm EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-2dwgm\": the object has been modified; please apply your changes to the latest version and try again"
	I0819 18:14:46.062202       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"76974bd6-6358-4f74-b56d-6f851ec737ae", APIVersion:"v1", ResourceVersion:"248", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-2dwgm EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-2dwgm": the object has been modified; please apply your changes to the latest version and try again
	I0819 18:14:46.104753       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="65.269612ms"
	I0819 18:14:46.104984       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="100.432µs"
	I0819 18:14:47.449209       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m04"
	I0819 18:15:20.019345       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m03"
	I0819 18:15:20.043396       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m03"
	I0819 18:15:20.888874       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="39.363µs"
	I0819 18:15:22.309153       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m03"
	I0819 18:15:38.986868       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m04"
	I0819 18:15:39.034988       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m04"
	I0819 18:15:40.120946       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="12.128713ms"
	I0819 18:15:40.121039       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="44.76µs"
	I0819 18:15:50.447730       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m03"
	I0819 18:16:07.560396       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-086149-m04"
	I0819 18:16:07.561448       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m04"
	I0819 18:16:07.592930       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m04"
	I0819 18:16:07.740557       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m04"
	
	
	==> kube-controller-manager [ea2f2cfbcacac8b9d0f716fc5bf8be816dac486447f26b5969f1d79a9031f7ca] <==
	I0819 18:13:14.057185       1 serving.go:386] Generated self-signed cert in-memory
	I0819 18:13:14.479168       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0819 18:13:14.479210       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 18:13:14.481055       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0819 18:13:14.481286       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0819 18:13:14.481813       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0819 18:13:14.481871       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0819 18:13:35.558373       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.249:8443/healthz\": dial tcp 192.168.39.249:8443: connect: connection refused"
	
	
	==> kube-proxy [7421b967684844bf1fe8f4abc52f1cd8635544a588cbdb2b910b55bf74594619] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 18:13:14.654644       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-086149\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0819 18:13:17.727705       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-086149\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0819 18:13:20.798256       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-086149\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0819 18:13:26.944396       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-086149\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0819 18:13:36.157495       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-086149\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0819 18:13:54.660920       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.249"]
	E0819 18:13:54.663530       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 18:13:55.157974       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 18:13:55.158351       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 18:13:55.158489       1 server_linux.go:169] "Using iptables Proxier"
	I0819 18:13:55.164853       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 18:13:55.169656       1 server.go:483] "Version info" version="v1.31.0"
	I0819 18:13:55.169915       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 18:13:55.174365       1 config.go:197] "Starting service config controller"
	I0819 18:13:55.174478       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 18:13:55.176684       1 config.go:104] "Starting endpoint slice config controller"
	I0819 18:13:55.177212       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 18:13:55.178024       1 config.go:326] "Starting node config controller"
	I0819 18:13:55.178076       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 18:13:55.277940       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 18:13:55.278565       1 shared_informer.go:320] Caches are synced for service config
	I0819 18:13:55.278585       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [eb8cccc1568bbb207d2c7c285f3897a7a425cba60f4dfcf3e8daa8082fc38ef0] <==
	E0819 18:10:25.703221       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1900\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 18:10:25.704174       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-086149&resourceVersion=1867": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 18:10:25.707259       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-086149&resourceVersion=1867\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 18:10:25.718147       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 18:10:25.718292       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1900\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 18:10:28.766492       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 18:10:28.766573       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1900\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 18:10:31.839383       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 18:10:31.839529       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1900\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 18:10:31.840527       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-086149&resourceVersion=1867": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 18:10:31.840582       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-086149&resourceVersion=1867\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 18:10:34.910480       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 18:10:34.910735       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1900\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 18:10:41.054583       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-086149&resourceVersion=1867": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 18:10:41.054663       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-086149&resourceVersion=1867\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 18:10:44.126300       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 18:10:44.127077       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1900\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 18:10:47.198727       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 18:10:47.198928       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1900\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 18:10:56.414227       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-086149&resourceVersion=1867": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 18:10:56.414371       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-086149&resourceVersion=1867\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 18:11:05.630418       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 18:11:05.630530       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1900\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 18:11:08.701861       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 18:11:08.702426       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1900\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
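
This older kube-proxy instance keeps retrying the control-plane VIP (control-plane.minikube.internal, 192.168.39.254:8443) and getting "no route to host" until the API server comes back. As a hedged aside (not part of the test output), the sketch below polls the API server's /readyz endpoint in a similar retry loop; the URL mirrors the address in the log, and TLS verification is skipped only because this is a sketch, not how the real components authenticate.

// Illustrative sketch: poll the API server readiness endpoint with
// backoff, roughly the pattern behind the retries logged above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   3 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // sketch only
	}
	url := "https://192.168.39.254:8443/readyz" // VIP from the log above

	for i := 0; i < 20; i++ {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("attempt %d: %v\n", i+1, err)
		} else {
			fmt.Printf("attempt %d: HTTP %d\n", i+1, resp.StatusCode)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(3 * time.Second)
	}
}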
	
	
	==> kube-scheduler [4760d8a0d8843fa04600f76c7a9e2b2ba5c4212e748492168d8c00d31ea0d515] <==
	W0819 18:13:44.327774       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.249:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0819 18:13:44.327848       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.249:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0819 18:13:44.705067       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.249:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0819 18:13:44.705213       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.249:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0819 18:13:45.943647       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.249:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0819 18:13:45.943806       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.249:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0819 18:13:46.072894       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.249:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0819 18:13:46.072954       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.249:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0819 18:13:50.753255       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.249:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0819 18:13:50.753431       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.249:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0819 18:13:50.775833       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.249:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0819 18:13:50.775953       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.249:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0819 18:13:54.190864       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 18:13:54.191062       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0819 18:13:54.191711       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 18:13:54.192990       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:13:54.193433       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 18:13:54.195199       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 18:13:54.195706       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 18:13:54.195817       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:13:54.196032       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 18:13:54.196148       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 18:13:54.196296       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0819 18:13:54.196459       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0819 18:14:15.865213       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [d0e66231bf791048a9932068b5f28d8479613545885bea8e42cf9c79913ffccd] <==
	E0819 18:04:39.322857       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-fd2dw\": pod busybox-7dff88458-fd2dw is already assigned to node \"ha-086149\"" pod="default/busybox-7dff88458-fd2dw"
	I0819 18:04:39.322879       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-fd2dw" node="ha-086149"
	E0819 18:04:39.328354       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-vgcdh\": pod busybox-7dff88458-vgcdh is already assigned to node \"ha-086149-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-vgcdh" node="ha-086149-m02"
	E0819 18:04:39.328444       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-vgcdh\": pod busybox-7dff88458-vgcdh is already assigned to node \"ha-086149-m02\"" pod="default/busybox-7dff88458-vgcdh"
	E0819 18:11:04.562137       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0819 18:11:10.701998       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0819 18:11:12.036397       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0819 18:11:12.392187       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0819 18:11:13.316775       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0819 18:11:14.080432       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0819 18:11:15.559037       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0819 18:11:16.554424       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0819 18:11:16.591958       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0819 18:11:16.676641       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0819 18:11:18.981726       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0819 18:11:20.048874       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0819 18:11:21.237680       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0819 18:11:23.228000       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0819 18:11:23.304646       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	W0819 18:11:26.223810       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 18:11:26.223879       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0819 18:11:27.688620       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0819 18:11:27.689515       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0819 18:11:27.689929       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0819 18:11:27.696259       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 19 18:14:55 ha-086149 kubelet[1333]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 18:14:55 ha-086149 kubelet[1333]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 18:14:55 ha-086149 kubelet[1333]: E0819 18:14:55.526275    1333 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091295525599757,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:14:55 ha-086149 kubelet[1333]: E0819 18:14:55.526378    1333 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091295525599757,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:15:05 ha-086149 kubelet[1333]: E0819 18:15:05.530595    1333 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091305530039381,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:15:05 ha-086149 kubelet[1333]: E0819 18:15:05.530692    1333 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091305530039381,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:15:15 ha-086149 kubelet[1333]: E0819 18:15:15.536679    1333 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091315532817188,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:15:15 ha-086149 kubelet[1333]: E0819 18:15:15.537367    1333 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091315532817188,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:15:25 ha-086149 kubelet[1333]: E0819 18:15:25.539688    1333 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091325539045608,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:15:25 ha-086149 kubelet[1333]: E0819 18:15:25.540158    1333 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091325539045608,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:15:35 ha-086149 kubelet[1333]: E0819 18:15:35.543065    1333 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091335541789030,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:15:35 ha-086149 kubelet[1333]: E0819 18:15:35.543168    1333 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091335541789030,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:15:45 ha-086149 kubelet[1333]: E0819 18:15:45.545584    1333 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091345544976641,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:15:45 ha-086149 kubelet[1333]: E0819 18:15:45.545881    1333 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091345544976641,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:15:55 ha-086149 kubelet[1333]: E0819 18:15:55.298146    1333 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 18:15:55 ha-086149 kubelet[1333]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 18:15:55 ha-086149 kubelet[1333]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 18:15:55 ha-086149 kubelet[1333]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 18:15:55 ha-086149 kubelet[1333]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 18:15:55 ha-086149 kubelet[1333]: E0819 18:15:55.547527    1333 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091355547248390,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:15:55 ha-086149 kubelet[1333]: E0819 18:15:55.547567    1333 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091355547248390,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:16:05 ha-086149 kubelet[1333]: E0819 18:16:05.550132    1333 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091365549773056,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:16:05 ha-086149 kubelet[1333]: E0819 18:16:05.550233    1333 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091365549773056,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:16:15 ha-086149 kubelet[1333]: E0819 18:16:15.558000    1333 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091375555877483,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:16:15 ha-086149 kubelet[1333]: E0819 18:16:15.559068    1333 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091375555877483,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

-- /stdout --
** stderr ** 
	E0819 18:16:15.200043  398546 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19468-372744/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-086149 -n ha-086149
helpers_test.go:261: (dbg) Run:  kubectl --context ha-086149 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (412.74s)

x
+
TestMultiControlPlane/serial/StopCluster (141.91s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 stop -v=7 --alsologtostderr
E0819 18:17:10.114150  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-086149 stop -v=7 --alsologtostderr: exit status 82 (2m0.49150861s)

-- stdout --
	* Stopping node "ha-086149-m04"  ...
	
	

-- /stdout --
** stderr ** 
	I0819 18:16:34.846512  398956 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:16:34.846761  398956 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:16:34.846771  398956 out.go:358] Setting ErrFile to fd 2...
	I0819 18:16:34.846776  398956 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:16:34.846969  398956 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-372744/.minikube/bin
	I0819 18:16:34.847210  398956 out.go:352] Setting JSON to false
	I0819 18:16:34.847293  398956 mustload.go:65] Loading cluster: ha-086149
	I0819 18:16:34.847636  398956 config.go:182] Loaded profile config "ha-086149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:16:34.847777  398956 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/config.json ...
	I0819 18:16:34.847964  398956 mustload.go:65] Loading cluster: ha-086149
	I0819 18:16:34.848112  398956 config.go:182] Loaded profile config "ha-086149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:16:34.848154  398956 stop.go:39] StopHost: ha-086149-m04
	I0819 18:16:34.848515  398956 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:16:34.848561  398956 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:16:34.866354  398956 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37019
	I0819 18:16:34.866877  398956 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:16:34.867482  398956 main.go:141] libmachine: Using API Version  1
	I0819 18:16:34.867506  398956 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:16:34.867910  398956 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:16:34.870284  398956 out.go:177] * Stopping node "ha-086149-m04"  ...
	I0819 18:16:34.871700  398956 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0819 18:16:34.871764  398956 main.go:141] libmachine: (ha-086149-m04) Calling .DriverName
	I0819 18:16:34.872070  398956 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0819 18:16:34.872097  398956 main.go:141] libmachine: (ha-086149-m04) Calling .GetSSHHostname
	I0819 18:16:34.875822  398956 main.go:141] libmachine: (ha-086149-m04) DBG | domain ha-086149-m04 has defined MAC address 52:54:00:03:a4:7a in network mk-ha-086149
	I0819 18:16:34.876317  398956 main.go:141] libmachine: (ha-086149-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a4:7a", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:16:02 +0000 UTC Type:0 Mac:52:54:00:03:a4:7a Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-086149-m04 Clientid:01:52:54:00:03:a4:7a}
	I0819 18:16:34.876354  398956 main.go:141] libmachine: (ha-086149-m04) DBG | domain ha-086149-m04 has defined IP address 192.168.39.173 and MAC address 52:54:00:03:a4:7a in network mk-ha-086149
	I0819 18:16:34.876539  398956 main.go:141] libmachine: (ha-086149-m04) Calling .GetSSHPort
	I0819 18:16:34.876759  398956 main.go:141] libmachine: (ha-086149-m04) Calling .GetSSHKeyPath
	I0819 18:16:34.876948  398956 main.go:141] libmachine: (ha-086149-m04) Calling .GetSSHUsername
	I0819 18:16:34.877171  398956 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m04/id_rsa Username:docker}
	I0819 18:16:34.964209  398956 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0819 18:16:35.017721  398956 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0819 18:16:35.071195  398956 main.go:141] libmachine: Stopping "ha-086149-m04"...
	I0819 18:16:35.071224  398956 main.go:141] libmachine: (ha-086149-m04) Calling .GetState
	I0819 18:16:35.073240  398956 main.go:141] libmachine: (ha-086149-m04) Calling .Stop
	I0819 18:16:35.076912  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 0/120
	I0819 18:16:36.078771  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 1/120
	I0819 18:16:37.081134  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 2/120
	I0819 18:16:38.082543  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 3/120
	I0819 18:16:39.084163  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 4/120
	I0819 18:16:40.086272  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 5/120
	I0819 18:16:41.087967  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 6/120
	I0819 18:16:42.090295  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 7/120
	I0819 18:16:43.091745  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 8/120
	I0819 18:16:44.093200  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 9/120
	I0819 18:16:45.094608  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 10/120
	I0819 18:16:46.096168  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 11/120
	I0819 18:16:47.097766  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 12/120
	I0819 18:16:48.099317  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 13/120
	I0819 18:16:49.100739  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 14/120
	I0819 18:16:50.102761  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 15/120
	I0819 18:16:51.104161  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 16/120
	I0819 18:16:52.106461  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 17/120
	I0819 18:16:53.107839  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 18/120
	I0819 18:16:54.110287  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 19/120
	I0819 18:16:55.112110  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 20/120
	I0819 18:16:56.114428  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 21/120
	I0819 18:16:57.116112  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 22/120
	I0819 18:16:58.117394  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 23/120
	I0819 18:16:59.119000  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 24/120
	I0819 18:17:00.121310  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 25/120
	I0819 18:17:01.122846  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 26/120
	I0819 18:17:02.124349  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 27/120
	I0819 18:17:03.126293  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 28/120
	I0819 18:17:04.127868  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 29/120
	I0819 18:17:05.130199  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 30/120
	I0819 18:17:06.131602  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 31/120
	I0819 18:17:07.132956  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 32/120
	I0819 18:17:08.134472  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 33/120
	I0819 18:17:09.136510  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 34/120
	I0819 18:17:10.138538  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 35/120
	I0819 18:17:11.140268  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 36/120
	I0819 18:17:12.141611  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 37/120
	I0819 18:17:13.143260  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 38/120
	I0819 18:17:14.144834  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 39/120
	I0819 18:17:15.146953  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 40/120
	I0819 18:17:16.148447  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 41/120
	I0819 18:17:17.149934  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 42/120
	I0819 18:17:18.151321  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 43/120
	I0819 18:17:19.153185  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 44/120
	I0819 18:17:20.155419  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 45/120
	I0819 18:17:21.156893  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 46/120
	I0819 18:17:22.158414  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 47/120
	I0819 18:17:23.159971  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 48/120
	I0819 18:17:24.161550  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 49/120
	I0819 18:17:25.163990  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 50/120
	I0819 18:17:26.166454  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 51/120
	I0819 18:17:27.167882  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 52/120
	I0819 18:17:28.170380  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 53/120
	I0819 18:17:29.172555  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 54/120
	I0819 18:17:30.174355  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 55/120
	I0819 18:17:31.175843  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 56/120
	I0819 18:17:32.177241  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 57/120
	I0819 18:17:33.178578  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 58/120
	I0819 18:17:34.179970  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 59/120
	I0819 18:17:35.182260  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 60/120
	I0819 18:17:36.183691  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 61/120
	I0819 18:17:37.184949  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 62/120
	I0819 18:17:38.186449  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 63/120
	I0819 18:17:39.188019  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 64/120
	I0819 18:17:40.189363  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 65/120
	I0819 18:17:41.190831  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 66/120
	I0819 18:17:42.192264  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 67/120
	I0819 18:17:43.193927  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 68/120
	I0819 18:17:44.195309  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 69/120
	I0819 18:17:45.196718  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 70/120
	I0819 18:17:46.198112  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 71/120
	I0819 18:17:47.199697  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 72/120
	I0819 18:17:48.201087  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 73/120
	I0819 18:17:49.202721  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 74/120
	I0819 18:17:50.205139  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 75/120
	I0819 18:17:51.206951  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 76/120
	I0819 18:17:52.208731  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 77/120
	I0819 18:17:53.210025  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 78/120
	I0819 18:17:54.211504  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 79/120
	I0819 18:17:55.213705  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 80/120
	I0819 18:17:56.215487  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 81/120
	I0819 18:17:57.217024  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 82/120
	I0819 18:17:58.218629  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 83/120
	I0819 18:17:59.220161  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 84/120
	I0819 18:18:00.222242  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 85/120
	I0819 18:18:01.223819  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 86/120
	I0819 18:18:02.226255  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 87/120
	I0819 18:18:03.227950  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 88/120
	I0819 18:18:04.230131  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 89/120
	I0819 18:18:05.232290  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 90/120
	I0819 18:18:06.234429  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 91/120
	I0819 18:18:07.236427  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 92/120
	I0819 18:18:08.237868  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 93/120
	I0819 18:18:09.239337  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 94/120
	I0819 18:18:10.241110  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 95/120
	I0819 18:18:11.242475  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 96/120
	I0819 18:18:12.244112  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 97/120
	I0819 18:18:13.245539  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 98/120
	I0819 18:18:14.247112  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 99/120
	I0819 18:18:15.249477  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 100/120
	I0819 18:18:16.250811  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 101/120
	I0819 18:18:17.252480  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 102/120
	I0819 18:18:18.253982  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 103/120
	I0819 18:18:19.256374  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 104/120
	I0819 18:18:20.258424  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 105/120
	I0819 18:18:21.259977  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 106/120
	I0819 18:18:22.262196  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 107/120
	I0819 18:18:23.263914  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 108/120
	I0819 18:18:24.265353  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 109/120
	I0819 18:18:25.266739  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 110/120
	I0819 18:18:26.269049  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 111/120
	I0819 18:18:27.270477  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 112/120
	I0819 18:18:28.271886  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 113/120
	I0819 18:18:29.274284  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 114/120
	I0819 18:18:30.276173  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 115/120
	I0819 18:18:31.278050  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 116/120
	I0819 18:18:32.279792  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 117/120
	I0819 18:18:33.281332  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 118/120
	I0819 18:18:34.282675  398956 main.go:141] libmachine: (ha-086149-m04) Waiting for machine to stop 119/120
	I0819 18:18:35.283846  398956 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0819 18:18:35.283913  398956 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0819 18:18:35.285748  398956 out.go:201] 
	W0819 18:18:35.287145  398956 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0819 18:18:35.287170  398956 out.go:270] * 
	* 
	W0819 18:18:35.290274  398956 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 18:18:35.291665  398956 out.go:201] 

** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-086149 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-086149 status -v=7 --alsologtostderr: exit status 3 (19.046649697s)

-- stdout --
	ha-086149
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-086149-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-086149-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0819 18:18:35.342985  399826 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:18:35.343104  399826 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:18:35.343113  399826 out.go:358] Setting ErrFile to fd 2...
	I0819 18:18:35.343117  399826 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:18:35.343326  399826 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-372744/.minikube/bin
	I0819 18:18:35.343560  399826 out.go:352] Setting JSON to false
	I0819 18:18:35.343593  399826 mustload.go:65] Loading cluster: ha-086149
	I0819 18:18:35.343721  399826 notify.go:220] Checking for updates...
	I0819 18:18:35.344102  399826 config.go:182] Loaded profile config "ha-086149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:18:35.344123  399826 status.go:255] checking status of ha-086149 ...
	I0819 18:18:35.344667  399826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:18:35.344720  399826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:18:35.360247  399826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33215
	I0819 18:18:35.360729  399826 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:18:35.361342  399826 main.go:141] libmachine: Using API Version  1
	I0819 18:18:35.361368  399826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:18:35.361708  399826 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:18:35.361964  399826 main.go:141] libmachine: (ha-086149) Calling .GetState
	I0819 18:18:35.364075  399826 status.go:330] ha-086149 host status = "Running" (err=<nil>)
	I0819 18:18:35.364106  399826 host.go:66] Checking if "ha-086149" exists ...
	I0819 18:18:35.364418  399826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:18:35.364471  399826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:18:35.380066  399826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37317
	I0819 18:18:35.380513  399826 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:18:35.381062  399826 main.go:141] libmachine: Using API Version  1
	I0819 18:18:35.381091  399826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:18:35.381475  399826 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:18:35.381784  399826 main.go:141] libmachine: (ha-086149) Calling .GetIP
	I0819 18:18:35.384666  399826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:18:35.385130  399826 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:18:35.385157  399826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:18:35.385264  399826 host.go:66] Checking if "ha-086149" exists ...
	I0819 18:18:35.385600  399826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:18:35.385654  399826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:18:35.401035  399826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38227
	I0819 18:18:35.401595  399826 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:18:35.402029  399826 main.go:141] libmachine: Using API Version  1
	I0819 18:18:35.402042  399826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:18:35.402306  399826 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:18:35.402509  399826 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:18:35.402685  399826 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:18:35.402706  399826 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:18:35.405907  399826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:18:35.406357  399826 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:18:35.406392  399826 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:18:35.406536  399826 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:18:35.406710  399826 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:18:35.406971  399826 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:18:35.407178  399826 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/id_rsa Username:docker}
	I0819 18:18:35.489512  399826 ssh_runner.go:195] Run: systemctl --version
	I0819 18:18:35.496353  399826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:18:35.514950  399826 kubeconfig.go:125] found "ha-086149" server: "https://192.168.39.254:8443"
	I0819 18:18:35.514997  399826 api_server.go:166] Checking apiserver status ...
	I0819 18:18:35.515057  399826 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:18:35.533305  399826 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4901/cgroup
	W0819 18:18:35.544459  399826 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4901/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 18:18:35.544515  399826 ssh_runner.go:195] Run: ls
	I0819 18:18:35.550432  399826 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 18:18:35.557854  399826 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 18:18:35.557880  399826 status.go:422] ha-086149 apiserver status = Running (err=<nil>)
	I0819 18:18:35.557891  399826 status.go:257] ha-086149 status: &{Name:ha-086149 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 18:18:35.557908  399826 status.go:255] checking status of ha-086149-m02 ...
	I0819 18:18:35.558224  399826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:18:35.558268  399826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:18:35.573673  399826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36871
	I0819 18:18:35.574173  399826 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:18:35.574695  399826 main.go:141] libmachine: Using API Version  1
	I0819 18:18:35.574718  399826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:18:35.575087  399826 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:18:35.575272  399826 main.go:141] libmachine: (ha-086149-m02) Calling .GetState
	I0819 18:18:35.576784  399826 status.go:330] ha-086149-m02 host status = "Running" (err=<nil>)
	I0819 18:18:35.576814  399826 host.go:66] Checking if "ha-086149-m02" exists ...
	I0819 18:18:35.577144  399826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:18:35.577177  399826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:18:35.592277  399826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42093
	I0819 18:18:35.592728  399826 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:18:35.593266  399826 main.go:141] libmachine: Using API Version  1
	I0819 18:18:35.593290  399826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:18:35.593590  399826 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:18:35.593757  399826 main.go:141] libmachine: (ha-086149-m02) Calling .GetIP
	I0819 18:18:35.597088  399826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:18:35.597497  399826 main.go:141] libmachine: (ha-086149-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:13:18 +0000 UTC Type:0 Mac:52:54:00:b9:44:0e Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-086149-m02 Clientid:01:52:54:00:b9:44:0e}
	I0819 18:18:35.597528  399826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:18:35.597689  399826 host.go:66] Checking if "ha-086149-m02" exists ...
	I0819 18:18:35.597993  399826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:18:35.598031  399826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:18:35.613769  399826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33035
	I0819 18:18:35.614284  399826 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:18:35.614765  399826 main.go:141] libmachine: Using API Version  1
	I0819 18:18:35.614788  399826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:18:35.615170  399826 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:18:35.615387  399826 main.go:141] libmachine: (ha-086149-m02) Calling .DriverName
	I0819 18:18:35.615604  399826 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:18:35.615631  399826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHHostname
	I0819 18:18:35.618503  399826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:18:35.618897  399826 main.go:141] libmachine: (ha-086149-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:44:0e", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:13:18 +0000 UTC Type:0 Mac:52:54:00:b9:44:0e Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-086149-m02 Clientid:01:52:54:00:b9:44:0e}
	I0819 18:18:35.618924  399826 main.go:141] libmachine: (ha-086149-m02) DBG | domain ha-086149-m02 has defined IP address 192.168.39.167 and MAC address 52:54:00:b9:44:0e in network mk-ha-086149
	I0819 18:18:35.619069  399826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHPort
	I0819 18:18:35.619240  399826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHKeyPath
	I0819 18:18:35.619393  399826 main.go:141] libmachine: (ha-086149-m02) Calling .GetSSHUsername
	I0819 18:18:35.619509  399826 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m02/id_rsa Username:docker}
	I0819 18:18:35.704621  399826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:18:35.722908  399826 kubeconfig.go:125] found "ha-086149" server: "https://192.168.39.254:8443"
	I0819 18:18:35.722938  399826 api_server.go:166] Checking apiserver status ...
	I0819 18:18:35.722991  399826 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:18:35.740896  399826 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1593/cgroup
	W0819 18:18:35.750983  399826 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1593/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 18:18:35.751043  399826 ssh_runner.go:195] Run: ls
	I0819 18:18:35.756179  399826 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 18:18:35.760568  399826 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 18:18:35.760594  399826 status.go:422] ha-086149-m02 apiserver status = Running (err=<nil>)
	I0819 18:18:35.760605  399826 status.go:257] ha-086149-m02 status: &{Name:ha-086149-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 18:18:35.760626  399826 status.go:255] checking status of ha-086149-m04 ...
	I0819 18:18:35.760997  399826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:18:35.761042  399826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:18:35.776798  399826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45851
	I0819 18:18:35.777295  399826 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:18:35.777872  399826 main.go:141] libmachine: Using API Version  1
	I0819 18:18:35.777893  399826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:18:35.778217  399826 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:18:35.778393  399826 main.go:141] libmachine: (ha-086149-m04) Calling .GetState
	I0819 18:18:35.780009  399826 status.go:330] ha-086149-m04 host status = "Running" (err=<nil>)
	I0819 18:18:35.780027  399826 host.go:66] Checking if "ha-086149-m04" exists ...
	I0819 18:18:35.780379  399826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:18:35.780422  399826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:18:35.795191  399826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35707
	I0819 18:18:35.795549  399826 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:18:35.796020  399826 main.go:141] libmachine: Using API Version  1
	I0819 18:18:35.796040  399826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:18:35.796365  399826 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:18:35.796542  399826 main.go:141] libmachine: (ha-086149-m04) Calling .GetIP
	I0819 18:18:35.799235  399826 main.go:141] libmachine: (ha-086149-m04) DBG | domain ha-086149-m04 has defined MAC address 52:54:00:03:a4:7a in network mk-ha-086149
	I0819 18:18:35.799585  399826 main.go:141] libmachine: (ha-086149-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a4:7a", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:16:02 +0000 UTC Type:0 Mac:52:54:00:03:a4:7a Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-086149-m04 Clientid:01:52:54:00:03:a4:7a}
	I0819 18:18:35.799621  399826 main.go:141] libmachine: (ha-086149-m04) DBG | domain ha-086149-m04 has defined IP address 192.168.39.173 and MAC address 52:54:00:03:a4:7a in network mk-ha-086149
	I0819 18:18:35.799780  399826 host.go:66] Checking if "ha-086149-m04" exists ...
	I0819 18:18:35.800064  399826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:18:35.800102  399826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:18:35.815281  399826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40005
	I0819 18:18:35.815904  399826 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:18:35.816491  399826 main.go:141] libmachine: Using API Version  1
	I0819 18:18:35.816512  399826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:18:35.816925  399826 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:18:35.817150  399826 main.go:141] libmachine: (ha-086149-m04) Calling .DriverName
	I0819 18:18:35.817382  399826 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:18:35.817407  399826 main.go:141] libmachine: (ha-086149-m04) Calling .GetSSHHostname
	I0819 18:18:35.820533  399826 main.go:141] libmachine: (ha-086149-m04) DBG | domain ha-086149-m04 has defined MAC address 52:54:00:03:a4:7a in network mk-ha-086149
	I0819 18:18:35.821002  399826 main.go:141] libmachine: (ha-086149-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:a4:7a", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:16:02 +0000 UTC Type:0 Mac:52:54:00:03:a4:7a Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-086149-m04 Clientid:01:52:54:00:03:a4:7a}
	I0819 18:18:35.821039  399826 main.go:141] libmachine: (ha-086149-m04) DBG | domain ha-086149-m04 has defined IP address 192.168.39.173 and MAC address 52:54:00:03:a4:7a in network mk-ha-086149
	I0819 18:18:35.821235  399826 main.go:141] libmachine: (ha-086149-m04) Calling .GetSSHPort
	I0819 18:18:35.821404  399826 main.go:141] libmachine: (ha-086149-m04) Calling .GetSSHKeyPath
	I0819 18:18:35.821570  399826 main.go:141] libmachine: (ha-086149-m04) Calling .GetSSHUsername
	I0819 18:18:35.821716  399826 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149-m04/id_rsa Username:docker}
	W0819 18:18:54.339859  399826 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.173:22: connect: no route to host
	W0819 18:18:54.339988  399826 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.173:22: connect: no route to host
	E0819 18:18:54.340005  399826 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.173:22: connect: no route to host
	I0819 18:18:54.340013  399826 status.go:257] ha-086149-m04 status: &{Name:ha-086149-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0819 18:18:54.340042  399826 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.173:22: connect: no route to host

** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-086149 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-086149 -n ha-086149
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-086149 logs -n 25: (1.733639116s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-086149 ssh -n ha-086149-m02 sudo cat                                          | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | /home/docker/cp-test_ha-086149-m03_ha-086149-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-086149 cp ha-086149-m03:/home/docker/cp-test.txt                              | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149-m04:/home/docker/cp-test_ha-086149-m03_ha-086149-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-086149 ssh -n                                                                 | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-086149 ssh -n ha-086149-m04 sudo cat                                          | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | /home/docker/cp-test_ha-086149-m03_ha-086149-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-086149 cp testdata/cp-test.txt                                                | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-086149 ssh -n                                                                 | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-086149 cp ha-086149-m04:/home/docker/cp-test.txt                              | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3465103634/001/cp-test_ha-086149-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-086149 ssh -n                                                                 | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-086149 cp ha-086149-m04:/home/docker/cp-test.txt                              | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149:/home/docker/cp-test_ha-086149-m04_ha-086149.txt                       |           |         |         |                     |                     |
	| ssh     | ha-086149 ssh -n                                                                 | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-086149 ssh -n ha-086149 sudo cat                                              | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | /home/docker/cp-test_ha-086149-m04_ha-086149.txt                                 |           |         |         |                     |                     |
	| cp      | ha-086149 cp ha-086149-m04:/home/docker/cp-test.txt                              | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149-m02:/home/docker/cp-test_ha-086149-m04_ha-086149-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-086149 ssh -n                                                                 | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-086149 ssh -n ha-086149-m02 sudo cat                                          | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | /home/docker/cp-test_ha-086149-m04_ha-086149-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-086149 cp ha-086149-m04:/home/docker/cp-test.txt                              | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149-m03:/home/docker/cp-test_ha-086149-m04_ha-086149-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-086149 ssh -n                                                                 | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | ha-086149-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-086149 ssh -n ha-086149-m03 sudo cat                                          | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	|         | /home/docker/cp-test_ha-086149-m04_ha-086149-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-086149 node stop m02 -v=7                                                     | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-086149 node start m02 -v=7                                                    | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:08 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-086149 -v=7                                                           | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:09 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-086149 -v=7                                                                | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:09 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-086149 --wait=true -v=7                                                    | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:16 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-086149                                                                | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:16 UTC |                     |
	| node    | ha-086149 node delete m03 -v=7                                                   | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:16 UTC | 19 Aug 24 18:16 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-086149 stop -v=7                                                              | ha-086149 | jenkins | v1.33.1 | 19 Aug 24 18:16 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 18:11:26
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 18:11:26.790163  397087 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:11:26.790285  397087 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:11:26.790294  397087 out.go:358] Setting ErrFile to fd 2...
	I0819 18:11:26.790299  397087 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:11:26.790509  397087 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-372744/.minikube/bin
	I0819 18:11:26.791095  397087 out.go:352] Setting JSON to false
	I0819 18:11:26.792211  397087 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":6830,"bootTime":1724084257,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 18:11:26.792279  397087 start.go:139] virtualization: kvm guest
	I0819 18:11:26.794666  397087 out.go:177] * [ha-086149] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 18:11:26.796373  397087 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 18:11:26.796412  397087 notify.go:220] Checking for updates...
	I0819 18:11:26.799215  397087 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 18:11:26.800518  397087 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 18:11:26.801734  397087 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 18:11:26.802834  397087 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 18:11:26.803999  397087 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 18:11:26.805744  397087 config.go:182] Loaded profile config "ha-086149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:11:26.805842  397087 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 18:11:26.806227  397087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:11:26.806287  397087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:11:26.821836  397087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43047
	I0819 18:11:26.822281  397087 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:11:26.822831  397087 main.go:141] libmachine: Using API Version  1
	I0819 18:11:26.822851  397087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:11:26.823230  397087 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:11:26.823448  397087 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:11:26.859039  397087 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 18:11:26.860288  397087 start.go:297] selected driver: kvm2
	I0819 18:11:26.860313  397087 start.go:901] validating driver "kvm2" against &{Name:ha-086149 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-086149 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.167 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.121 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.173 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:11:26.860510  397087 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 18:11:26.860860  397087 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:11:26.860955  397087 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19468-372744/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 18:11:26.876215  397087 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 18:11:26.876931  397087 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 18:11:26.876975  397087 cni.go:84] Creating CNI manager for ""
	I0819 18:11:26.876984  397087 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0819 18:11:26.877047  397087 start.go:340] cluster config:
	{Name:ha-086149 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-086149 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.167 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.121 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.173 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:11:26.877195  397087 iso.go:125] acquiring lock: {Name:mk4c0ac1c3202b1a296739df622960e7a0bd8566 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:11:26.880367  397087 out.go:177] * Starting "ha-086149" primary control-plane node in "ha-086149" cluster
	I0819 18:11:26.881677  397087 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 18:11:26.881723  397087 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 18:11:26.881734  397087 cache.go:56] Caching tarball of preloaded images
	I0819 18:11:26.881818  397087 preload.go:172] Found /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 18:11:26.881830  397087 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 18:11:26.881964  397087 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/config.json ...
	I0819 18:11:26.882188  397087 start.go:360] acquireMachinesLock for ha-086149: {Name:mk24ba67a747357e9ce40f1e460d2bb0bc59cc75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 18:11:26.882247  397087 start.go:364] duration metric: took 37.695µs to acquireMachinesLock for "ha-086149"
	I0819 18:11:26.882268  397087 start.go:96] Skipping create...Using existing machine configuration
	I0819 18:11:26.882285  397087 fix.go:54] fixHost starting: 
	I0819 18:11:26.882566  397087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:11:26.882619  397087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:11:26.897044  397087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46039
	I0819 18:11:26.897553  397087 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:11:26.898124  397087 main.go:141] libmachine: Using API Version  1
	I0819 18:11:26.898162  397087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:11:26.898472  397087 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:11:26.898657  397087 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:11:26.898848  397087 main.go:141] libmachine: (ha-086149) Calling .GetState
	I0819 18:11:26.900433  397087 fix.go:112] recreateIfNeeded on ha-086149: state=Running err=<nil>
	W0819 18:11:26.900453  397087 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 18:11:26.902377  397087 out.go:177] * Updating the running kvm2 "ha-086149" VM ...
	I0819 18:11:26.903765  397087 machine.go:93] provisionDockerMachine start ...
	I0819 18:11:26.903790  397087 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:11:26.904074  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:11:26.906649  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:11:26.907111  397087 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:11:26.907140  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:11:26.907303  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:11:26.907480  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:11:26.907634  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:11:26.907769  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:11:26.907932  397087 main.go:141] libmachine: Using SSH client type: native
	I0819 18:11:26.908147  397087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0819 18:11:26.908159  397087 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 18:11:27.013130  397087 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-086149
	
	I0819 18:11:27.013172  397087 main.go:141] libmachine: (ha-086149) Calling .GetMachineName
	I0819 18:11:27.013473  397087 buildroot.go:166] provisioning hostname "ha-086149"
	I0819 18:11:27.013504  397087 main.go:141] libmachine: (ha-086149) Calling .GetMachineName
	I0819 18:11:27.013719  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:11:27.016426  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:11:27.016835  397087 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:11:27.016863  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:11:27.017018  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:11:27.017231  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:11:27.017381  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:11:27.017515  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:11:27.017672  397087 main.go:141] libmachine: Using SSH client type: native
	I0819 18:11:27.017906  397087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0819 18:11:27.017923  397087 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-086149 && echo "ha-086149" | sudo tee /etc/hostname
	I0819 18:11:27.141797  397087 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-086149
	
	I0819 18:11:27.141826  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:11:27.144440  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:11:27.144771  397087 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:11:27.144804  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:11:27.145009  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:11:27.145202  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:11:27.145361  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:11:27.145536  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:11:27.145701  397087 main.go:141] libmachine: Using SSH client type: native
	I0819 18:11:27.145879  397087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0819 18:11:27.145895  397087 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-086149' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-086149/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-086149' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 18:11:27.257185  397087 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 18:11:27.257235  397087 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19468-372744/.minikube CaCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19468-372744/.minikube}
	I0819 18:11:27.257274  397087 buildroot.go:174] setting up certificates
	I0819 18:11:27.257283  397087 provision.go:84] configureAuth start
	I0819 18:11:27.257296  397087 main.go:141] libmachine: (ha-086149) Calling .GetMachineName
	I0819 18:11:27.257578  397087 main.go:141] libmachine: (ha-086149) Calling .GetIP
	I0819 18:11:27.260335  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:11:27.260693  397087 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:11:27.260718  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:11:27.260865  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:11:27.263806  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:11:27.264249  397087 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:11:27.264279  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:11:27.264389  397087 provision.go:143] copyHostCerts
	I0819 18:11:27.264425  397087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 18:11:27.264504  397087 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem, removing ...
	I0819 18:11:27.264524  397087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 18:11:27.264609  397087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem (1082 bytes)
	I0819 18:11:27.264740  397087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 18:11:27.264771  397087 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem, removing ...
	I0819 18:11:27.264778  397087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 18:11:27.264827  397087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem (1123 bytes)
	I0819 18:11:27.264907  397087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 18:11:27.264925  397087 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem, removing ...
	I0819 18:11:27.264932  397087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 18:11:27.264957  397087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem (1675 bytes)
	I0819 18:11:27.265023  397087 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem org=jenkins.ha-086149 san=[127.0.0.1 192.168.39.249 ha-086149 localhost minikube]
	I0819 18:11:27.390873  397087 provision.go:177] copyRemoteCerts
	I0819 18:11:27.390944  397087 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 18:11:27.390970  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:11:27.393739  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:11:27.394178  397087 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:11:27.394216  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:11:27.394341  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:11:27.394539  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:11:27.394735  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:11:27.394832  397087 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/id_rsa Username:docker}
	I0819 18:11:27.478663  397087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 18:11:27.478751  397087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 18:11:27.505660  397087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 18:11:27.505762  397087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0819 18:11:27.533045  397087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 18:11:27.533128  397087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 18:11:27.564294  397087 provision.go:87] duration metric: took 306.994273ms to configureAuth
	I0819 18:11:27.564324  397087 buildroot.go:189] setting minikube options for container-runtime
	I0819 18:11:27.564601  397087 config.go:182] Loaded profile config "ha-086149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:11:27.564711  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:11:27.567394  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:11:27.567789  397087 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:11:27.567818  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:11:27.567989  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:11:27.568204  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:11:27.568381  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:11:27.568533  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:11:27.568694  397087 main.go:141] libmachine: Using SSH client type: native
	I0819 18:11:27.568911  397087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0819 18:11:27.568938  397087 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 18:12:58.395407  397087 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 18:12:58.395460  397087 machine.go:96] duration metric: took 1m31.491678222s to provisionDockerMachine
	I0819 18:12:58.395481  397087 start.go:293] postStartSetup for "ha-086149" (driver="kvm2")
	I0819 18:12:58.395496  397087 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 18:12:58.395525  397087 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:12:58.395908  397087 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 18:12:58.395946  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:12:58.399108  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:12:58.399509  397087 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:12:58.399538  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:12:58.399716  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:12:58.399897  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:12:58.400179  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:12:58.400335  397087 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/id_rsa Username:docker}
	I0819 18:12:58.483619  397087 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 18:12:58.487929  397087 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 18:12:58.487955  397087 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/addons for local assets ...
	I0819 18:12:58.488013  397087 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/files for local assets ...
	I0819 18:12:58.488091  397087 filesync.go:149] local asset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> 3800092.pem in /etc/ssl/certs
	I0819 18:12:58.488107  397087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> /etc/ssl/certs/3800092.pem
	I0819 18:12:58.488191  397087 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 18:12:58.497553  397087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 18:12:58.522327  397087 start.go:296] duration metric: took 126.830228ms for postStartSetup
	I0819 18:12:58.522380  397087 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:12:58.522680  397087 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0819 18:12:58.522722  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:12:58.525614  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:12:58.525952  397087 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:12:58.525977  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:12:58.526186  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:12:58.526376  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:12:58.526538  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:12:58.526687  397087 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/id_rsa Username:docker}
	W0819 18:12:58.606786  397087 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0819 18:12:58.606817  397087 fix.go:56] duration metric: took 1m31.724542331s for fixHost
	I0819 18:12:58.606841  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:12:58.609477  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:12:58.609879  397087 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:12:58.609905  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:12:58.610052  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:12:58.610266  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:12:58.610412  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:12:58.610556  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:12:58.610697  397087 main.go:141] libmachine: Using SSH client type: native
	I0819 18:12:58.610881  397087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0819 18:12:58.610892  397087 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 18:12:58.712750  397087 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724091178.663822557
	
	I0819 18:12:58.712775  397087 fix.go:216] guest clock: 1724091178.663822557
	I0819 18:12:58.712782  397087 fix.go:229] Guest: 2024-08-19 18:12:58.663822557 +0000 UTC Remote: 2024-08-19 18:12:58.606825553 +0000 UTC m=+91.854126584 (delta=56.997004ms)
	I0819 18:12:58.712802  397087 fix.go:200] guest clock delta is within tolerance: 56.997004ms
	I0819 18:12:58.712807  397087 start.go:83] releasing machines lock for "ha-086149", held for 1m31.830548944s
	I0819 18:12:58.712825  397087 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:12:58.713130  397087 main.go:141] libmachine: (ha-086149) Calling .GetIP
	I0819 18:12:58.715596  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:12:58.715988  397087 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:12:58.716015  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:12:58.716186  397087 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:12:58.716784  397087 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:12:58.716968  397087 main.go:141] libmachine: (ha-086149) Calling .DriverName
	I0819 18:12:58.717063  397087 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 18:12:58.717134  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:12:58.717199  397087 ssh_runner.go:195] Run: cat /version.json
	I0819 18:12:58.717219  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHHostname
	I0819 18:12:58.719832  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:12:58.720111  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:12:58.720146  397087 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:12:58.720164  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:12:58.720278  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:12:58.720538  397087 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:12:58.720540  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:12:58.720563  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:12:58.720715  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:12:58.720739  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHPort
	I0819 18:12:58.720883  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHKeyPath
	I0819 18:12:58.720908  397087 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/id_rsa Username:docker}
	I0819 18:12:58.721013  397087 main.go:141] libmachine: (ha-086149) Calling .GetSSHUsername
	I0819 18:12:58.721220  397087 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/ha-086149/id_rsa Username:docker}
	I0819 18:12:58.827300  397087 ssh_runner.go:195] Run: systemctl --version
	I0819 18:12:58.833614  397087 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 18:12:59.001910  397087 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 18:12:59.011040  397087 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 18:12:59.011116  397087 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 18:12:59.020702  397087 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0819 18:12:59.020736  397087 start.go:495] detecting cgroup driver to use...
	I0819 18:12:59.020803  397087 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 18:12:59.036530  397087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 18:12:59.050394  397087 docker.go:217] disabling cri-docker service (if available) ...
	I0819 18:12:59.050475  397087 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 18:12:59.063866  397087 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 18:12:59.076972  397087 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 18:12:59.230890  397087 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 18:12:59.380358  397087 docker.go:233] disabling docker service ...
	I0819 18:12:59.380448  397087 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 18:12:59.396879  397087 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 18:12:59.411168  397087 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 18:12:59.560874  397087 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 18:12:59.707454  397087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 18:12:59.721622  397087 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 18:12:59.740982  397087 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 18:12:59.741039  397087 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:12:59.751763  397087 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 18:12:59.751862  397087 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:12:59.762338  397087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:12:59.772603  397087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:12:59.782855  397087 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 18:12:59.793221  397087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:12:59.803640  397087 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:12:59.815181  397087 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:12:59.825280  397087 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 18:12:59.834950  397087 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 18:12:59.844552  397087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:12:59.986845  397087 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 18:13:05.808418  397087 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.821526153s)
	I0819 18:13:05.808456  397087 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 18:13:05.808515  397087 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 18:13:05.813721  397087 start.go:563] Will wait 60s for crictl version
	I0819 18:13:05.813792  397087 ssh_runner.go:195] Run: which crictl
	I0819 18:13:05.818030  397087 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 18:13:05.855021  397087 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 18:13:05.855114  397087 ssh_runner.go:195] Run: crio --version
	I0819 18:13:05.883731  397087 ssh_runner.go:195] Run: crio --version
	I0819 18:13:05.915398  397087 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 18:13:05.916896  397087 main.go:141] libmachine: (ha-086149) Calling .GetIP
	I0819 18:13:05.919751  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:13:05.920125  397087 main.go:141] libmachine: (ha-086149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:ab:95", ip: ""} in network mk-ha-086149: {Iface:virbr1 ExpiryTime:2024-08-19 19:01:28 +0000 UTC Type:0 Mac:52:54:00:3b:ab:95 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-086149 Clientid:01:52:54:00:3b:ab:95}
	I0819 18:13:05.920154  397087 main.go:141] libmachine: (ha-086149) DBG | domain ha-086149 has defined IP address 192.168.39.249 and MAC address 52:54:00:3b:ab:95 in network mk-ha-086149
	I0819 18:13:05.920388  397087 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 18:13:05.925474  397087 kubeadm.go:883] updating cluster {Name:ha-086149 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-086149 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.167 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.121 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.173 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 18:13:05.925636  397087 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 18:13:05.925689  397087 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 18:13:05.971864  397087 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 18:13:05.971892  397087 crio.go:433] Images already preloaded, skipping extraction
	I0819 18:13:05.971984  397087 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 18:13:06.018045  397087 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 18:13:06.018077  397087 cache_images.go:84] Images are preloaded, skipping loading
	I0819 18:13:06.018093  397087 kubeadm.go:934] updating node { 192.168.39.249 8443 v1.31.0 crio true true} ...
	I0819 18:13:06.018218  397087 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-086149 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.249
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-086149 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
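The drop-in above follows the usual systemd override pattern: the first, empty ExecStart= clears whatever command the base kubelet.service defines, and the second ExecStart= supplies the full kubelet invocation with the node-specific --hostname-override and --node-ip flags. A minimal Go sketch of how such a drop-in could be rendered from those two parameters follows; the template text and field names are illustrative, not minikube's actual template.

package main

import (
	"os"
	"text/template"
)

// kubeletDropIn is an illustrative template only; minikube's real template
// lives in its bootstrapper code and may differ in detail.
const kubeletDropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.Bin}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletDropIn))
	// The empty "ExecStart=" line resets any ExecStart inherited from the base
	// kubelet.service so the drop-in fully owns the command line.
	_ = t.Execute(os.Stdout, map[string]string{
		"Bin":      "/var/lib/minikube/binaries/v1.31.0/kubelet",
		"Hostname": "ha-086149",
		"NodeIP":   "192.168.39.249",
	})
}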
	I0819 18:13:06.018305  397087 ssh_runner.go:195] Run: crio config
	I0819 18:13:06.069464  397087 cni.go:84] Creating CNI manager for ""
	I0819 18:13:06.069488  397087 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0819 18:13:06.069502  397087 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 18:13:06.069524  397087 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.249 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-086149 NodeName:ha-086149 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.249"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.249 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 18:13:06.069658  397087 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.249
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-086149"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.249
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.249"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
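The generated kubeadm config is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) that is copied to /var/tmp/minikube/kubeadm.yaml.new further down in the log. One quick way to sanity-check such a stream offline is to decode each document in turn; the sketch below assumes a local copy of the file and the gopkg.in/yaml.v3 package, and is not part of minikube itself.

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3" // assumed dependency; any multi-document YAML decoder works
)

func main() {
	f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the generated config
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break // end of the "---"-separated stream
		} else if err != nil {
			panic(err) // malformed document
		}
		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
	}
}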
	
	I0819 18:13:06.069681  397087 kube-vip.go:115] generating kube-vip config ...
	I0819 18:13:06.069733  397087 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 18:13:06.081760  397087 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 18:13:06.081881  397087 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
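As configured above, kube-vip runs as a static pod on each control-plane node and elects a leader through the kube-system Lease named plndr-cp-lock (vip_leasename); the leader announces the VIP 192.168.39.254 via ARP on eth0 and, with lb_enable, load-balances API traffic on port 8443. To see which node currently holds the VIP, one could read that Lease with client-go; the sketch below is illustrative only and assumes a reachable kubeconfig at a hypothetical path.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is illustrative; use whatever reaches the ha-086149 cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// plndr-cp-lock is the lease name from the kube-vip config above.
	lease, err := cs.CoordinationV1().Leases("kube-system").Get(
		context.Background(), "plndr-cp-lock", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if lease.Spec.HolderIdentity != nil {
		fmt.Println("VIP currently held by:", *lease.Spec.HolderIdentity)
	}
}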
	I0819 18:13:06.081937  397087 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 18:13:06.092095  397087 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 18:13:06.092161  397087 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0819 18:13:06.101668  397087 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0819 18:13:06.119159  397087 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 18:13:06.135928  397087 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0819 18:13:06.152409  397087 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0819 18:13:06.168938  397087 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0819 18:13:06.173820  397087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:13:06.325226  397087 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 18:13:06.339965  397087 certs.go:68] Setting up /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149 for IP: 192.168.39.249
	I0819 18:13:06.339996  397087 certs.go:194] generating shared ca certs ...
	I0819 18:13:06.340020  397087 certs.go:226] acquiring lock for ca certs: {Name:mk639e03f593e0bccac045f6e9f5ba3b96cc81e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:13:06.340217  397087 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key
	I0819 18:13:06.340299  397087 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key
	I0819 18:13:06.340318  397087 certs.go:256] generating profile certs ...
	I0819 18:13:06.340424  397087 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/client.key
	I0819 18:13:06.340461  397087 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key.0421d9d8
	I0819 18:13:06.340482  397087 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt.0421d9d8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.249 192.168.39.167 192.168.39.121 192.168.39.254]
	I0819 18:13:06.530153  397087 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt.0421d9d8 ...
	I0819 18:13:06.530189  397087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt.0421d9d8: {Name:mk99868fe2b76b367216e96c32af4ec27110846d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:13:06.530368  397087 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key.0421d9d8 ...
	I0819 18:13:06.530382  397087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key.0421d9d8: {Name:mk2c0f96ce4c77a08f0c0939f37c4fbbed2e333d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:13:06.530454  397087 certs.go:381] copying /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt.0421d9d8 -> /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt
	I0819 18:13:06.530641  397087 certs.go:385] copying /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key.0421d9d8 -> /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key
	I0819 18:13:06.530778  397087 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/proxy-client.key
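The apiserver profile cert generated above must cover the service IP 10.96.0.1, localhost, the three control-plane node IPs and the HA VIP 192.168.39.254 as IP SANs, and is signed by minikubeCA. The sketch below shows the general crypto/x509 pattern for issuing a certificate with those SANs; it uses a throwaway CA rather than the real minikubeCA key, so it illustrates the mechanism only, not minikube's implementation.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	// Throwaway CA standing in for minikubeCA (illustrative only).
	caKey := must(rsa.GenerateKey(rand.Reader, 2048))
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
	caCert := must(x509.ParseCertificate(caDER))

	// Leaf certificate with the same IP SANs the log reports for the apiserver cert.
	leafKey := must(rsa.GenerateKey(rand.Reader, 2048))
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.249"), net.ParseIP("192.168.39.167"),
			net.ParseIP("192.168.39.121"), net.ParseIP("192.168.39.254"),
		},
	}
	leafDER := must(x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey))
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}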
	I0819 18:13:06.530809  397087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 18:13:06.530826  397087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 18:13:06.530841  397087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 18:13:06.530853  397087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 18:13:06.530866  397087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 18:13:06.530883  397087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 18:13:06.530901  397087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 18:13:06.530911  397087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 18:13:06.530956  397087 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem (1338 bytes)
	W0819 18:13:06.530986  397087 certs.go:480] ignoring /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009_empty.pem, impossibly tiny 0 bytes
	I0819 18:13:06.530995  397087 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 18:13:06.531019  397087 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem (1082 bytes)
	I0819 18:13:06.531041  397087 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem (1123 bytes)
	I0819 18:13:06.531062  397087 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem (1675 bytes)
	I0819 18:13:06.531098  397087 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 18:13:06.531124  397087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> /usr/share/ca-certificates/3800092.pem
	I0819 18:13:06.531139  397087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:13:06.531152  397087 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem -> /usr/share/ca-certificates/380009.pem
	I0819 18:13:06.531741  397087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 18:13:06.558053  397087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 18:13:06.583054  397087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 18:13:06.607651  397087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 18:13:06.634144  397087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0819 18:13:06.658012  397087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 18:13:06.682158  397087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 18:13:06.706631  397087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/ha-086149/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 18:13:06.730952  397087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /usr/share/ca-certificates/3800092.pem (1708 bytes)
	I0819 18:13:06.755061  397087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 18:13:06.779260  397087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem --> /usr/share/ca-certificates/380009.pem (1338 bytes)
	I0819 18:13:06.805310  397087 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 18:13:06.832120  397087 ssh_runner.go:195] Run: openssl version
	I0819 18:13:06.845693  397087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/380009.pem && ln -fs /usr/share/ca-certificates/380009.pem /etc/ssl/certs/380009.pem"
	I0819 18:13:06.864029  397087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/380009.pem
	I0819 18:13:06.876056  397087 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:56 /usr/share/ca-certificates/380009.pem
	I0819 18:13:06.876116  397087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/380009.pem
	I0819 18:13:06.883958  397087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/380009.pem /etc/ssl/certs/51391683.0"
	I0819 18:13:06.912663  397087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3800092.pem && ln -fs /usr/share/ca-certificates/3800092.pem /etc/ssl/certs/3800092.pem"
	I0819 18:13:06.938548  397087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3800092.pem
	I0819 18:13:06.945514  397087 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:56 /usr/share/ca-certificates/3800092.pem
	I0819 18:13:06.945576  397087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3800092.pem
	I0819 18:13:06.953177  397087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3800092.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 18:13:06.972664  397087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 18:13:06.986664  397087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:13:06.991565  397087 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:13:06.991624  397087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:13:06.997458  397087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
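For each CA bundle copied above, the bootstrapper computes the OpenSSL subject hash (openssl x509 -hash -noout) and symlinks /etc/ssl/certs/<hash>.0 to the PEM file, which is where the b5213941.0 link for minikubeCA.pem comes from. A rough Go equivalent of that hash-and-link step, shelling out to openssl the same way the log does, might look like the sketch below; the path is the one from the log and would differ per certificate.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log, illustrative

	// openssl prints the subject-name hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))

	// Equivalent of "ln -fs <pem> /etc/ssl/certs/<hash>.0": drop any stale link, then relink.
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link)
	if err := os.Symlink(pemPath, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", pemPath)
}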
	I0819 18:13:07.008703  397087 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 18:13:07.018016  397087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 18:13:07.024635  397087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 18:13:07.030997  397087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 18:13:07.037077  397087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 18:13:07.043463  397087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 18:13:07.049164  397087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
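The -checkend 86400 flag makes openssl exit non-zero if the certificate expires within the next 86400 seconds (24 hours), which is how the existing control-plane certs are verified as still usable here. A pure-Go check with the same meaning, using crypto/x509 instead of shelling out, could look like the sketch below; the path is one of the files checked above.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file expires
// within d, mirroring `openssl x509 -checkend` on that file.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}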
	I0819 18:13:07.055094  397087 kubeadm.go:392] StartCluster: {Name:ha-086149 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-086149 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.167 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.121 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.173 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:13:07.055231  397087 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 18:13:07.055287  397087 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 18:13:07.115004  397087 cri.go:89] found id: "ebc9aecd46ff7854b20e6b5fd38c6125d892096e8032a7c50445c7130f92158f"
	I0819 18:13:07.115030  397087 cri.go:89] found id: "6dd571ecf979fd6b33d2d3a930406edcad4fc4673aef14b144d3919400614448"
	I0819 18:13:07.115034  397087 cri.go:89] found id: "ccb73fc4640a2d71e367fe2751278531cdb9da26a96f1e3f5450f2dd052cef48"
	I0819 18:13:07.115036  397087 cri.go:89] found id: "d4208b72f7684106eeabb79597e9a16912d86fddf552d810668e52ee86e4cacf"
	I0819 18:13:07.115039  397087 cri.go:89] found id: "86aec3b9357709107938f07e57e09bef332ea9baea288a18bb10389d5108084b"
	I0819 18:13:07.115042  397087 cri.go:89] found id: "de3b095c19e3f3ff1bf0fb76700cc09513b591cda7c219c31dee7842602944b4"
	I0819 18:13:07.115045  397087 cri.go:89] found id: "66fd9c9b32e5e0294c89ebc2ee3c443fda85c40c3ad5b05d42357b4968e8d305"
	I0819 18:13:07.115047  397087 cri.go:89] found id: "eb8cccc1568bbb207d2c7c285f3897a7a425cba60f4dfcf3e8daa8082fc38ef0"
	I0819 18:13:07.115050  397087 cri.go:89] found id: "0cbf110391a2708b365a6d117cd1facf1a5820add049c9338b5eaa12f02254e4"
	I0819 18:13:07.115056  397087 cri.go:89] found id: "f5e746178ed6a3645979a5bd617a6d9f408bb3e6af232f31409c7e79a0c4f6b2"
	I0819 18:13:07.115060  397087 cri.go:89] found id: "426a12b48132d73e1b93e6a7fb5b3420868e384eb280274c6ee81ae6f6bcea12"
	I0819 18:13:07.115062  397087 cri.go:89] found id: "2f729929f59edc9bd3c0ec7e99f4b984f94d6b6ec06edf83cf6dc3efba7a1fe5"
	I0819 18:13:07.115065  397087 cri.go:89] found id: "d0e66231bf791048a9932068b5f28d8479613545885bea8e42cf9c79913ffccd"
	I0819 18:13:07.115067  397087 cri.go:89] found id: ""
	I0819 18:13:07.115111  397087 ssh_runner.go:195] Run: sudo runc list -f json
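Immediately after StartCluster, the bootstrapper lists every kube-system container known to CRI-O via crictl with a namespace label filter; the container IDs printed above are that output. The sketch below reproduces the listing step by invoking the same crictl command; it is illustrative and assumes it runs on the node with crictl on PATH and the CRI-O socket reachable.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same invocation as the log above: quiet mode prints one container ID per line.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(out))
	fmt.Printf("found %d kube-system containers\n", len(ids))
	for _, id := range ids {
		fmt.Println(id)
	}
}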
	
	
	==> CRI-O <==
	Aug 19 18:18:54 ha-086149 crio[3620]: time="2024-08-19 18:18:54.954963476Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091534954936797,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2926e183-98b7-468f-ba13-a3f0550b075e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:18:54 ha-086149 crio[3620]: time="2024-08-19 18:18:54.955569673Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f14ad9fa-e7a5-4d72-9064-f6efc44c52a6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:18:54 ha-086149 crio[3620]: time="2024-08-19 18:18:54.955650559Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f14ad9fa-e7a5-4d72-9064-f6efc44c52a6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:18:54 ha-086149 crio[3620]: time="2024-08-19 18:18:54.956549848Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7fc48458ae307ff361499fde833d54b89f7ed1cc124b8e2e4c5e623d5b59f5cf,PodSandboxId:4a10374978122541ed15e7f43ce8d30cc1d0cd85f051271ab88948bcb2c57a79,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724091270295934817,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c12159a8-5f84-4d19-aa54-7b56a9669f6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b110ed1c7e4de28f673ba115eee8636180545973d22374de2fefcc11c697539,PodSandboxId:834b78e6f8c8ae9b6949554d2864db66ec486375169d8a29f441745a6c13a6c7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724091234291807173,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab6b0fe91f166a5c05b58933ead885f6,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eb0bee9a15dccc2d82fb1b3ac35c0edda4dfaf7f15f58e06a340bf55e8f26ab,PodSandboxId:54b9254dbd54ee93a0df9ad92074a813df205bbacf4bf2950d47a6955ebf62e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724091232293434047,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9269a2cf31966e0bbf30b6554fa311ee,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01d3428fa47f18423ed50d84a758bf632905445652b3088c079a7522697a5d53,PodSandboxId:4a10374978122541ed15e7f43ce8d30cc1d0cd85f051271ab88948bcb2c57a79,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724091227289314593,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c12159a8-5f84-4d19-aa54-7b56a9669f6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31b792a184ef5bf6c6881a58559ac54b18794b0d2bdb0f213f9015a19c994ff0,PodSandboxId:5fdc0c51659ed4b89dc97e11cbfa487bd8403121cd95beb25e6735b3c83aa363,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724091226319223025,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fd2dw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f5e2f831-487f-4edb-b6c1-b391906a6d5b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9e6f6fd570fa5f5a880753efc7f1228506acc92db408df6bf6ed9b5f34cfe93,PodSandboxId:c78d69ffa13ba1619670b2ce62d5e954ea916933dc86eecc61164297731d3363,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724091207063154781,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71315ae10c82422e3efaca00d9b232cb,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2deb18dfc60e51eddc31befa21ffc0090ce2abc67b4511b62104ca5e8342f60,PodSandboxId:c7bab8d9969d8f2b85807cc8e16e713161cd1c353dfcfd272f167836b340da0c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724091193216378555,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vb66s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9322737a-5f8a-4d5a-a7d1-ba076bc8f2d8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:7421b967684844bf1fe8f4abc52f1cd8635544a588cbdb2b910b55bf74594619,PodSandboxId:d4370b1d6fcb3ede7b9a41e432046068b76ec99429ad0424f03c801cdfedc7c1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724091193106196622,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwkf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 001a3fe7-633c-44f8-9a8c-7401cec7af54,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62c3f84d9
e207c67a86a14e215f591225047843d0d3d8ff01470104c28ec3372,PodSandboxId:47c6aecb02b827d90fa98d560860dcb29069184c64ee4755d4c1f590c6ad5989,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091193019658925,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-p65cb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f30449e-d4ea-4d6f-a63a-08551024bd04,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a1b7fec3f151c3ebd32ce721f81861e00daf06da360b6bad7a4c99a4b3c71d5,PodSandboxId:a8bc21a4e7d10603f5f44f0819a6baf14dda6ea43bd4a34b0756f711804ae455,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724091192823979246,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf0b1666b512c678d4309e6a2bd2773,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea2f2cfbcacac8b9d0f716fc5bf8be816dac486447f26b5969f1d79a9031f7ca,PodSandboxId:834b78e6f8c8ae9b6949554d2864db66ec486375169d8a29f441745a6c13a6c7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724091192904907760,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab6b0fe91f166a5c05b58933ead885f6,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4760d8a0d8843fa04600f76c7a9e2b2ba5c4212e748492168d8c00d31ea0d515,PodSandboxId:15a37a0b36621b359e14b4e497dcf9a8bee8c5d328dee2de0d16ab4c727f8823,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724091192778643283,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 465e756b61a05a6f1c4dfeba2adbdeeb,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dbebbcf5b28297a583e961cdeb22de8d630ca8836e6f0ffcca3c4fe28b9a104,PodSandboxId:54b9254dbd54ee93a0df9ad92074a813df205bbacf4bf2950d47a6955ebf62e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724091192765986526,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9269a2cf31966e0bbf30b6554fa311ee,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1560513c5f2f2c855e651ca853a37a183c2c92361d4db5001d39a783f9bf1dec,PodSandboxId:2a5484a378f88550bea42ad9cd40a477fefe20e700d36531a230194dd27918f7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091186996673333,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8fjpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bedb900-107a-4f7e-aae7-391b18da4a26,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef0b28473496e4ab21e3f86bc64eb662e5c22e59e4a56f80f7bdad009460c73d,PodSandboxId:0f784aeccda9e0bff51a30b97a310813be1e271fdaae54f30006645ed5ae31b1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724090682352298631,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fd2dw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f5e2f831-487f-4edb-b6c1-b391906a6d5b,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4208b72f7684106eeabb79597e9a16912d86fddf552d810668e52ee86e4cacf,PodSandboxId:5b83e59b0dd3110115fa51715b6d8f6d29e006636ab031766095bcb6200ff245,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724090536333833063,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-p65cb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f30449e-d4ea-4d6f-a63a-08551024bd04,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86aec3b9357709107938f07e57e09bef332ea9baea288a18bb10389d5108084b,PodSandboxId:86507aaa25957ebc7ff023a8f042b236a729503785cd3163a2a44e79daf28a80,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724090536330243526,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-8fjpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bedb900-107a-4f7e-aae7-391b18da4a26,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66fd9c9b32e5e0294c89ebc2ee3c443fda85c40c3ad5b05d42357b4968e8d305,PodSandboxId:3c6e833618ab7965e295c1f82164c28a64e619a82a0a8a90542c16f004e32954,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724090524118032785,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vb66s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9322737a-5f8a-4d5a-a7d1-ba076bc8f2d8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb8cccc1568bbb207d2c7c285f3897a7a425cba60f4dfcf3e8daa8082fc38ef0,PodSandboxId:dc27fd8c8c4a6cec062f5420b6ed3489f5b075fb1eb4e02074e5505c76d238e5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724090520283730684,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwkf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 001a3fe7-633c-44f8-9a8c-7401cec7af54,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426a12b48132d73e1b93e6a7fb5b3420868e384eb280274c6ee81ae6f6bcea12,PodSandboxId:4cd25796bc67e8c9b4a666188feb3addfa806bf372a40c47a0ed8a3e3576c9a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724090509151262837,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf0b1666b512c678d4309e6a2bd2773,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0e66231bf791048a9932068b5f28d8479613545885bea8e42cf9c79913ffccd,PodSandboxId:1f46f8e2ba79c3a9b9a7f9729c154fc9c495e280d0a9fac6dc4fdf837a2e0b73,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1724090509024852872,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 465e756b61a05a6f1c4dfeba2adbdeeb,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f14ad9fa-e7a5-4d72-9064-f6efc44c52a6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:18:55 ha-086149 crio[3620]: time="2024-08-19 18:18:55.006370369Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4458a041-711d-48a5-9182-59c5d71cb8bf name=/runtime.v1.RuntimeService/Version
	Aug 19 18:18:55 ha-086149 crio[3620]: time="2024-08-19 18:18:55.006464517Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4458a041-711d-48a5-9182-59c5d71cb8bf name=/runtime.v1.RuntimeService/Version
	Aug 19 18:18:55 ha-086149 crio[3620]: time="2024-08-19 18:18:55.007677368Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=95326b13-5c01-4b54-abfe-4d34835b299f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:18:55 ha-086149 crio[3620]: time="2024-08-19 18:18:55.008283802Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091535008256429,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=95326b13-5c01-4b54-abfe-4d34835b299f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:18:55 ha-086149 crio[3620]: time="2024-08-19 18:18:55.008965092Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a80fb9b7-5e10-4511-adb4-5535d4d21cb5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:18:55 ha-086149 crio[3620]: time="2024-08-19 18:18:55.009023803Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a80fb9b7-5e10-4511-adb4-5535d4d21cb5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:18:55 ha-086149 crio[3620]: time="2024-08-19 18:18:55.009488716Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7fc48458ae307ff361499fde833d54b89f7ed1cc124b8e2e4c5e623d5b59f5cf,PodSandboxId:4a10374978122541ed15e7f43ce8d30cc1d0cd85f051271ab88948bcb2c57a79,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724091270295934817,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c12159a8-5f84-4d19-aa54-7b56a9669f6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b110ed1c7e4de28f673ba115eee8636180545973d22374de2fefcc11c697539,PodSandboxId:834b78e6f8c8ae9b6949554d2864db66ec486375169d8a29f441745a6c13a6c7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724091234291807173,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab6b0fe91f166a5c05b58933ead885f6,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eb0bee9a15dccc2d82fb1b3ac35c0edda4dfaf7f15f58e06a340bf55e8f26ab,PodSandboxId:54b9254dbd54ee93a0df9ad92074a813df205bbacf4bf2950d47a6955ebf62e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724091232293434047,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9269a2cf31966e0bbf30b6554fa311ee,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01d3428fa47f18423ed50d84a758bf632905445652b3088c079a7522697a5d53,PodSandboxId:4a10374978122541ed15e7f43ce8d30cc1d0cd85f051271ab88948bcb2c57a79,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724091227289314593,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c12159a8-5f84-4d19-aa54-7b56a9669f6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31b792a184ef5bf6c6881a58559ac54b18794b0d2bdb0f213f9015a19c994ff0,PodSandboxId:5fdc0c51659ed4b89dc97e11cbfa487bd8403121cd95beb25e6735b3c83aa363,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724091226319223025,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fd2dw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f5e2f831-487f-4edb-b6c1-b391906a6d5b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9e6f6fd570fa5f5a880753efc7f1228506acc92db408df6bf6ed9b5f34cfe93,PodSandboxId:c78d69ffa13ba1619670b2ce62d5e954ea916933dc86eecc61164297731d3363,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724091207063154781,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71315ae10c82422e3efaca00d9b232cb,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2deb18dfc60e51eddc31befa21ffc0090ce2abc67b4511b62104ca5e8342f60,PodSandboxId:c7bab8d9969d8f2b85807cc8e16e713161cd1c353dfcfd272f167836b340da0c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724091193216378555,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vb66s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9322737a-5f8a-4d5a-a7d1-ba076bc8f2d8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:7421b967684844bf1fe8f4abc52f1cd8635544a588cbdb2b910b55bf74594619,PodSandboxId:d4370b1d6fcb3ede7b9a41e432046068b76ec99429ad0424f03c801cdfedc7c1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724091193106196622,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwkf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 001a3fe7-633c-44f8-9a8c-7401cec7af54,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62c3f84d9
e207c67a86a14e215f591225047843d0d3d8ff01470104c28ec3372,PodSandboxId:47c6aecb02b827d90fa98d560860dcb29069184c64ee4755d4c1f590c6ad5989,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091193019658925,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-p65cb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f30449e-d4ea-4d6f-a63a-08551024bd04,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a1b7fec3f151c3ebd32ce721f81861e00daf06da360b6bad7a4c99a4b3c71d5,PodSandboxId:a8bc21a4e7d10603f5f44f0819a6baf14dda6ea43bd4a34b0756f711804ae455,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724091192823979246,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf0b1666b512c678d4309e6a2bd2773,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea2f2cfbcacac8b9d0f716fc5bf8be816dac486447f26b5969f1d79a9031f7ca,PodSandboxId:834b78e6f8c8ae9b6949554d2864db66ec486375169d8a29f441745a6c13a6c7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724091192904907760,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab6b0fe91f166a5c05b58933ead885f6,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4760d8a0d8843fa04600f76c7a9e2b2ba5c4212e748492168d8c00d31ea0d515,PodSandboxId:15a37a0b36621b359e14b4e497dcf9a8bee8c5d328dee2de0d16ab4c727f8823,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724091192778643283,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 465e756b61a05a6f1c4dfeba2adbdeeb,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dbebbcf5b28297a583e961cdeb22de8d630ca8836e6f0ffcca3c4fe28b9a104,PodSandboxId:54b9254dbd54ee93a0df9ad92074a813df205bbacf4bf2950d47a6955ebf62e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724091192765986526,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9269a2cf31966e0bbf30b6554fa311ee,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1560513c5f2f2c855e651ca853a37a183c2c92361d4db5001d39a783f9bf1dec,PodSandboxId:2a5484a378f88550bea42ad9cd40a477fefe20e700d36531a230194dd27918f7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091186996673333,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8fjpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bedb900-107a-4f7e-aae7-391b18da4a26,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef0b28473496e4ab21e3f86bc64eb662e5c22e59e4a56f80f7bdad009460c73d,PodSandboxId:0f784aeccda9e0bff51a30b97a310813be1e271fdaae54f30006645ed5ae31b1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724090682352298631,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fd2dw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f5e2f831-487f-4edb-b6c1-b391906a6d5b,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4208b72f7684106eeabb79597e9a16912d86fddf552d810668e52ee86e4cacf,PodSandboxId:5b83e59b0dd3110115fa51715b6d8f6d29e006636ab031766095bcb6200ff245,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724090536333833063,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-p65cb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f30449e-d4ea-4d6f-a63a-08551024bd04,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86aec3b9357709107938f07e57e09bef332ea9baea288a18bb10389d5108084b,PodSandboxId:86507aaa25957ebc7ff023a8f042b236a729503785cd3163a2a44e79daf28a80,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724090536330243526,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-8fjpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bedb900-107a-4f7e-aae7-391b18da4a26,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66fd9c9b32e5e0294c89ebc2ee3c443fda85c40c3ad5b05d42357b4968e8d305,PodSandboxId:3c6e833618ab7965e295c1f82164c28a64e619a82a0a8a90542c16f004e32954,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724090524118032785,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vb66s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9322737a-5f8a-4d5a-a7d1-ba076bc8f2d8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb8cccc1568bbb207d2c7c285f3897a7a425cba60f4dfcf3e8daa8082fc38ef0,PodSandboxId:dc27fd8c8c4a6cec062f5420b6ed3489f5b075fb1eb4e02074e5505c76d238e5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724090520283730684,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwkf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 001a3fe7-633c-44f8-9a8c-7401cec7af54,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426a12b48132d73e1b93e6a7fb5b3420868e384eb280274c6ee81ae6f6bcea12,PodSandboxId:4cd25796bc67e8c9b4a666188feb3addfa806bf372a40c47a0ed8a3e3576c9a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724090509151262837,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf0b1666b512c678d4309e6a2bd2773,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0e66231bf791048a9932068b5f28d8479613545885bea8e42cf9c79913ffccd,PodSandboxId:1f46f8e2ba79c3a9b9a7f9729c154fc9c495e280d0a9fac6dc4fdf837a2e0b73,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1724090509024852872,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 465e756b61a05a6f1c4dfeba2adbdeeb,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a80fb9b7-5e10-4511-adb4-5535d4d21cb5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:18:55 ha-086149 crio[3620]: time="2024-08-19 18:18:55.053807169Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ff66274d-3c89-475e-bf9d-fe3c7a68e0e4 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:18:55 ha-086149 crio[3620]: time="2024-08-19 18:18:55.053898924Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ff66274d-3c89-475e-bf9d-fe3c7a68e0e4 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:18:55 ha-086149 crio[3620]: time="2024-08-19 18:18:55.055284117Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5617336f-40e1-4f8c-b4f7-c2020ccd5bbe name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:18:55 ha-086149 crio[3620]: time="2024-08-19 18:18:55.055718875Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091535055696711,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5617336f-40e1-4f8c-b4f7-c2020ccd5bbe name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:18:55 ha-086149 crio[3620]: time="2024-08-19 18:18:55.056221156Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dfc8da25-e761-4d5b-89c6-156289078802 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:18:55 ha-086149 crio[3620]: time="2024-08-19 18:18:55.056297254Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dfc8da25-e761-4d5b-89c6-156289078802 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:18:55 ha-086149 crio[3620]: time="2024-08-19 18:18:55.056723818Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7fc48458ae307ff361499fde833d54b89f7ed1cc124b8e2e4c5e623d5b59f5cf,PodSandboxId:4a10374978122541ed15e7f43ce8d30cc1d0cd85f051271ab88948bcb2c57a79,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724091270295934817,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c12159a8-5f84-4d19-aa54-7b56a9669f6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b110ed1c7e4de28f673ba115eee8636180545973d22374de2fefcc11c697539,PodSandboxId:834b78e6f8c8ae9b6949554d2864db66ec486375169d8a29f441745a6c13a6c7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724091234291807173,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab6b0fe91f166a5c05b58933ead885f6,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eb0bee9a15dccc2d82fb1b3ac35c0edda4dfaf7f15f58e06a340bf55e8f26ab,PodSandboxId:54b9254dbd54ee93a0df9ad92074a813df205bbacf4bf2950d47a6955ebf62e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724091232293434047,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9269a2cf31966e0bbf30b6554fa311ee,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01d3428fa47f18423ed50d84a758bf632905445652b3088c079a7522697a5d53,PodSandboxId:4a10374978122541ed15e7f43ce8d30cc1d0cd85f051271ab88948bcb2c57a79,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724091227289314593,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c12159a8-5f84-4d19-aa54-7b56a9669f6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31b792a184ef5bf6c6881a58559ac54b18794b0d2bdb0f213f9015a19c994ff0,PodSandboxId:5fdc0c51659ed4b89dc97e11cbfa487bd8403121cd95beb25e6735b3c83aa363,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724091226319223025,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fd2dw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f5e2f831-487f-4edb-b6c1-b391906a6d5b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9e6f6fd570fa5f5a880753efc7f1228506acc92db408df6bf6ed9b5f34cfe93,PodSandboxId:c78d69ffa13ba1619670b2ce62d5e954ea916933dc86eecc61164297731d3363,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724091207063154781,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71315ae10c82422e3efaca00d9b232cb,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2deb18dfc60e51eddc31befa21ffc0090ce2abc67b4511b62104ca5e8342f60,PodSandboxId:c7bab8d9969d8f2b85807cc8e16e713161cd1c353dfcfd272f167836b340da0c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724091193216378555,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vb66s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9322737a-5f8a-4d5a-a7d1-ba076bc8f2d8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:7421b967684844bf1fe8f4abc52f1cd8635544a588cbdb2b910b55bf74594619,PodSandboxId:d4370b1d6fcb3ede7b9a41e432046068b76ec99429ad0424f03c801cdfedc7c1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724091193106196622,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwkf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 001a3fe7-633c-44f8-9a8c-7401cec7af54,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62c3f84d9
e207c67a86a14e215f591225047843d0d3d8ff01470104c28ec3372,PodSandboxId:47c6aecb02b827d90fa98d560860dcb29069184c64ee4755d4c1f590c6ad5989,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091193019658925,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-p65cb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f30449e-d4ea-4d6f-a63a-08551024bd04,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a1b7fec3f151c3ebd32ce721f81861e00daf06da360b6bad7a4c99a4b3c71d5,PodSandboxId:a8bc21a4e7d10603f5f44f0819a6baf14dda6ea43bd4a34b0756f711804ae455,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724091192823979246,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf0b1666b512c678d4309e6a2bd2773,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea2f2cfbcacac8b9d0f716fc5bf8be816dac486447f26b5969f1d79a9031f7ca,PodSandboxId:834b78e6f8c8ae9b6949554d2864db66ec486375169d8a29f441745a6c13a6c7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724091192904907760,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab6b0fe91f166a5c05b58933ead885f6,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4760d8a0d8843fa04600f76c7a9e2b2ba5c4212e748492168d8c00d31ea0d515,PodSandboxId:15a37a0b36621b359e14b4e497dcf9a8bee8c5d328dee2de0d16ab4c727f8823,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724091192778643283,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 465e756b61a05a6f1c4dfeba2adbdeeb,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dbebbcf5b28297a583e961cdeb22de8d630ca8836e6f0ffcca3c4fe28b9a104,PodSandboxId:54b9254dbd54ee93a0df9ad92074a813df205bbacf4bf2950d47a6955ebf62e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724091192765986526,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9269a2cf31966e0bbf30b6554fa311ee,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1560513c5f2f2c855e651ca853a37a183c2c92361d4db5001d39a783f9bf1dec,PodSandboxId:2a5484a378f88550bea42ad9cd40a477fefe20e700d36531a230194dd27918f7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091186996673333,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8fjpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bedb900-107a-4f7e-aae7-391b18da4a26,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef0b28473496e4ab21e3f86bc64eb662e5c22e59e4a56f80f7bdad009460c73d,PodSandboxId:0f784aeccda9e0bff51a30b97a310813be1e271fdaae54f30006645ed5ae31b1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724090682352298631,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fd2dw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f5e2f831-487f-4edb-b6c1-b391906a6d5b,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4208b72f7684106eeabb79597e9a16912d86fddf552d810668e52ee86e4cacf,PodSandboxId:5b83e59b0dd3110115fa51715b6d8f6d29e006636ab031766095bcb6200ff245,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724090536333833063,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-p65cb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f30449e-d4ea-4d6f-a63a-08551024bd04,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86aec3b9357709107938f07e57e09bef332ea9baea288a18bb10389d5108084b,PodSandboxId:86507aaa25957ebc7ff023a8f042b236a729503785cd3163a2a44e79daf28a80,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724090536330243526,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-8fjpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bedb900-107a-4f7e-aae7-391b18da4a26,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66fd9c9b32e5e0294c89ebc2ee3c443fda85c40c3ad5b05d42357b4968e8d305,PodSandboxId:3c6e833618ab7965e295c1f82164c28a64e619a82a0a8a90542c16f004e32954,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724090524118032785,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vb66s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9322737a-5f8a-4d5a-a7d1-ba076bc8f2d8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb8cccc1568bbb207d2c7c285f3897a7a425cba60f4dfcf3e8daa8082fc38ef0,PodSandboxId:dc27fd8c8c4a6cec062f5420b6ed3489f5b075fb1eb4e02074e5505c76d238e5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724090520283730684,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwkf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 001a3fe7-633c-44f8-9a8c-7401cec7af54,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426a12b48132d73e1b93e6a7fb5b3420868e384eb280274c6ee81ae6f6bcea12,PodSandboxId:4cd25796bc67e8c9b4a666188feb3addfa806bf372a40c47a0ed8a3e3576c9a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724090509151262837,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf0b1666b512c678d4309e6a2bd2773,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0e66231bf791048a9932068b5f28d8479613545885bea8e42cf9c79913ffccd,PodSandboxId:1f46f8e2ba79c3a9b9a7f9729c154fc9c495e280d0a9fac6dc4fdf837a2e0b73,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1724090509024852872,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 465e756b61a05a6f1c4dfeba2adbdeeb,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dfc8da25-e761-4d5b-89c6-156289078802 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:18:55 ha-086149 crio[3620]: time="2024-08-19 18:18:55.100429337Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6d91837f-16d2-4a55-ade5-43f47b0290a5 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:18:55 ha-086149 crio[3620]: time="2024-08-19 18:18:55.100520748Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6d91837f-16d2-4a55-ade5-43f47b0290a5 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:18:55 ha-086149 crio[3620]: time="2024-08-19 18:18:55.102320012Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1d4253b7-5a01-4db1-82a7-bf8a857ad271 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:18:55 ha-086149 crio[3620]: time="2024-08-19 18:18:55.102739405Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091535102717798,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1d4253b7-5a01-4db1-82a7-bf8a857ad271 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:18:55 ha-086149 crio[3620]: time="2024-08-19 18:18:55.103287712Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=27006f17-c078-45c1-89e9-3770ddbc6100 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:18:55 ha-086149 crio[3620]: time="2024-08-19 18:18:55.103367314Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=27006f17-c078-45c1-89e9-3770ddbc6100 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:18:55 ha-086149 crio[3620]: time="2024-08-19 18:18:55.103933607Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7fc48458ae307ff361499fde833d54b89f7ed1cc124b8e2e4c5e623d5b59f5cf,PodSandboxId:4a10374978122541ed15e7f43ce8d30cc1d0cd85f051271ab88948bcb2c57a79,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724091270295934817,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c12159a8-5f84-4d19-aa54-7b56a9669f6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b110ed1c7e4de28f673ba115eee8636180545973d22374de2fefcc11c697539,PodSandboxId:834b78e6f8c8ae9b6949554d2864db66ec486375169d8a29f441745a6c13a6c7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724091234291807173,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab6b0fe91f166a5c05b58933ead885f6,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eb0bee9a15dccc2d82fb1b3ac35c0edda4dfaf7f15f58e06a340bf55e8f26ab,PodSandboxId:54b9254dbd54ee93a0df9ad92074a813df205bbacf4bf2950d47a6955ebf62e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724091232293434047,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9269a2cf31966e0bbf30b6554fa311ee,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01d3428fa47f18423ed50d84a758bf632905445652b3088c079a7522697a5d53,PodSandboxId:4a10374978122541ed15e7f43ce8d30cc1d0cd85f051271ab88948bcb2c57a79,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724091227289314593,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c12159a8-5f84-4d19-aa54-7b56a9669f6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31b792a184ef5bf6c6881a58559ac54b18794b0d2bdb0f213f9015a19c994ff0,PodSandboxId:5fdc0c51659ed4b89dc97e11cbfa487bd8403121cd95beb25e6735b3c83aa363,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724091226319223025,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fd2dw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f5e2f831-487f-4edb-b6c1-b391906a6d5b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9e6f6fd570fa5f5a880753efc7f1228506acc92db408df6bf6ed9b5f34cfe93,PodSandboxId:c78d69ffa13ba1619670b2ce62d5e954ea916933dc86eecc61164297731d3363,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724091207063154781,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71315ae10c82422e3efaca00d9b232cb,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2deb18dfc60e51eddc31befa21ffc0090ce2abc67b4511b62104ca5e8342f60,PodSandboxId:c7bab8d9969d8f2b85807cc8e16e713161cd1c353dfcfd272f167836b340da0c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724091193216378555,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vb66s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9322737a-5f8a-4d5a-a7d1-ba076bc8f2d8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:7421b967684844bf1fe8f4abc52f1cd8635544a588cbdb2b910b55bf74594619,PodSandboxId:d4370b1d6fcb3ede7b9a41e432046068b76ec99429ad0424f03c801cdfedc7c1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724091193106196622,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwkf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 001a3fe7-633c-44f8-9a8c-7401cec7af54,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62c3f84d9
e207c67a86a14e215f591225047843d0d3d8ff01470104c28ec3372,PodSandboxId:47c6aecb02b827d90fa98d560860dcb29069184c64ee4755d4c1f590c6ad5989,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091193019658925,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-p65cb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f30449e-d4ea-4d6f-a63a-08551024bd04,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a1b7fec3f151c3ebd32ce721f81861e00daf06da360b6bad7a4c99a4b3c71d5,PodSandboxId:a8bc21a4e7d10603f5f44f0819a6baf14dda6ea43bd4a34b0756f711804ae455,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724091192823979246,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf0b1666b512c678d4309e6a2bd2773,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea2f2cfbcacac8b9d0f716fc5bf8be816dac486447f26b5969f1d79a9031f7ca,PodSandboxId:834b78e6f8c8ae9b6949554d2864db66ec486375169d8a29f441745a6c13a6c7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724091192904907760,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab6b0fe91f166a5c05b58933ead885f6,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4760d8a0d8843fa04600f76c7a9e2b2ba5c4212e748492168d8c00d31ea0d515,PodSandboxId:15a37a0b36621b359e14b4e497dcf9a8bee8c5d328dee2de0d16ab4c727f8823,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724091192778643283,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 465e756b61a05a6f1c4dfeba2adbdeeb,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dbebbcf5b28297a583e961cdeb22de8d630ca8836e6f0ffcca3c4fe28b9a104,PodSandboxId:54b9254dbd54ee93a0df9ad92074a813df205bbacf4bf2950d47a6955ebf62e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724091192765986526,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9269a2cf31966e0bbf30b6554fa311ee,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1560513c5f2f2c855e651ca853a37a183c2c92361d4db5001d39a783f9bf1dec,PodSandboxId:2a5484a378f88550bea42ad9cd40a477fefe20e700d36531a230194dd27918f7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091186996673333,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8fjpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bedb900-107a-4f7e-aae7-391b18da4a26,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef0b28473496e4ab21e3f86bc64eb662e5c22e59e4a56f80f7bdad009460c73d,PodSandboxId:0f784aeccda9e0bff51a30b97a310813be1e271fdaae54f30006645ed5ae31b1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724090682352298631,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fd2dw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f5e2f831-487f-4edb-b6c1-b391906a6d5b,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4208b72f7684106eeabb79597e9a16912d86fddf552d810668e52ee86e4cacf,PodSandboxId:5b83e59b0dd3110115fa51715b6d8f6d29e006636ab031766095bcb6200ff245,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724090536333833063,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-p65cb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f30449e-d4ea-4d6f-a63a-08551024bd04,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86aec3b9357709107938f07e57e09bef332ea9baea288a18bb10389d5108084b,PodSandboxId:86507aaa25957ebc7ff023a8f042b236a729503785cd3163a2a44e79daf28a80,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724090536330243526,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-8fjpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bedb900-107a-4f7e-aae7-391b18da4a26,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66fd9c9b32e5e0294c89ebc2ee3c443fda85c40c3ad5b05d42357b4968e8d305,PodSandboxId:3c6e833618ab7965e295c1f82164c28a64e619a82a0a8a90542c16f004e32954,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724090524118032785,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vb66s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9322737a-5f8a-4d5a-a7d1-ba076bc8f2d8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb8cccc1568bbb207d2c7c285f3897a7a425cba60f4dfcf3e8daa8082fc38ef0,PodSandboxId:dc27fd8c8c4a6cec062f5420b6ed3489f5b075fb1eb4e02074e5505c76d238e5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724090520283730684,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwkf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 001a3fe7-633c-44f8-9a8c-7401cec7af54,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426a12b48132d73e1b93e6a7fb5b3420868e384eb280274c6ee81ae6f6bcea12,PodSandboxId:4cd25796bc67e8c9b4a666188feb3addfa806bf372a40c47a0ed8a3e3576c9a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724090509151262837,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf0b1666b512c678d4309e6a2bd2773,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0e66231bf791048a9932068b5f28d8479613545885bea8e42cf9c79913ffccd,PodSandboxId:1f46f8e2ba79c3a9b9a7f9729c154fc9c495e280d0a9fac6dc4fdf837a2e0b73,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1724090509024852872,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-086149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 465e756b61a05a6f1c4dfeba2adbdeeb,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=27006f17-c078-45c1-89e9-3770ddbc6100 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7fc48458ae307       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       4                   4a10374978122       storage-provisioner
	0b110ed1c7e4d       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      5 minutes ago       Running             kube-controller-manager   2                   834b78e6f8c8a       kube-controller-manager-ha-086149
	8eb0bee9a15dc       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      5 minutes ago       Running             kube-apiserver            3                   54b9254dbd54e       kube-apiserver-ha-086149
	01d3428fa47f1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       3                   4a10374978122       storage-provisioner
	31b792a184ef5       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      5 minutes ago       Running             busybox                   1                   5fdc0c51659ed       busybox-7dff88458-fd2dw
	a9e6f6fd570fa       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      5 minutes ago       Running             kube-vip                  0                   c78d69ffa13ba       kube-vip-ha-086149
	c2deb18dfc60e       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      5 minutes ago       Running             kindnet-cni               1                   c7bab8d9969d8       kindnet-vb66s
	7421b96768484       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      5 minutes ago       Running             kube-proxy                1                   d4370b1d6fcb3       kube-proxy-fwkf2
	62c3f84d9e207       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   47c6aecb02b82       coredns-6f6b679f8f-p65cb
	ea2f2cfbcacac       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      5 minutes ago       Exited              kube-controller-manager   1                   834b78e6f8c8a       kube-controller-manager-ha-086149
	8a1b7fec3f151       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      5 minutes ago       Running             etcd                      1                   a8bc21a4e7d10       etcd-ha-086149
	4760d8a0d8843       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      5 minutes ago       Running             kube-scheduler            1                   15a37a0b36621       kube-scheduler-ha-086149
	3dbebbcf5b282       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      5 minutes ago       Exited              kube-apiserver            2                   54b9254dbd54e       kube-apiserver-ha-086149
	1560513c5f2f2       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   2a5484a378f88       coredns-6f6b679f8f-8fjpd
	ef0b28473496e       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   14 minutes ago      Exited              busybox                   0                   0f784aeccda9e       busybox-7dff88458-fd2dw
	d4208b72f7684       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   5b83e59b0dd31       coredns-6f6b679f8f-p65cb
	86aec3b935770       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   86507aaa25957       coredns-6f6b679f8f-8fjpd
	66fd9c9b32e5e       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    16 minutes ago      Exited              kindnet-cni               0                   3c6e833618ab7       kindnet-vb66s
	eb8cccc1568bb       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      16 minutes ago      Exited              kube-proxy                0                   dc27fd8c8c4a6       kube-proxy-fwkf2
	426a12b48132d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      17 minutes ago      Exited              etcd                      0                   4cd25796bc67e       etcd-ha-086149
	d0e66231bf791       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      17 minutes ago      Exited              kube-scheduler            0                   1f46f8e2ba79c       kube-scheduler-ha-086149
	
	
	==> coredns [1560513c5f2f2c855e651ca853a37a183c2c92361d4db5001d39a783f9bf1dec] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[390561824]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 18:13:21.023) (total time: 10001ms):
	Trace[390561824]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (18:13:31.025)
	Trace[390561824]: [10.001691472s] [10.001691472s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:45622->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:45622->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:45632->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:45632->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [62c3f84d9e207c67a86a14e215f591225047843d0d3d8ff01470104c28ec3372] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:45636->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:45636->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:42590->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1605571601]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 18:13:24.787) (total time: 12203ms):
	Trace[1605571601]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:42590->10.96.0.1:443: read: connection reset by peer 12203ms (18:13:36.990)
	Trace[1605571601]: [12.20334978s] [12.20334978s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:42590->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [86aec3b9357709107938f07e57e09bef332ea9baea288a18bb10389d5108084b] <==
	[INFO] 10.244.2.2:53329 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000136079s
	[INFO] 10.244.0.4:48191 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014988s
	[INFO] 10.244.0.4:47708 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000096718s
	[INFO] 10.244.0.4:42128 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000149115s
	[INFO] 10.244.0.4:49211 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000058729s
	[INFO] 10.244.0.4:41169 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000147844s
	[INFO] 10.244.1.2:55021 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105902s
	[INFO] 10.244.1.2:39523 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000197158s
	[INFO] 10.244.1.2:39402 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000068589s
	[INFO] 10.244.1.2:46940 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086232s
	[INFO] 10.244.2.2:59049 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177439s
	[INFO] 10.244.2.2:48370 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000103075s
	[INFO] 10.244.2.2:36161 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110997s
	[INFO] 10.244.2.2:44839 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079394s
	[INFO] 10.244.1.2:53636 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153191s
	[INFO] 10.244.1.2:46986 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00014037s
	[INFO] 10.244.1.2:39517 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000205565s
	[INFO] 10.244.2.2:34630 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000217644s
	[INFO] 10.244.2.2:48208 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000175515s
	[INFO] 10.244.2.2:42420 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000305788s
	[INFO] 10.244.0.4:49746 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000082325s
	[INFO] 10.244.0.4:48461 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000222115s
	[INFO] 10.244.1.2:58589 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000263104s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d4208b72f7684106eeabb79597e9a16912d86fddf552d810668e52ee86e4cacf] <==
	[INFO] 10.244.2.2:60503 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002804151s
	[INFO] 10.244.2.2:49027 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000124508s
	[INFO] 10.244.0.4:59229 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001769172s
	[INFO] 10.244.0.4:34487 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001315875s
	[INFO] 10.244.0.4:34657 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124575s
	[INFO] 10.244.1.2:49809 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001830693s
	[INFO] 10.244.1.2:60513 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001456039s
	[INFO] 10.244.1.2:58099 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000201903s
	[INFO] 10.244.1.2:36863 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108279s
	[INFO] 10.244.0.4:48767 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119232s
	[INFO] 10.244.0.4:35383 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00018722s
	[INFO] 10.244.0.4:58993 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000063721s
	[INFO] 10.244.0.4:55887 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000059646s
	[INFO] 10.244.1.2:45536 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124964s
	[INFO] 10.244.2.2:45976 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000160498s
	[INFO] 10.244.0.4:38315 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146686s
	[INFO] 10.244.0.4:36553 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000130807s
	[INFO] 10.244.1.2:46657 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00022076s
	[INFO] 10.244.1.2:44650 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000123411s
	[INFO] 10.244.1.2:46585 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000089999s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-086149
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-086149
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411
	                    minikube.k8s.io/name=ha-086149
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T18_01_56_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 18:01:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-086149
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 18:18:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 18:13:54 +0000   Mon, 19 Aug 2024 18:01:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 18:13:54 +0000   Mon, 19 Aug 2024 18:01:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 18:13:54 +0000   Mon, 19 Aug 2024 18:01:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 18:13:54 +0000   Mon, 19 Aug 2024 18:02:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.249
	  Hostname:    ha-086149
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f2adf13588c04842be48ba7ffa571365
	  System UUID:                f2adf135-88c0-4842-be48-ba7ffa571365
	  Boot ID:                    affd916c-f074-4dc0-bd43-4c71cd2f0b12
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-fd2dw              0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-6f6b679f8f-8fjpd             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-6f6b679f8f-p65cb             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-086149                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kindnet-vb66s                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-086149             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-086149    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-fwkf2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-086149             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-086149                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m                     kube-proxy       
	  Normal   Starting                 16m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  17m                    kubelet          Node ha-086149 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     17m                    kubelet          Node ha-086149 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    17m                    kubelet          Node ha-086149 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 17m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  17m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           16m                    node-controller  Node ha-086149 event: Registered Node ha-086149 in Controller
	  Normal   NodeReady                16m                    kubelet          Node ha-086149 status is now: NodeReady
	  Normal   RegisteredNode           15m                    node-controller  Node ha-086149 event: Registered Node ha-086149 in Controller
	  Normal   RegisteredNode           14m                    node-controller  Node ha-086149 event: Registered Node ha-086149 in Controller
	  Warning  ContainerGCFailed        6m (x2 over 7m)        kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             5m55s (x3 over 6m45s)  kubelet          Node ha-086149 status is now: NodeNotReady
	  Normal   RegisteredNode           5m3s                   node-controller  Node ha-086149 event: Registered Node ha-086149 in Controller
	  Normal   RegisteredNode           4m58s                  node-controller  Node ha-086149 event: Registered Node ha-086149 in Controller
	  Normal   RegisteredNode           3m16s                  node-controller  Node ha-086149 event: Registered Node ha-086149 in Controller
	
	
	Name:               ha-086149-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-086149-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411
	                    minikube.k8s.io/name=ha-086149
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T18_02_56_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 18:02:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-086149-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 18:18:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 18:17:41 +0000   Mon, 19 Aug 2024 18:17:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 18:17:41 +0000   Mon, 19 Aug 2024 18:17:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 18:17:41 +0000   Mon, 19 Aug 2024 18:17:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 18:17:41 +0000   Mon, 19 Aug 2024 18:17:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.167
	  Hostname:    ha-086149-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 db74a62099694214b3e6abfad40c4b33
	  System UUID:                db74a620-9969-4214-b3e6-abfad40c4b33
	  Boot ID:                    caf8e9a8-08b4-4ee9-b12f-02973afc1d5b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-vgcdh                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 etcd-ha-086149-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-dgj9c                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-086149-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-086149-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-vx94r                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-086149-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-086149-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  Starting                 4m55s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)      kubelet          Node ha-086149-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)      kubelet          Node ha-086149-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)      kubelet          Node ha-086149-m02 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           16m                    node-controller  Node ha-086149-m02 event: Registered Node ha-086149-m02 in Controller
	  Normal  RegisteredNode           15m                    node-controller  Node ha-086149-m02 event: Registered Node ha-086149-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-086149-m02 event: Registered Node ha-086149-m02 in Controller
	  Normal  NodeNotReady             12m                    node-controller  Node ha-086149-m02 status is now: NodeNotReady
	  Normal  NodeHasNoDiskPressure    5m26s (x8 over 5m26s)  kubelet          Node ha-086149-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 5m26s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m26s (x8 over 5m26s)  kubelet          Node ha-086149-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     5m26s (x7 over 5m26s)  kubelet          Node ha-086149-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m3s                   node-controller  Node ha-086149-m02 event: Registered Node ha-086149-m02 in Controller
	  Normal  RegisteredNode           4m58s                  node-controller  Node ha-086149-m02 event: Registered Node ha-086149-m02 in Controller
	  Normal  RegisteredNode           3m17s                  node-controller  Node ha-086149-m02 event: Registered Node ha-086149-m02 in Controller
	  Normal  NodeNotReady             103s                   node-controller  Node ha-086149-m02 status is now: NodeNotReady
	
	
	Name:               ha-086149-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-086149-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411
	                    minikube.k8s.io/name=ha-086149
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T18_05_16_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 18:05:15 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-086149-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 18:16:27 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 19 Aug 2024 18:16:07 +0000   Mon, 19 Aug 2024 18:17:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 19 Aug 2024 18:16:07 +0000   Mon, 19 Aug 2024 18:17:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 19 Aug 2024 18:16:07 +0000   Mon, 19 Aug 2024 18:17:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 19 Aug 2024 18:16:07 +0000   Mon, 19 Aug 2024 18:17:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.173
	  Hostname:    ha-086149-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e1e9d0d713474980a7c895cb88752846
	  System UUID:                e1e9d0d7-1347-4980-a7c8-95cb88752846
	  Boot ID:                    09f37e7f-8da3-4260-ba70-0b5b1342b6fc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-kt7km    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kindnet-gvr65              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-9t8vw           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m44s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m (x2 over 13m)      kubelet          Node ha-086149-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x2 over 13m)      kubelet          Node ha-086149-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x2 over 13m)      kubelet          Node ha-086149-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                    node-controller  Node ha-086149-m04 event: Registered Node ha-086149-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-086149-m04 event: Registered Node ha-086149-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-086149-m04 event: Registered Node ha-086149-m04 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-086149-m04 status is now: NodeReady
	  Normal   RegisteredNode           5m3s                   node-controller  Node ha-086149-m04 event: Registered Node ha-086149-m04 in Controller
	  Normal   RegisteredNode           4m58s                  node-controller  Node ha-086149-m04 event: Registered Node ha-086149-m04 in Controller
	  Normal   RegisteredNode           3m17s                  node-controller  Node ha-086149-m04 event: Registered Node ha-086149-m04 in Controller
	  Warning  Rebooted                 2m48s                  kubelet          Node ha-086149-m04 has been rebooted, boot id: 09f37e7f-8da3-4260-ba70-0b5b1342b6fc
	  Normal   NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  2m48s (x2 over 2m48s)  kubelet          Node ha-086149-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m48s (x2 over 2m48s)  kubelet          Node ha-086149-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m48s (x2 over 2m48s)  kubelet          Node ha-086149-m04 status is now: NodeHasSufficientPID
	  Normal   NodeReady                2m48s                  kubelet          Node ha-086149-m04 status is now: NodeReady
	  Normal   NodeNotReady             103s (x2 over 4m23s)   node-controller  Node ha-086149-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +9.178691] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.057166] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065842] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.172283] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.148890] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.254962] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +4.015563] systemd-fstab-generator[771]: Ignoring "noauto" option for root device
	[  +4.054508] systemd-fstab-generator[906]: Ignoring "noauto" option for root device
	[  +0.063854] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.951467] systemd-fstab-generator[1326]: Ignoring "noauto" option for root device
	[  +0.096986] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.046961] kauditd_printk_skb: 21 callbacks suppressed
	[Aug19 18:02] kauditd_printk_skb: 37 callbacks suppressed
	[ +54.874778] kauditd_printk_skb: 26 callbacks suppressed
	[Aug19 18:12] systemd-fstab-generator[3539]: Ignoring "noauto" option for root device
	[  +0.152453] systemd-fstab-generator[3551]: Ignoring "noauto" option for root device
	[  +0.178007] systemd-fstab-generator[3565]: Ignoring "noauto" option for root device
	[  +0.154410] systemd-fstab-generator[3577]: Ignoring "noauto" option for root device
	[  +0.275147] systemd-fstab-generator[3605]: Ignoring "noauto" option for root device
	[Aug19 18:13] systemd-fstab-generator[3707]: Ignoring "noauto" option for root device
	[  +0.091829] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.154917] kauditd_printk_skb: 22 callbacks suppressed
	[  +6.555350] kauditd_printk_skb: 75 callbacks suppressed
	[ +32.934513] kauditd_printk_skb: 5 callbacks suppressed
	[  +8.279321] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [426a12b48132d73e1b93e6a7fb5b3420868e384eb280274c6ee81ae6f6bcea12] <==
	2024/08/19 18:11:27 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-19T18:11:27.737563Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15368412145618819627,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-19T18:11:27.763672Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.249:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T18:11:27.763729Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.249:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-19T18:11:27.763789Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"318ee90c3446d547","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-19T18:11:27.763948Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"d67143b3afdcc30"}
	{"level":"info","ts":"2024-08-19T18:11:27.763988Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"d67143b3afdcc30"}
	{"level":"info","ts":"2024-08-19T18:11:27.764030Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"d67143b3afdcc30"}
	{"level":"info","ts":"2024-08-19T18:11:27.764178Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30"}
	{"level":"info","ts":"2024-08-19T18:11:27.764234Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30"}
	{"level":"info","ts":"2024-08-19T18:11:27.764266Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"318ee90c3446d547","remote-peer-id":"d67143b3afdcc30"}
	{"level":"info","ts":"2024-08-19T18:11:27.764277Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"d67143b3afdcc30"}
	{"level":"info","ts":"2024-08-19T18:11:27.764282Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"15d98aedf6fb70a2"}
	{"level":"info","ts":"2024-08-19T18:11:27.764291Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"15d98aedf6fb70a2"}
	{"level":"info","ts":"2024-08-19T18:11:27.764327Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"15d98aedf6fb70a2"}
	{"level":"info","ts":"2024-08-19T18:11:27.764412Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"318ee90c3446d547","remote-peer-id":"15d98aedf6fb70a2"}
	{"level":"info","ts":"2024-08-19T18:11:27.764456Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"318ee90c3446d547","remote-peer-id":"15d98aedf6fb70a2"}
	{"level":"info","ts":"2024-08-19T18:11:27.764502Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"318ee90c3446d547","remote-peer-id":"15d98aedf6fb70a2"}
	{"level":"info","ts":"2024-08-19T18:11:27.764514Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"15d98aedf6fb70a2"}
	{"level":"info","ts":"2024-08-19T18:11:27.767384Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.249:2380"}
	{"level":"warn","ts":"2024-08-19T18:11:27.767499Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"9.032148929s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-08-19T18:11:27.767544Z","caller":"traceutil/trace.go:171","msg":"trace[1303976489] range","detail":"{range_begin:; range_end:; }","duration":"9.032205059s","start":"2024-08-19T18:11:18.735326Z","end":"2024-08-19T18:11:27.767531Z","steps":["trace[1303976489] 'agreement among raft nodes before linearized reading'  (duration: 9.032147733s)"],"step_count":1}
	{"level":"error","ts":"2024-08-19T18:11:27.767591Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: server stopped\n[+]data_corruption ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-08-19T18:11:27.767772Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.249:2380"}
	{"level":"info","ts":"2024-08-19T18:11:27.767804Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-086149","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.249:2380"],"advertise-client-urls":["https://192.168.39.249:2379"]}
	
	
	==> etcd [8a1b7fec3f151c3ebd32ce721f81861e00daf06da360b6bad7a4c99a4b3c71d5] <==
	{"level":"info","ts":"2024-08-19T18:15:29.691697Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"318ee90c3446d547","remote-peer-id":"15d98aedf6fb70a2"}
	{"level":"info","ts":"2024-08-19T18:15:29.703852Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"318ee90c3446d547","to":"15d98aedf6fb70a2","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-19T18:15:29.703908Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"318ee90c3446d547","remote-peer-id":"15d98aedf6fb70a2"}
	{"level":"info","ts":"2024-08-19T18:15:31.036989Z","caller":"traceutil/trace.go:171","msg":"trace[1006721948] transaction","detail":"{read_only:false; response_revision:2494; number_of_response:1; }","duration":"107.568279ms","start":"2024-08-19T18:15:30.929394Z","end":"2024-08-19T18:15:31.036962Z","steps":["trace[1006721948] 'process raft request'  (duration: 107.424392ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T18:15:35.491345Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.426574ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T18:15:35.491437Z","caller":"traceutil/trace.go:171","msg":"trace[111029861] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2511; }","duration":"137.55958ms","start":"2024-08-19T18:15:35.353859Z","end":"2024-08-19T18:15:35.491418Z","steps":["trace[111029861] 'agreement among raft nodes before linearized reading'  (duration: 74.578777ms)","trace[111029861] 'range keys from in-memory index tree'  (duration: 62.831484ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T18:16:11.041700Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.556582ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-8snb5\" ","response":"range_response_count:1 size:4870"}
	{"level":"info","ts":"2024-08-19T18:16:11.041897Z","caller":"traceutil/trace.go:171","msg":"trace[521001956] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-8snb5; range_end:; response_count:1; response_revision:2652; }","duration":"101.818476ms","start":"2024-08-19T18:16:10.940057Z","end":"2024-08-19T18:16:11.041876Z","steps":["trace[521001956] 'agreement among raft nodes before linearized reading'  (duration: 99.332111ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T18:16:21.566524Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 switched to configuration voters=(965762889719598128 3571047793177318727)"}
	{"level":"info","ts":"2024-08-19T18:16:21.569000Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"ba21282e7acd13d6","local-member-id":"318ee90c3446d547","removed-remote-peer-id":"15d98aedf6fb70a2","removed-remote-peer-urls":["https://192.168.39.121:2380"]}
	{"level":"info","ts":"2024-08-19T18:16:21.569056Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"15d98aedf6fb70a2"}
	{"level":"warn","ts":"2024-08-19T18:16:21.569210Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"15d98aedf6fb70a2"}
	{"level":"info","ts":"2024-08-19T18:16:21.569262Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"15d98aedf6fb70a2"}
	{"level":"warn","ts":"2024-08-19T18:16:21.569378Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"15d98aedf6fb70a2"}
	{"level":"info","ts":"2024-08-19T18:16:21.569399Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"15d98aedf6fb70a2"}
	{"level":"info","ts":"2024-08-19T18:16:21.569789Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"318ee90c3446d547","remote-peer-id":"15d98aedf6fb70a2"}
	{"level":"warn","ts":"2024-08-19T18:16:21.570042Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"318ee90c3446d547","remote-peer-id":"15d98aedf6fb70a2","error":"context canceled"}
	{"level":"warn","ts":"2024-08-19T18:16:21.570178Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"15d98aedf6fb70a2","error":"failed to read 15d98aedf6fb70a2 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-08-19T18:16:21.570214Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"318ee90c3446d547","remote-peer-id":"15d98aedf6fb70a2"}
	{"level":"warn","ts":"2024-08-19T18:16:21.570392Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"318ee90c3446d547","remote-peer-id":"15d98aedf6fb70a2","error":"context canceled"}
	{"level":"info","ts":"2024-08-19T18:16:21.570434Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"318ee90c3446d547","remote-peer-id":"15d98aedf6fb70a2"}
	{"level":"info","ts":"2024-08-19T18:16:21.570455Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"15d98aedf6fb70a2"}
	{"level":"info","ts":"2024-08-19T18:16:21.570470Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"318ee90c3446d547","removed-remote-peer-id":"15d98aedf6fb70a2"}
	{"level":"warn","ts":"2024-08-19T18:16:21.596199Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"318ee90c3446d547","remote-peer-id-stream-handler":"318ee90c3446d547","remote-peer-id-from":"15d98aedf6fb70a2"}
	{"level":"warn","ts":"2024-08-19T18:16:21.597918Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"318ee90c3446d547","remote-peer-id-stream-handler":"318ee90c3446d547","remote-peer-id-from":"15d98aedf6fb70a2"}
	
	
	==> kernel <==
	 18:18:55 up 17 min,  0 users,  load average: 0.36, 0.52, 0.34
	Linux ha-086149 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [66fd9c9b32e5e0294c89ebc2ee3c443fda85c40c3ad5b05d42357b4968e8d305] <==
	I0819 18:10:55.256725       1 main.go:322] Node ha-086149-m04 has CIDR [10.244.3.0/24] 
	I0819 18:11:05.253880       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0819 18:11:05.254010       1 main.go:322] Node ha-086149-m04 has CIDR [10.244.3.0/24] 
	I0819 18:11:05.254352       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0819 18:11:05.254496       1 main.go:299] handling current node
	I0819 18:11:05.254549       1 main.go:295] Handling node with IPs: map[192.168.39.167:{}]
	I0819 18:11:05.254571       1 main.go:322] Node ha-086149-m02 has CIDR [10.244.1.0/24] 
	I0819 18:11:05.254680       1 main.go:295] Handling node with IPs: map[192.168.39.121:{}]
	I0819 18:11:05.254702       1 main.go:322] Node ha-086149-m03 has CIDR [10.244.2.0/24] 
	I0819 18:11:15.261601       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0819 18:11:15.261806       1 main.go:299] handling current node
	I0819 18:11:15.261851       1 main.go:295] Handling node with IPs: map[192.168.39.167:{}]
	I0819 18:11:15.261870       1 main.go:322] Node ha-086149-m02 has CIDR [10.244.1.0/24] 
	I0819 18:11:15.262042       1 main.go:295] Handling node with IPs: map[192.168.39.121:{}]
	I0819 18:11:15.262067       1 main.go:322] Node ha-086149-m03 has CIDR [10.244.2.0/24] 
	I0819 18:11:15.262273       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0819 18:11:15.262306       1 main.go:322] Node ha-086149-m04 has CIDR [10.244.3.0/24] 
	I0819 18:11:25.253256       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0819 18:11:25.253371       1 main.go:299] handling current node
	I0819 18:11:25.253427       1 main.go:295] Handling node with IPs: map[192.168.39.167:{}]
	I0819 18:11:25.253446       1 main.go:322] Node ha-086149-m02 has CIDR [10.244.1.0/24] 
	I0819 18:11:25.253659       1 main.go:295] Handling node with IPs: map[192.168.39.121:{}]
	I0819 18:11:25.253683       1 main.go:322] Node ha-086149-m03 has CIDR [10.244.2.0/24] 
	I0819 18:11:25.253744       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0819 18:11:25.253762       1 main.go:322] Node ha-086149-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [c2deb18dfc60e51eddc31befa21ffc0090ce2abc67b4511b62104ca5e8342f60] <==
	I0819 18:18:14.450331       1 main.go:322] Node ha-086149-m02 has CIDR [10.244.1.0/24] 
	I0819 18:18:24.452293       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0819 18:18:24.452408       1 main.go:322] Node ha-086149-m04 has CIDR [10.244.3.0/24] 
	I0819 18:18:24.452544       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0819 18:18:24.452566       1 main.go:299] handling current node
	I0819 18:18:24.452587       1 main.go:295] Handling node with IPs: map[192.168.39.167:{}]
	I0819 18:18:24.452615       1 main.go:322] Node ha-086149-m02 has CIDR [10.244.1.0/24] 
	I0819 18:18:34.449588       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0819 18:18:34.452197       1 main.go:299] handling current node
	I0819 18:18:34.452238       1 main.go:295] Handling node with IPs: map[192.168.39.167:{}]
	I0819 18:18:34.452252       1 main.go:322] Node ha-086149-m02 has CIDR [10.244.1.0/24] 
	I0819 18:18:34.452461       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0819 18:18:34.452486       1 main.go:322] Node ha-086149-m04 has CIDR [10.244.3.0/24] 
	I0819 18:18:44.455777       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0819 18:18:44.455832       1 main.go:299] handling current node
	I0819 18:18:44.455848       1 main.go:295] Handling node with IPs: map[192.168.39.167:{}]
	I0819 18:18:44.455854       1 main.go:322] Node ha-086149-m02 has CIDR [10.244.1.0/24] 
	I0819 18:18:44.455988       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0819 18:18:44.456012       1 main.go:322] Node ha-086149-m04 has CIDR [10.244.3.0/24] 
	I0819 18:18:54.459255       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0819 18:18:54.459285       1 main.go:299] handling current node
	I0819 18:18:54.459302       1 main.go:295] Handling node with IPs: map[192.168.39.167:{}]
	I0819 18:18:54.459306       1 main.go:322] Node ha-086149-m02 has CIDR [10.244.1.0/24] 
	I0819 18:18:54.459431       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0819 18:18:54.459436       1 main.go:322] Node ha-086149-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [3dbebbcf5b28297a583e961cdeb22de8d630ca8836e6f0ffcca3c4fe28b9a104] <==
	I0819 18:13:13.608871       1 options.go:228] external host was not specified, using 192.168.39.249
	I0819 18:13:13.613712       1 server.go:142] Version: v1.31.0
	I0819 18:13:13.619756       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 18:13:14.538281       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0819 18:13:14.541682       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 18:13:14.546515       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0819 18:13:14.546607       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0819 18:13:14.546859       1 instance.go:232] Using reconciler: lease
	W0819 18:13:34.534239       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0819 18:13:34.534414       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0819 18:13:34.548163       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0819 18:13:34.548314       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [8eb0bee9a15dccc2d82fb1b3ac35c0edda4dfaf7f15f58e06a340bf55e8f26ab] <==
	I0819 18:13:54.134935       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0819 18:13:54.211004       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0819 18:13:54.211057       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0819 18:13:54.212805       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0819 18:13:54.213000       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0819 18:13:54.213127       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0819 18:13:54.218385       1 shared_informer.go:320] Caches are synced for configmaps
	I0819 18:13:54.218453       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0819 18:13:54.218460       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0819 18:13:54.220682       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0819 18:13:54.224192       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 18:13:54.224226       1 policy_source.go:224] refreshing policies
	W0819 18:13:54.227449       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.121 192.168.39.167]
	I0819 18:13:54.228676       1 controller.go:615] quota admission added evaluator for: endpoints
	I0819 18:13:54.237009       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0819 18:13:54.237056       1 aggregator.go:171] initial CRD sync complete...
	I0819 18:13:54.237075       1 autoregister_controller.go:144] Starting autoregister controller
	I0819 18:13:54.237121       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0819 18:13:54.237128       1 cache.go:39] Caches are synced for autoregister controller
	I0819 18:13:54.238876       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0819 18:13:54.241874       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0819 18:13:54.305856       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0819 18:13:55.146014       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0819 18:13:55.755504       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.121 192.168.39.167 192.168.39.249]
	W0819 18:16:35.769818       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.167 192.168.39.249]
	
	
	==> kube-controller-manager [0b110ed1c7e4de28f673ba115eee8636180545973d22374de2fefcc11c697539] <==
	E0819 18:17:17.624457       1 gc_controller.go:151] "Failed to get node" err="node \"ha-086149-m03\" not found" logger="pod-garbage-collector-controller" node="ha-086149-m03"
	E0819 18:17:17.624463       1 gc_controller.go:151] "Failed to get node" err="node \"ha-086149-m03\" not found" logger="pod-garbage-collector-controller" node="ha-086149-m03"
	E0819 18:17:17.624467       1 gc_controller.go:151] "Failed to get node" err="node \"ha-086149-m03\" not found" logger="pod-garbage-collector-controller" node="ha-086149-m03"
	I0819 18:17:17.641902       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m02"
	I0819 18:17:17.642929       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-086149-m03"
	I0819 18:17:17.672343       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-086149-m03"
	I0819 18:17:17.672682       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-086149-m03"
	I0819 18:17:17.704204       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-086149-m03"
	I0819 18:17:17.704378       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-086149-m03"
	I0819 18:17:17.741610       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-086149-m03"
	I0819 18:17:17.741884       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-086149-m03"
	I0819 18:17:17.774232       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-086149-m03"
	I0819 18:17:17.774283       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-8snb5"
	I0819 18:17:17.801562       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-8snb5"
	I0819 18:17:17.801666       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-086149-m03"
	I0819 18:17:17.833407       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-086149-m03"
	I0819 18:17:17.833447       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-x87ch"
	I0819 18:17:17.859300       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-x87ch"
	I0819 18:17:22.885268       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m04"
	I0819 18:17:27.723164       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m04"
	I0819 18:17:41.922180       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m02"
	I0819 18:17:41.946324       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m02"
	I0819 18:17:42.632342       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-086149-m02"
	I0819 18:17:49.916314       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="34.86841ms"
	I0819 18:17:49.916420       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="50.75µs"
	
	
	==> kube-controller-manager [ea2f2cfbcacac8b9d0f716fc5bf8be816dac486447f26b5969f1d79a9031f7ca] <==
	I0819 18:13:14.057185       1 serving.go:386] Generated self-signed cert in-memory
	I0819 18:13:14.479168       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0819 18:13:14.479210       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 18:13:14.481055       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0819 18:13:14.481286       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0819 18:13:14.481813       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0819 18:13:14.481871       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0819 18:13:35.558373       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.249:8443/healthz\": dial tcp 192.168.39.249:8443: connect: connection refused"
	
	
	==> kube-proxy [7421b967684844bf1fe8f4abc52f1cd8635544a588cbdb2b910b55bf74594619] <==
	E0819 18:13:14.654644       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-086149\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0819 18:13:17.727705       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-086149\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0819 18:13:20.798256       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-086149\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0819 18:13:26.944396       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-086149\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0819 18:13:36.157495       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-086149\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0819 18:13:54.660920       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.249"]
	E0819 18:13:54.663530       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 18:13:55.157974       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 18:13:55.158351       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 18:13:55.158489       1 server_linux.go:169] "Using iptables Proxier"
	I0819 18:13:55.164853       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 18:13:55.169656       1 server.go:483] "Version info" version="v1.31.0"
	I0819 18:13:55.169915       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 18:13:55.174365       1 config.go:197] "Starting service config controller"
	I0819 18:13:55.174478       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 18:13:55.176684       1 config.go:104] "Starting endpoint slice config controller"
	I0819 18:13:55.177212       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 18:13:55.178024       1 config.go:326] "Starting node config controller"
	I0819 18:13:55.178076       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 18:13:55.277940       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 18:13:55.278565       1 shared_informer.go:320] Caches are synced for service config
	I0819 18:13:55.278585       1 shared_informer.go:320] Caches are synced for node config
	W0819 18:17:20.784269       1 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0819 18:17:20.784503       1 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0819 18:17:20.784668       1 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	
	
	==> kube-proxy [eb8cccc1568bbb207d2c7c285f3897a7a425cba60f4dfcf3e8daa8082fc38ef0] <==
	E0819 18:10:25.703221       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1900\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 18:10:25.704174       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-086149&resourceVersion=1867": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 18:10:25.707259       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-086149&resourceVersion=1867\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 18:10:25.718147       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 18:10:25.718292       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1900\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 18:10:28.766492       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 18:10:28.766573       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1900\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 18:10:31.839383       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 18:10:31.839529       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1900\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 18:10:31.840527       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-086149&resourceVersion=1867": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 18:10:31.840582       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-086149&resourceVersion=1867\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 18:10:34.910480       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 18:10:34.910735       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1900\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 18:10:41.054583       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-086149&resourceVersion=1867": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 18:10:41.054663       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-086149&resourceVersion=1867\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 18:10:44.126300       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 18:10:44.127077       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1900\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 18:10:47.198727       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 18:10:47.198928       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1900\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 18:10:56.414227       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-086149&resourceVersion=1867": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 18:10:56.414371       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-086149&resourceVersion=1867\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 18:11:05.630418       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 18:11:05.630530       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1900\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 18:11:08.701861       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 18:11:08.702426       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1900\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [4760d8a0d8843fa04600f76c7a9e2b2ba5c4212e748492168d8c00d31ea0d515] <==
	E0819 18:13:44.705213       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.249:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0819 18:13:45.943647       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.249:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0819 18:13:45.943806       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.249:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0819 18:13:46.072894       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.249:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0819 18:13:46.072954       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.249:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0819 18:13:50.753255       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.249:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0819 18:13:50.753431       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.249:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0819 18:13:50.775833       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.249:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0819 18:13:50.775953       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.249:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0819 18:13:54.190864       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 18:13:54.191062       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0819 18:13:54.191711       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 18:13:54.192990       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:13:54.193433       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 18:13:54.195199       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 18:13:54.195706       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 18:13:54.195817       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:13:54.196032       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 18:13:54.196148       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 18:13:54.196296       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0819 18:13:54.196459       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0819 18:14:15.865213       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0819 18:16:18.266827       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-kt7km\": pod busybox-7dff88458-kt7km is already assigned to node \"ha-086149-m04\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-kt7km" node="ha-086149-m04"
	E0819 18:16:18.267358       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-kt7km\": pod busybox-7dff88458-kt7km is already assigned to node \"ha-086149-m04\"" pod="default/busybox-7dff88458-kt7km"
	I0819 18:16:18.267487       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-kt7km" node="ha-086149-m04"
	
	
	==> kube-scheduler [d0e66231bf791048a9932068b5f28d8479613545885bea8e42cf9c79913ffccd] <==
	E0819 18:04:39.322857       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-fd2dw\": pod busybox-7dff88458-fd2dw is already assigned to node \"ha-086149\"" pod="default/busybox-7dff88458-fd2dw"
	I0819 18:04:39.322879       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-fd2dw" node="ha-086149"
	E0819 18:04:39.328354       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-vgcdh\": pod busybox-7dff88458-vgcdh is already assigned to node \"ha-086149-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-vgcdh" node="ha-086149-m02"
	E0819 18:04:39.328444       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-vgcdh\": pod busybox-7dff88458-vgcdh is already assigned to node \"ha-086149-m02\"" pod="default/busybox-7dff88458-vgcdh"
	E0819 18:11:04.562137       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0819 18:11:10.701998       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0819 18:11:12.036397       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0819 18:11:12.392187       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0819 18:11:13.316775       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0819 18:11:14.080432       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0819 18:11:15.559037       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0819 18:11:16.554424       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0819 18:11:16.591958       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0819 18:11:16.676641       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0819 18:11:18.981726       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0819 18:11:20.048874       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0819 18:11:21.237680       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0819 18:11:23.228000       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0819 18:11:23.304646       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	W0819 18:11:26.223810       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 18:11:26.223879       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0819 18:11:27.688620       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0819 18:11:27.689515       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0819 18:11:27.689929       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0819 18:11:27.696259       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 19 18:17:45 ha-086149 kubelet[1333]: E0819 18:17:45.581213    1333 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091465579488368,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:17:55 ha-086149 kubelet[1333]: E0819 18:17:55.297583    1333 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 18:17:55 ha-086149 kubelet[1333]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 18:17:55 ha-086149 kubelet[1333]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 18:17:55 ha-086149 kubelet[1333]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 18:17:55 ha-086149 kubelet[1333]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 18:17:55 ha-086149 kubelet[1333]: E0819 18:17:55.582864    1333 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091475582609834,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:17:55 ha-086149 kubelet[1333]: E0819 18:17:55.582887    1333 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091475582609834,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:18:05 ha-086149 kubelet[1333]: E0819 18:18:05.585069    1333 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091485584553932,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:18:05 ha-086149 kubelet[1333]: E0819 18:18:05.585538    1333 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091485584553932,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:18:15 ha-086149 kubelet[1333]: E0819 18:18:15.587044    1333 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091495586829051,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:18:15 ha-086149 kubelet[1333]: E0819 18:18:15.587065    1333 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091495586829051,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:18:25 ha-086149 kubelet[1333]: E0819 18:18:25.590062    1333 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091505589286452,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:18:25 ha-086149 kubelet[1333]: E0819 18:18:25.590146    1333 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091505589286452,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:18:35 ha-086149 kubelet[1333]: E0819 18:18:35.591585    1333 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091515591058384,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:18:35 ha-086149 kubelet[1333]: E0819 18:18:35.591610    1333 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091515591058384,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:18:45 ha-086149 kubelet[1333]: E0819 18:18:45.593515    1333 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091525593246318,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:18:45 ha-086149 kubelet[1333]: E0819 18:18:45.593555    1333 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091525593246318,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:18:55 ha-086149 kubelet[1333]: E0819 18:18:55.297566    1333 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 18:18:55 ha-086149 kubelet[1333]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 18:18:55 ha-086149 kubelet[1333]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 18:18:55 ha-086149 kubelet[1333]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 18:18:55 ha-086149 kubelet[1333]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 18:18:55 ha-086149 kubelet[1333]: E0819 18:18:55.595014    1333 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091535594769758,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:18:55 ha-086149 kubelet[1333]: E0819 18:18:55.595037    1333 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091535594769758,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 18:18:54.670610  399985 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19468-372744/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-086149 -n ha-086149
helpers_test.go:261: (dbg) Run:  kubectl --context ha-086149 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.91s)
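The "bufio.Scanner: token too long" error in the stderr above comes from Go's scanner hitting its default 64 KiB per-token limit while reading lastStart.txt, whose log lines (like the cluster-config dumps later in this report) can run far longer than that. A minimal sketch of reading such a file with a larger scanner buffer via Scanner.Buffer; the file path and the 1 MiB cap are illustrative assumptions, not minikube's actual logs code:

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Illustrative path only; the real lastStart.txt path is shown in the stderr above.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		scanner := bufio.NewScanner(f)
		// bufio.MaxScanTokenSize is 64 KiB by default; allow lines up to 1 MiB
		// so a single very long log line does not fail with "token too long".
		scanner.Buffer(make([]byte, 0, 64*1024), 1024*1024)
		for scanner.Scan() {
			fmt.Println(scanner.Text())
		}
		if err := scanner.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}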

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (331.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-528433
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-528433
E0819 18:35:24.365708  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/functional-499773/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-528433: exit status 82 (2m1.872212702s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-528433-m03"  ...
	* Stopping node "multinode-528433-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-528433" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-528433 --wait=true -v=8 --alsologtostderr
E0819 18:37:10.115802  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:38:27.434637  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/functional-499773/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-528433 --wait=true -v=8 --alsologtostderr: (3m27.363509394s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-528433
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-528433 -n multinode-528433
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-528433 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-528433 logs -n 25: (1.609035841s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-528433 ssh -n                                                                 | multinode-528433 | jenkins | v1.33.1 | 19 Aug 24 18:33 UTC | 19 Aug 24 18:33 UTC |
	|         | multinode-528433-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-528433 cp multinode-528433-m02:/home/docker/cp-test.txt                       | multinode-528433 | jenkins | v1.33.1 | 19 Aug 24 18:33 UTC | 19 Aug 24 18:33 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1208495116/001/cp-test_multinode-528433-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-528433 ssh -n                                                                 | multinode-528433 | jenkins | v1.33.1 | 19 Aug 24 18:33 UTC | 19 Aug 24 18:33 UTC |
	|         | multinode-528433-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-528433 cp multinode-528433-m02:/home/docker/cp-test.txt                       | multinode-528433 | jenkins | v1.33.1 | 19 Aug 24 18:33 UTC | 19 Aug 24 18:33 UTC |
	|         | multinode-528433:/home/docker/cp-test_multinode-528433-m02_multinode-528433.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-528433 ssh -n                                                                 | multinode-528433 | jenkins | v1.33.1 | 19 Aug 24 18:33 UTC | 19 Aug 24 18:33 UTC |
	|         | multinode-528433-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-528433 ssh -n multinode-528433 sudo cat                                       | multinode-528433 | jenkins | v1.33.1 | 19 Aug 24 18:33 UTC | 19 Aug 24 18:33 UTC |
	|         | /home/docker/cp-test_multinode-528433-m02_multinode-528433.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-528433 cp multinode-528433-m02:/home/docker/cp-test.txt                       | multinode-528433 | jenkins | v1.33.1 | 19 Aug 24 18:33 UTC | 19 Aug 24 18:33 UTC |
	|         | multinode-528433-m03:/home/docker/cp-test_multinode-528433-m02_multinode-528433-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-528433 ssh -n                                                                 | multinode-528433 | jenkins | v1.33.1 | 19 Aug 24 18:33 UTC | 19 Aug 24 18:33 UTC |
	|         | multinode-528433-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-528433 ssh -n multinode-528433-m03 sudo cat                                   | multinode-528433 | jenkins | v1.33.1 | 19 Aug 24 18:33 UTC | 19 Aug 24 18:33 UTC |
	|         | /home/docker/cp-test_multinode-528433-m02_multinode-528433-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-528433 cp testdata/cp-test.txt                                                | multinode-528433 | jenkins | v1.33.1 | 19 Aug 24 18:33 UTC | 19 Aug 24 18:33 UTC |
	|         | multinode-528433-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-528433 ssh -n                                                                 | multinode-528433 | jenkins | v1.33.1 | 19 Aug 24 18:33 UTC | 19 Aug 24 18:33 UTC |
	|         | multinode-528433-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-528433 cp multinode-528433-m03:/home/docker/cp-test.txt                       | multinode-528433 | jenkins | v1.33.1 | 19 Aug 24 18:33 UTC | 19 Aug 24 18:33 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1208495116/001/cp-test_multinode-528433-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-528433 ssh -n                                                                 | multinode-528433 | jenkins | v1.33.1 | 19 Aug 24 18:33 UTC | 19 Aug 24 18:33 UTC |
	|         | multinode-528433-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-528433 cp multinode-528433-m03:/home/docker/cp-test.txt                       | multinode-528433 | jenkins | v1.33.1 | 19 Aug 24 18:33 UTC | 19 Aug 24 18:33 UTC |
	|         | multinode-528433:/home/docker/cp-test_multinode-528433-m03_multinode-528433.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-528433 ssh -n                                                                 | multinode-528433 | jenkins | v1.33.1 | 19 Aug 24 18:33 UTC | 19 Aug 24 18:33 UTC |
	|         | multinode-528433-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-528433 ssh -n multinode-528433 sudo cat                                       | multinode-528433 | jenkins | v1.33.1 | 19 Aug 24 18:33 UTC | 19 Aug 24 18:33 UTC |
	|         | /home/docker/cp-test_multinode-528433-m03_multinode-528433.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-528433 cp multinode-528433-m03:/home/docker/cp-test.txt                       | multinode-528433 | jenkins | v1.33.1 | 19 Aug 24 18:33 UTC | 19 Aug 24 18:33 UTC |
	|         | multinode-528433-m02:/home/docker/cp-test_multinode-528433-m03_multinode-528433-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-528433 ssh -n                                                                 | multinode-528433 | jenkins | v1.33.1 | 19 Aug 24 18:33 UTC | 19 Aug 24 18:33 UTC |
	|         | multinode-528433-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-528433 ssh -n multinode-528433-m02 sudo cat                                   | multinode-528433 | jenkins | v1.33.1 | 19 Aug 24 18:33 UTC | 19 Aug 24 18:33 UTC |
	|         | /home/docker/cp-test_multinode-528433-m03_multinode-528433-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-528433 node stop m03                                                          | multinode-528433 | jenkins | v1.33.1 | 19 Aug 24 18:33 UTC | 19 Aug 24 18:33 UTC |
	| node    | multinode-528433 node start                                                             | multinode-528433 | jenkins | v1.33.1 | 19 Aug 24 18:33 UTC | 19 Aug 24 18:34 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-528433                                                                | multinode-528433 | jenkins | v1.33.1 | 19 Aug 24 18:34 UTC |                     |
	| stop    | -p multinode-528433                                                                     | multinode-528433 | jenkins | v1.33.1 | 19 Aug 24 18:34 UTC |                     |
	| start   | -p multinode-528433                                                                     | multinode-528433 | jenkins | v1.33.1 | 19 Aug 24 18:36 UTC | 19 Aug 24 18:39 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-528433                                                                | multinode-528433 | jenkins | v1.33.1 | 19 Aug 24 18:39 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 18:36:05
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 18:36:05.241418  409340 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:36:05.241693  409340 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:36:05.241704  409340 out.go:358] Setting ErrFile to fd 2...
	I0819 18:36:05.241708  409340 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:36:05.241899  409340 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-372744/.minikube/bin
	I0819 18:36:05.242459  409340 out.go:352] Setting JSON to false
	I0819 18:36:05.243480  409340 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8308,"bootTime":1724084257,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 18:36:05.243543  409340 start.go:139] virtualization: kvm guest
	I0819 18:36:05.245989  409340 out.go:177] * [multinode-528433] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 18:36:05.247617  409340 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 18:36:05.247644  409340 notify.go:220] Checking for updates...
	I0819 18:36:05.250302  409340 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 18:36:05.251820  409340 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 18:36:05.253233  409340 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 18:36:05.254530  409340 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 18:36:05.255854  409340 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 18:36:05.257617  409340 config.go:182] Loaded profile config "multinode-528433": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:36:05.257697  409340 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 18:36:05.258125  409340 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:36:05.258169  409340 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:36:05.273549  409340 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41741
	I0819 18:36:05.274024  409340 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:36:05.274603  409340 main.go:141] libmachine: Using API Version  1
	I0819 18:36:05.274624  409340 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:36:05.274995  409340 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:36:05.275201  409340 main.go:141] libmachine: (multinode-528433) Calling .DriverName
	I0819 18:36:05.310759  409340 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 18:36:05.312080  409340 start.go:297] selected driver: kvm2
	I0819 18:36:05.312102  409340 start.go:901] validating driver "kvm2" against &{Name:multinode-528433 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.0 ClusterName:multinode-528433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.107 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.113 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ing
ress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryM
irror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:36:05.312261  409340 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 18:36:05.312563  409340 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:36:05.312634  409340 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19468-372744/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 18:36:05.327995  409340 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 18:36:05.328690  409340 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 18:36:05.328766  409340 cni.go:84] Creating CNI manager for ""
	I0819 18:36:05.328778  409340 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0819 18:36:05.328841  409340 start.go:340] cluster config:
	{Name:multinode-528433 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-528433 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.107 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.113 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false
kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwareP
ath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:36:05.328980  409340 iso.go:125] acquiring lock: {Name:mk4c0ac1c3202b1a296739df622960e7a0bd8566 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:36:05.330798  409340 out.go:177] * Starting "multinode-528433" primary control-plane node in "multinode-528433" cluster
	I0819 18:36:05.332102  409340 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 18:36:05.332144  409340 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 18:36:05.332162  409340 cache.go:56] Caching tarball of preloaded images
	I0819 18:36:05.332248  409340 preload.go:172] Found /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 18:36:05.332262  409340 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 18:36:05.332397  409340 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/multinode-528433/config.json ...
	I0819 18:36:05.332628  409340 start.go:360] acquireMachinesLock for multinode-528433: {Name:mk24ba67a747357e9ce40f1e460d2bb0bc59cc75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 18:36:05.332683  409340 start.go:364] duration metric: took 33.324µs to acquireMachinesLock for "multinode-528433"
	I0819 18:36:05.332704  409340 start.go:96] Skipping create...Using existing machine configuration
	I0819 18:36:05.332714  409340 fix.go:54] fixHost starting: 
	I0819 18:36:05.332980  409340 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:36:05.333030  409340 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:36:05.347703  409340 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36757
	I0819 18:36:05.348203  409340 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:36:05.348717  409340 main.go:141] libmachine: Using API Version  1
	I0819 18:36:05.348740  409340 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:36:05.349054  409340 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:36:05.349247  409340 main.go:141] libmachine: (multinode-528433) Calling .DriverName
	I0819 18:36:05.349414  409340 main.go:141] libmachine: (multinode-528433) Calling .GetState
	I0819 18:36:05.350843  409340 fix.go:112] recreateIfNeeded on multinode-528433: state=Running err=<nil>
	W0819 18:36:05.350865  409340 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 18:36:05.352601  409340 out.go:177] * Updating the running kvm2 "multinode-528433" VM ...
	I0819 18:36:05.353730  409340 machine.go:93] provisionDockerMachine start ...
	I0819 18:36:05.353747  409340 main.go:141] libmachine: (multinode-528433) Calling .DriverName
	I0819 18:36:05.353959  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHHostname
	I0819 18:36:05.356553  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:36:05.356948  409340 main.go:141] libmachine: (multinode-528433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:95:69", ip: ""} in network mk-multinode-528433: {Iface:virbr1 ExpiryTime:2024-08-19 19:30:36 +0000 UTC Type:0 Mac:52:54:00:78:95:69 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-528433 Clientid:01:52:54:00:78:95:69}
	I0819 18:36:05.356969  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined IP address 192.168.39.168 and MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:36:05.357144  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHPort
	I0819 18:36:05.357314  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHKeyPath
	I0819 18:36:05.357483  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHKeyPath
	I0819 18:36:05.357621  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHUsername
	I0819 18:36:05.357774  409340 main.go:141] libmachine: Using SSH client type: native
	I0819 18:36:05.357963  409340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0819 18:36:05.357981  409340 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 18:36:05.465212  409340 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-528433
	
	I0819 18:36:05.465249  409340 main.go:141] libmachine: (multinode-528433) Calling .GetMachineName
	I0819 18:36:05.465499  409340 buildroot.go:166] provisioning hostname "multinode-528433"
	I0819 18:36:05.465536  409340 main.go:141] libmachine: (multinode-528433) Calling .GetMachineName
	I0819 18:36:05.465716  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHHostname
	I0819 18:36:05.468392  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:36:05.468774  409340 main.go:141] libmachine: (multinode-528433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:95:69", ip: ""} in network mk-multinode-528433: {Iface:virbr1 ExpiryTime:2024-08-19 19:30:36 +0000 UTC Type:0 Mac:52:54:00:78:95:69 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-528433 Clientid:01:52:54:00:78:95:69}
	I0819 18:36:05.468817  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined IP address 192.168.39.168 and MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:36:05.468966  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHPort
	I0819 18:36:05.469141  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHKeyPath
	I0819 18:36:05.469316  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHKeyPath
	I0819 18:36:05.469474  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHUsername
	I0819 18:36:05.469683  409340 main.go:141] libmachine: Using SSH client type: native
	I0819 18:36:05.469851  409340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0819 18:36:05.469863  409340 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-528433 && echo "multinode-528433" | sudo tee /etc/hostname
	I0819 18:36:05.591349  409340 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-528433
	
	I0819 18:36:05.591382  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHHostname
	I0819 18:36:05.594036  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:36:05.594427  409340 main.go:141] libmachine: (multinode-528433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:95:69", ip: ""} in network mk-multinode-528433: {Iface:virbr1 ExpiryTime:2024-08-19 19:30:36 +0000 UTC Type:0 Mac:52:54:00:78:95:69 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-528433 Clientid:01:52:54:00:78:95:69}
	I0819 18:36:05.594455  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined IP address 192.168.39.168 and MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:36:05.594619  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHPort
	I0819 18:36:05.594811  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHKeyPath
	I0819 18:36:05.595006  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHKeyPath
	I0819 18:36:05.595173  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHUsername
	I0819 18:36:05.595343  409340 main.go:141] libmachine: Using SSH client type: native
	I0819 18:36:05.595561  409340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0819 18:36:05.595581  409340 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-528433' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-528433/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-528433' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 18:36:05.705410  409340 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 18:36:05.705443  409340 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19468-372744/.minikube CaCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19468-372744/.minikube}
	I0819 18:36:05.705472  409340 buildroot.go:174] setting up certificates
	I0819 18:36:05.705482  409340 provision.go:84] configureAuth start
	I0819 18:36:05.705500  409340 main.go:141] libmachine: (multinode-528433) Calling .GetMachineName
	I0819 18:36:05.705778  409340 main.go:141] libmachine: (multinode-528433) Calling .GetIP
	I0819 18:36:05.708341  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:36:05.708663  409340 main.go:141] libmachine: (multinode-528433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:95:69", ip: ""} in network mk-multinode-528433: {Iface:virbr1 ExpiryTime:2024-08-19 19:30:36 +0000 UTC Type:0 Mac:52:54:00:78:95:69 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-528433 Clientid:01:52:54:00:78:95:69}
	I0819 18:36:05.708687  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined IP address 192.168.39.168 and MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:36:05.708825  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHHostname
	I0819 18:36:05.711069  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:36:05.711415  409340 main.go:141] libmachine: (multinode-528433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:95:69", ip: ""} in network mk-multinode-528433: {Iface:virbr1 ExpiryTime:2024-08-19 19:30:36 +0000 UTC Type:0 Mac:52:54:00:78:95:69 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-528433 Clientid:01:52:54:00:78:95:69}
	I0819 18:36:05.711444  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined IP address 192.168.39.168 and MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:36:05.711568  409340 provision.go:143] copyHostCerts
	I0819 18:36:05.711601  409340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 18:36:05.711637  409340 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem, removing ...
	I0819 18:36:05.711659  409340 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 18:36:05.711756  409340 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem (1082 bytes)
	I0819 18:36:05.711887  409340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 18:36:05.711916  409340 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem, removing ...
	I0819 18:36:05.711925  409340 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 18:36:05.711978  409340 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem (1123 bytes)
	I0819 18:36:05.712069  409340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 18:36:05.712100  409340 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem, removing ...
	I0819 18:36:05.712108  409340 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 18:36:05.712144  409340 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem (1675 bytes)
	I0819 18:36:05.712235  409340 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem org=jenkins.multinode-528433 san=[127.0.0.1 192.168.39.168 localhost minikube multinode-528433]
	I0819 18:36:05.888467  409340 provision.go:177] copyRemoteCerts
	I0819 18:36:05.888541  409340 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 18:36:05.888569  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHHostname
	I0819 18:36:05.891196  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:36:05.891499  409340 main.go:141] libmachine: (multinode-528433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:95:69", ip: ""} in network mk-multinode-528433: {Iface:virbr1 ExpiryTime:2024-08-19 19:30:36 +0000 UTC Type:0 Mac:52:54:00:78:95:69 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-528433 Clientid:01:52:54:00:78:95:69}
	I0819 18:36:05.891542  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined IP address 192.168.39.168 and MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:36:05.891723  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHPort
	I0819 18:36:05.891942  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHKeyPath
	I0819 18:36:05.892101  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHUsername
	I0819 18:36:05.892248  409340 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/multinode-528433/id_rsa Username:docker}
	I0819 18:36:05.979819  409340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 18:36:05.979897  409340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 18:36:06.016416  409340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 18:36:06.016495  409340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0819 18:36:06.052421  409340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 18:36:06.052495  409340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 18:36:06.084002  409340 provision.go:87] duration metric: took 378.505809ms to configureAuth
	I0819 18:36:06.084029  409340 buildroot.go:189] setting minikube options for container-runtime
	I0819 18:36:06.084264  409340 config.go:182] Loaded profile config "multinode-528433": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:36:06.084341  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHHostname
	I0819 18:36:06.086967  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:36:06.087346  409340 main.go:141] libmachine: (multinode-528433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:95:69", ip: ""} in network mk-multinode-528433: {Iface:virbr1 ExpiryTime:2024-08-19 19:30:36 +0000 UTC Type:0 Mac:52:54:00:78:95:69 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-528433 Clientid:01:52:54:00:78:95:69}
	I0819 18:36:06.087367  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined IP address 192.168.39.168 and MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:36:06.087562  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHPort
	I0819 18:36:06.087797  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHKeyPath
	I0819 18:36:06.087948  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHKeyPath
	I0819 18:36:06.088081  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHUsername
	I0819 18:36:06.088228  409340 main.go:141] libmachine: Using SSH client type: native
	I0819 18:36:06.088431  409340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0819 18:36:06.088447  409340 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 18:37:36.975094  409340 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 18:37:36.975130  409340 machine.go:96] duration metric: took 1m31.621385972s to provisionDockerMachine
	I0819 18:37:36.975149  409340 start.go:293] postStartSetup for "multinode-528433" (driver="kvm2")
	I0819 18:37:36.975163  409340 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 18:37:36.975189  409340 main.go:141] libmachine: (multinode-528433) Calling .DriverName
	I0819 18:37:36.975616  409340 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 18:37:36.975648  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHHostname
	I0819 18:37:36.979355  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:37:36.979897  409340 main.go:141] libmachine: (multinode-528433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:95:69", ip: ""} in network mk-multinode-528433: {Iface:virbr1 ExpiryTime:2024-08-19 19:30:36 +0000 UTC Type:0 Mac:52:54:00:78:95:69 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-528433 Clientid:01:52:54:00:78:95:69}
	I0819 18:37:36.979935  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined IP address 192.168.39.168 and MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:37:36.980094  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHPort
	I0819 18:37:36.980300  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHKeyPath
	I0819 18:37:36.980517  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHUsername
	I0819 18:37:36.980680  409340 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/multinode-528433/id_rsa Username:docker}
	I0819 18:37:37.062741  409340 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 18:37:37.067178  409340 command_runner.go:130] > NAME=Buildroot
	I0819 18:37:37.067203  409340 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0819 18:37:37.067208  409340 command_runner.go:130] > ID=buildroot
	I0819 18:37:37.067213  409340 command_runner.go:130] > VERSION_ID=2023.02.9
	I0819 18:37:37.067218  409340 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0819 18:37:37.067286  409340 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 18:37:37.067312  409340 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/addons for local assets ...
	I0819 18:37:37.067394  409340 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/files for local assets ...
	I0819 18:37:37.067491  409340 filesync.go:149] local asset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> 3800092.pem in /etc/ssl/certs
	I0819 18:37:37.067504  409340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> /etc/ssl/certs/3800092.pem
	I0819 18:37:37.067642  409340 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 18:37:37.076788  409340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 18:37:37.100746  409340 start.go:296] duration metric: took 125.579857ms for postStartSetup
	I0819 18:37:37.100792  409340 fix.go:56] duration metric: took 1m31.768078659s for fixHost
	I0819 18:37:37.100815  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHHostname
	I0819 18:37:37.104040  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:37:37.104523  409340 main.go:141] libmachine: (multinode-528433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:95:69", ip: ""} in network mk-multinode-528433: {Iface:virbr1 ExpiryTime:2024-08-19 19:30:36 +0000 UTC Type:0 Mac:52:54:00:78:95:69 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-528433 Clientid:01:52:54:00:78:95:69}
	I0819 18:37:37.104558  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined IP address 192.168.39.168 and MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:37:37.104788  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHPort
	I0819 18:37:37.104975  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHKeyPath
	I0819 18:37:37.105152  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHKeyPath
	I0819 18:37:37.105286  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHUsername
	I0819 18:37:37.105478  409340 main.go:141] libmachine: Using SSH client type: native
	I0819 18:37:37.105657  409340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0819 18:37:37.105667  409340 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 18:37:37.208978  409340 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724092657.181349835
	
	I0819 18:37:37.209009  409340 fix.go:216] guest clock: 1724092657.181349835
	I0819 18:37:37.209020  409340 fix.go:229] Guest: 2024-08-19 18:37:37.181349835 +0000 UTC Remote: 2024-08-19 18:37:37.100796894 +0000 UTC m=+91.897693888 (delta=80.552941ms)
	I0819 18:37:37.209069  409340 fix.go:200] guest clock delta is within tolerance: 80.552941ms
	I0819 18:37:37.209076  409340 start.go:83] releasing machines lock for "multinode-528433", held for 1m31.876380758s
	I0819 18:37:37.209102  409340 main.go:141] libmachine: (multinode-528433) Calling .DriverName
	I0819 18:37:37.209366  409340 main.go:141] libmachine: (multinode-528433) Calling .GetIP
	I0819 18:37:37.212281  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:37:37.212786  409340 main.go:141] libmachine: (multinode-528433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:95:69", ip: ""} in network mk-multinode-528433: {Iface:virbr1 ExpiryTime:2024-08-19 19:30:36 +0000 UTC Type:0 Mac:52:54:00:78:95:69 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-528433 Clientid:01:52:54:00:78:95:69}
	I0819 18:37:37.212818  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined IP address 192.168.39.168 and MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:37:37.212994  409340 main.go:141] libmachine: (multinode-528433) Calling .DriverName
	I0819 18:37:37.213610  409340 main.go:141] libmachine: (multinode-528433) Calling .DriverName
	I0819 18:37:37.213809  409340 main.go:141] libmachine: (multinode-528433) Calling .DriverName
	I0819 18:37:37.213930  409340 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 18:37:37.213990  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHHostname
	I0819 18:37:37.214031  409340 ssh_runner.go:195] Run: cat /version.json
	I0819 18:37:37.214053  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHHostname
	I0819 18:37:37.216850  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:37:37.217075  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:37:37.217272  409340 main.go:141] libmachine: (multinode-528433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:95:69", ip: ""} in network mk-multinode-528433: {Iface:virbr1 ExpiryTime:2024-08-19 19:30:36 +0000 UTC Type:0 Mac:52:54:00:78:95:69 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-528433 Clientid:01:52:54:00:78:95:69}
	I0819 18:37:37.217299  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined IP address 192.168.39.168 and MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:37:37.217510  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHPort
	I0819 18:37:37.217618  409340 main.go:141] libmachine: (multinode-528433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:95:69", ip: ""} in network mk-multinode-528433: {Iface:virbr1 ExpiryTime:2024-08-19 19:30:36 +0000 UTC Type:0 Mac:52:54:00:78:95:69 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-528433 Clientid:01:52:54:00:78:95:69}
	I0819 18:37:37.217654  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined IP address 192.168.39.168 and MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:37:37.217682  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHKeyPath
	I0819 18:37:37.217840  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHUsername
	I0819 18:37:37.217922  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHPort
	I0819 18:37:37.218007  409340 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/multinode-528433/id_rsa Username:docker}
	I0819 18:37:37.218106  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHKeyPath
	I0819 18:37:37.218256  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHUsername
	I0819 18:37:37.218413  409340 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/multinode-528433/id_rsa Username:docker}
	I0819 18:37:37.293455  409340 command_runner.go:130] > {"iso_version": "v1.33.1-1723740674-19452", "kicbase_version": "v0.0.44-1723650208-19443", "minikube_version": "v1.33.1", "commit": "3bcdc720eef782394bf386d06fca73d1934e08fb"}
	I0819 18:37:37.293749  409340 ssh_runner.go:195] Run: systemctl --version
	I0819 18:37:37.319282  409340 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0819 18:37:37.320112  409340 command_runner.go:130] > systemd 252 (252)
	I0819 18:37:37.320151  409340 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0819 18:37:37.320215  409340 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 18:37:37.490885  409340 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 18:37:37.496979  409340 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0819 18:37:37.497040  409340 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 18:37:37.497109  409340 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 18:37:37.506544  409340 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0819 18:37:37.506570  409340 start.go:495] detecting cgroup driver to use...
	I0819 18:37:37.506648  409340 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 18:37:37.526272  409340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 18:37:37.541307  409340 docker.go:217] disabling cri-docker service (if available) ...
	I0819 18:37:37.541375  409340 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 18:37:37.556301  409340 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 18:37:37.571492  409340 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 18:37:37.720875  409340 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 18:37:37.856449  409340 docker.go:233] disabling docker service ...
	I0819 18:37:37.856528  409340 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 18:37:37.872304  409340 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 18:37:37.886136  409340 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 18:37:38.027196  409340 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 18:37:38.163145  409340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 18:37:38.177742  409340 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 18:37:38.197266  409340 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0819 18:37:38.197796  409340 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 18:37:38.197862  409340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:37:38.208611  409340 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 18:37:38.208692  409340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:37:38.219806  409340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:37:38.230464  409340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:37:38.241087  409340 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 18:37:38.251997  409340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:37:38.262590  409340 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:37:38.274329  409340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
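	The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses registry.k8s.io/pause:3.10 as the pause image, "cgroupfs" as the cgroup manager, conmon_cgroup = "pod", and an unprivileged-port sysctl. As a minimal, illustrative Go sketch (not minikube's code; the file path and expected values are simply taken from the log above), re-checking the two key settings on the node could look like this:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// Path and expected values taken from the log above.
		f, err := os.Open("/etc/crio/crio.conf.d/02-crio.conf")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		// Settings the sed edits are expected to have left in place.
		want := map[string]string{
			"pause_image":    `"registry.k8s.io/pause:3.10"`,
			"cgroup_manager": `"cgroupfs"`,
		}
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			for key, val := range want {
				if strings.HasPrefix(line, key) && strings.Contains(line, val) {
					delete(want, key) // found, stop looking for it
				}
			}
		}
		if len(want) == 0 {
			fmt.Println("02-crio.conf has the expected pause image and cgroup driver")
		} else {
			fmt.Println("settings still missing:", want)
		}
	}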
	I0819 18:37:38.284996  409340 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 18:37:38.294585  409340 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0819 18:37:38.294757  409340 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 18:37:38.304591  409340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:37:38.439597  409340 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 18:37:45.311807  409340 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.872162836s)
	I0819 18:37:45.311843  409340 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 18:37:45.311894  409340 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 18:37:45.316736  409340 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0819 18:37:45.316769  409340 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0819 18:37:45.316782  409340 command_runner.go:130] > Device: 0,22	Inode: 1323        Links: 1
	I0819 18:37:45.316792  409340 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0819 18:37:45.316808  409340 command_runner.go:130] > Access: 2024-08-19 18:37:45.165858708 +0000
	I0819 18:37:45.316816  409340 command_runner.go:130] > Modify: 2024-08-19 18:37:45.165858708 +0000
	I0819 18:37:45.316824  409340 command_runner.go:130] > Change: 2024-08-19 18:37:45.165858708 +0000
	I0819 18:37:45.316829  409340 command_runner.go:130] >  Birth: -
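	After restarting crio, the log waits up to 60s for the runtime socket and then stats it to confirm /var/run/crio/crio.sock exists and is a socket. A minimal sketch of that kind of wait loop (illustrative only; the path and 60s budget are the values shown in the log):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func main() {
		// Socket path and timeout taken from the log above.
		const sock = "/var/run/crio/crio.sock"
		deadline := time.Now().Add(60 * time.Second)
		for time.Now().Before(deadline) {
			// Stat the path and confirm it is a unix socket, as the stat output above shows.
			if fi, err := os.Stat(sock); err == nil && fi.Mode()&os.ModeSocket != 0 {
				fmt.Println("crio socket is ready:", sock)
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Fprintln(os.Stderr, "timed out waiting for", sock)
		os.Exit(1)
	}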
	I0819 18:37:45.316860  409340 start.go:563] Will wait 60s for crictl version
	I0819 18:37:45.316945  409340 ssh_runner.go:195] Run: which crictl
	I0819 18:37:45.321654  409340 command_runner.go:130] > /usr/bin/crictl
	I0819 18:37:45.321716  409340 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 18:37:45.358596  409340 command_runner.go:130] > Version:  0.1.0
	I0819 18:37:45.358623  409340 command_runner.go:130] > RuntimeName:  cri-o
	I0819 18:37:45.358629  409340 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0819 18:37:45.358635  409340 command_runner.go:130] > RuntimeApiVersion:  v1
	I0819 18:37:45.359707  409340 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 18:37:45.359776  409340 ssh_runner.go:195] Run: crio --version
	I0819 18:37:45.392589  409340 command_runner.go:130] > crio version 1.29.1
	I0819 18:37:45.392617  409340 command_runner.go:130] > Version:        1.29.1
	I0819 18:37:45.392633  409340 command_runner.go:130] > GitCommit:      unknown
	I0819 18:37:45.392637  409340 command_runner.go:130] > GitCommitDate:  unknown
	I0819 18:37:45.392641  409340 command_runner.go:130] > GitTreeState:   clean
	I0819 18:37:45.392647  409340 command_runner.go:130] > BuildDate:      2024-08-15T22:11:01Z
	I0819 18:37:45.392651  409340 command_runner.go:130] > GoVersion:      go1.21.6
	I0819 18:37:45.392655  409340 command_runner.go:130] > Compiler:       gc
	I0819 18:37:45.392660  409340 command_runner.go:130] > Platform:       linux/amd64
	I0819 18:37:45.392663  409340 command_runner.go:130] > Linkmode:       dynamic
	I0819 18:37:45.392668  409340 command_runner.go:130] > BuildTags:      
	I0819 18:37:45.392673  409340 command_runner.go:130] >   containers_image_ostree_stub
	I0819 18:37:45.392677  409340 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0819 18:37:45.392681  409340 command_runner.go:130] >   btrfs_noversion
	I0819 18:37:45.392689  409340 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0819 18:37:45.392693  409340 command_runner.go:130] >   libdm_no_deferred_remove
	I0819 18:37:45.392699  409340 command_runner.go:130] >   seccomp
	I0819 18:37:45.392710  409340 command_runner.go:130] > LDFlags:          unknown
	I0819 18:37:45.392717  409340 command_runner.go:130] > SeccompEnabled:   true
	I0819 18:37:45.392721  409340 command_runner.go:130] > AppArmorEnabled:  false
	I0819 18:37:45.392799  409340 ssh_runner.go:195] Run: crio --version
	I0819 18:37:45.423212  409340 command_runner.go:130] > crio version 1.29.1
	I0819 18:37:45.423236  409340 command_runner.go:130] > Version:        1.29.1
	I0819 18:37:45.423243  409340 command_runner.go:130] > GitCommit:      unknown
	I0819 18:37:45.423247  409340 command_runner.go:130] > GitCommitDate:  unknown
	I0819 18:37:45.423251  409340 command_runner.go:130] > GitTreeState:   clean
	I0819 18:37:45.423257  409340 command_runner.go:130] > BuildDate:      2024-08-15T22:11:01Z
	I0819 18:37:45.423263  409340 command_runner.go:130] > GoVersion:      go1.21.6
	I0819 18:37:45.423268  409340 command_runner.go:130] > Compiler:       gc
	I0819 18:37:45.423276  409340 command_runner.go:130] > Platform:       linux/amd64
	I0819 18:37:45.423282  409340 command_runner.go:130] > Linkmode:       dynamic
	I0819 18:37:45.423293  409340 command_runner.go:130] > BuildTags:      
	I0819 18:37:45.423303  409340 command_runner.go:130] >   containers_image_ostree_stub
	I0819 18:37:45.423310  409340 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0819 18:37:45.423319  409340 command_runner.go:130] >   btrfs_noversion
	I0819 18:37:45.423329  409340 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0819 18:37:45.423338  409340 command_runner.go:130] >   libdm_no_deferred_remove
	I0819 18:37:45.423342  409340 command_runner.go:130] >   seccomp
	I0819 18:37:45.423351  409340 command_runner.go:130] > LDFlags:          unknown
	I0819 18:37:45.423361  409340 command_runner.go:130] > SeccompEnabled:   true
	I0819 18:37:45.423372  409340 command_runner.go:130] > AppArmorEnabled:  false
	I0819 18:37:45.425600  409340 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 18:37:45.427114  409340 main.go:141] libmachine: (multinode-528433) Calling .GetIP
	I0819 18:37:45.429627  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:37:45.429961  409340 main.go:141] libmachine: (multinode-528433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:95:69", ip: ""} in network mk-multinode-528433: {Iface:virbr1 ExpiryTime:2024-08-19 19:30:36 +0000 UTC Type:0 Mac:52:54:00:78:95:69 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-528433 Clientid:01:52:54:00:78:95:69}
	I0819 18:37:45.429991  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined IP address 192.168.39.168 and MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:37:45.430197  409340 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 18:37:45.434470  409340 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0819 18:37:45.434572  409340 kubeadm.go:883] updating cluster {Name:multinode-528433 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-528433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.107 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.113 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 18:37:45.434732  409340 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 18:37:45.434785  409340 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 18:37:45.494067  409340 command_runner.go:130] > {
	I0819 18:37:45.494096  409340 command_runner.go:130] >   "images": [
	I0819 18:37:45.494100  409340 command_runner.go:130] >     {
	I0819 18:37:45.494109  409340 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0819 18:37:45.494116  409340 command_runner.go:130] >       "repoTags": [
	I0819 18:37:45.494126  409340 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0819 18:37:45.494132  409340 command_runner.go:130] >       ],
	I0819 18:37:45.494139  409340 command_runner.go:130] >       "repoDigests": [
	I0819 18:37:45.494154  409340 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0819 18:37:45.494166  409340 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0819 18:37:45.494173  409340 command_runner.go:130] >       ],
	I0819 18:37:45.494177  409340 command_runner.go:130] >       "size": "87165492",
	I0819 18:37:45.494181  409340 command_runner.go:130] >       "uid": null,
	I0819 18:37:45.494185  409340 command_runner.go:130] >       "username": "",
	I0819 18:37:45.494193  409340 command_runner.go:130] >       "spec": null,
	I0819 18:37:45.494198  409340 command_runner.go:130] >       "pinned": false
	I0819 18:37:45.494203  409340 command_runner.go:130] >     },
	I0819 18:37:45.494219  409340 command_runner.go:130] >     {
	I0819 18:37:45.494233  409340 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0819 18:37:45.494242  409340 command_runner.go:130] >       "repoTags": [
	I0819 18:37:45.494251  409340 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0819 18:37:45.494260  409340 command_runner.go:130] >       ],
	I0819 18:37:45.494266  409340 command_runner.go:130] >       "repoDigests": [
	I0819 18:37:45.494277  409340 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0819 18:37:45.494285  409340 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0819 18:37:45.494292  409340 command_runner.go:130] >       ],
	I0819 18:37:45.494299  409340 command_runner.go:130] >       "size": "87190579",
	I0819 18:37:45.494309  409340 command_runner.go:130] >       "uid": null,
	I0819 18:37:45.494326  409340 command_runner.go:130] >       "username": "",
	I0819 18:37:45.494335  409340 command_runner.go:130] >       "spec": null,
	I0819 18:37:45.494345  409340 command_runner.go:130] >       "pinned": false
	I0819 18:37:45.494351  409340 command_runner.go:130] >     },
	I0819 18:37:45.494359  409340 command_runner.go:130] >     {
	I0819 18:37:45.494367  409340 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0819 18:37:45.494387  409340 command_runner.go:130] >       "repoTags": [
	I0819 18:37:45.494400  409340 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0819 18:37:45.494409  409340 command_runner.go:130] >       ],
	I0819 18:37:45.494419  409340 command_runner.go:130] >       "repoDigests": [
	I0819 18:37:45.494434  409340 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0819 18:37:45.494448  409340 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0819 18:37:45.494455  409340 command_runner.go:130] >       ],
	I0819 18:37:45.494459  409340 command_runner.go:130] >       "size": "1363676",
	I0819 18:37:45.494465  409340 command_runner.go:130] >       "uid": null,
	I0819 18:37:45.494472  409340 command_runner.go:130] >       "username": "",
	I0819 18:37:45.494479  409340 command_runner.go:130] >       "spec": null,
	I0819 18:37:45.494489  409340 command_runner.go:130] >       "pinned": false
	I0819 18:37:45.494498  409340 command_runner.go:130] >     },
	I0819 18:37:45.494504  409340 command_runner.go:130] >     {
	I0819 18:37:45.494516  409340 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0819 18:37:45.494525  409340 command_runner.go:130] >       "repoTags": [
	I0819 18:37:45.494535  409340 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0819 18:37:45.494542  409340 command_runner.go:130] >       ],
	I0819 18:37:45.494546  409340 command_runner.go:130] >       "repoDigests": [
	I0819 18:37:45.494557  409340 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0819 18:37:45.494577  409340 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0819 18:37:45.494585  409340 command_runner.go:130] >       ],
	I0819 18:37:45.494593  409340 command_runner.go:130] >       "size": "31470524",
	I0819 18:37:45.494601  409340 command_runner.go:130] >       "uid": null,
	I0819 18:37:45.494608  409340 command_runner.go:130] >       "username": "",
	I0819 18:37:45.494617  409340 command_runner.go:130] >       "spec": null,
	I0819 18:37:45.494625  409340 command_runner.go:130] >       "pinned": false
	I0819 18:37:45.494629  409340 command_runner.go:130] >     },
	I0819 18:37:45.494636  409340 command_runner.go:130] >     {
	I0819 18:37:45.494646  409340 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0819 18:37:45.494656  409340 command_runner.go:130] >       "repoTags": [
	I0819 18:37:45.494664  409340 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0819 18:37:45.494673  409340 command_runner.go:130] >       ],
	I0819 18:37:45.494680  409340 command_runner.go:130] >       "repoDigests": [
	I0819 18:37:45.494696  409340 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0819 18:37:45.494710  409340 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0819 18:37:45.494716  409340 command_runner.go:130] >       ],
	I0819 18:37:45.494720  409340 command_runner.go:130] >       "size": "61245718",
	I0819 18:37:45.494729  409340 command_runner.go:130] >       "uid": null,
	I0819 18:37:45.494736  409340 command_runner.go:130] >       "username": "nonroot",
	I0819 18:37:45.494745  409340 command_runner.go:130] >       "spec": null,
	I0819 18:37:45.494752  409340 command_runner.go:130] >       "pinned": false
	I0819 18:37:45.494761  409340 command_runner.go:130] >     },
	I0819 18:37:45.494767  409340 command_runner.go:130] >     {
	I0819 18:37:45.494779  409340 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0819 18:37:45.494788  409340 command_runner.go:130] >       "repoTags": [
	I0819 18:37:45.494796  409340 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0819 18:37:45.494802  409340 command_runner.go:130] >       ],
	I0819 18:37:45.494807  409340 command_runner.go:130] >       "repoDigests": [
	I0819 18:37:45.494820  409340 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0819 18:37:45.494835  409340 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0819 18:37:45.494843  409340 command_runner.go:130] >       ],
	I0819 18:37:45.494850  409340 command_runner.go:130] >       "size": "149009664",
	I0819 18:37:45.494858  409340 command_runner.go:130] >       "uid": {
	I0819 18:37:45.494865  409340 command_runner.go:130] >         "value": "0"
	I0819 18:37:45.494873  409340 command_runner.go:130] >       },
	I0819 18:37:45.494879  409340 command_runner.go:130] >       "username": "",
	I0819 18:37:45.494886  409340 command_runner.go:130] >       "spec": null,
	I0819 18:37:45.494890  409340 command_runner.go:130] >       "pinned": false
	I0819 18:37:45.494911  409340 command_runner.go:130] >     },
	I0819 18:37:45.494916  409340 command_runner.go:130] >     {
	I0819 18:37:45.494929  409340 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0819 18:37:45.494938  409340 command_runner.go:130] >       "repoTags": [
	I0819 18:37:45.494946  409340 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0819 18:37:45.494954  409340 command_runner.go:130] >       ],
	I0819 18:37:45.494961  409340 command_runner.go:130] >       "repoDigests": [
	I0819 18:37:45.494974  409340 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0819 18:37:45.494981  409340 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0819 18:37:45.494990  409340 command_runner.go:130] >       ],
	I0819 18:37:45.494997  409340 command_runner.go:130] >       "size": "95233506",
	I0819 18:37:45.495006  409340 command_runner.go:130] >       "uid": {
	I0819 18:37:45.495014  409340 command_runner.go:130] >         "value": "0"
	I0819 18:37:45.495022  409340 command_runner.go:130] >       },
	I0819 18:37:45.495028  409340 command_runner.go:130] >       "username": "",
	I0819 18:37:45.495037  409340 command_runner.go:130] >       "spec": null,
	I0819 18:37:45.495043  409340 command_runner.go:130] >       "pinned": false
	I0819 18:37:45.495048  409340 command_runner.go:130] >     },
	I0819 18:37:45.495056  409340 command_runner.go:130] >     {
	I0819 18:37:45.495063  409340 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0819 18:37:45.495070  409340 command_runner.go:130] >       "repoTags": [
	I0819 18:37:45.495079  409340 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0819 18:37:45.495088  409340 command_runner.go:130] >       ],
	I0819 18:37:45.495095  409340 command_runner.go:130] >       "repoDigests": [
	I0819 18:37:45.495117  409340 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0819 18:37:45.495132  409340 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0819 18:37:45.495141  409340 command_runner.go:130] >       ],
	I0819 18:37:45.495147  409340 command_runner.go:130] >       "size": "89437512",
	I0819 18:37:45.495153  409340 command_runner.go:130] >       "uid": {
	I0819 18:37:45.495159  409340 command_runner.go:130] >         "value": "0"
	I0819 18:37:45.495165  409340 command_runner.go:130] >       },
	I0819 18:37:45.495172  409340 command_runner.go:130] >       "username": "",
	I0819 18:37:45.495178  409340 command_runner.go:130] >       "spec": null,
	I0819 18:37:45.495185  409340 command_runner.go:130] >       "pinned": false
	I0819 18:37:45.495190  409340 command_runner.go:130] >     },
	I0819 18:37:45.495195  409340 command_runner.go:130] >     {
	I0819 18:37:45.495204  409340 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0819 18:37:45.495218  409340 command_runner.go:130] >       "repoTags": [
	I0819 18:37:45.495226  409340 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0819 18:37:45.495233  409340 command_runner.go:130] >       ],
	I0819 18:37:45.495237  409340 command_runner.go:130] >       "repoDigests": [
	I0819 18:37:45.495251  409340 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0819 18:37:45.495265  409340 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0819 18:37:45.495274  409340 command_runner.go:130] >       ],
	I0819 18:37:45.495285  409340 command_runner.go:130] >       "size": "92728217",
	I0819 18:37:45.495294  409340 command_runner.go:130] >       "uid": null,
	I0819 18:37:45.495300  409340 command_runner.go:130] >       "username": "",
	I0819 18:37:45.495309  409340 command_runner.go:130] >       "spec": null,
	I0819 18:37:45.495315  409340 command_runner.go:130] >       "pinned": false
	I0819 18:37:45.495322  409340 command_runner.go:130] >     },
	I0819 18:37:45.495326  409340 command_runner.go:130] >     {
	I0819 18:37:45.495335  409340 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0819 18:37:45.495345  409340 command_runner.go:130] >       "repoTags": [
	I0819 18:37:45.495354  409340 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0819 18:37:45.495362  409340 command_runner.go:130] >       ],
	I0819 18:37:45.495369  409340 command_runner.go:130] >       "repoDigests": [
	I0819 18:37:45.495383  409340 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0819 18:37:45.495398  409340 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0819 18:37:45.495405  409340 command_runner.go:130] >       ],
	I0819 18:37:45.495410  409340 command_runner.go:130] >       "size": "68420936",
	I0819 18:37:45.495418  409340 command_runner.go:130] >       "uid": {
	I0819 18:37:45.495424  409340 command_runner.go:130] >         "value": "0"
	I0819 18:37:45.495430  409340 command_runner.go:130] >       },
	I0819 18:37:45.495441  409340 command_runner.go:130] >       "username": "",
	I0819 18:37:45.495447  409340 command_runner.go:130] >       "spec": null,
	I0819 18:37:45.495457  409340 command_runner.go:130] >       "pinned": false
	I0819 18:37:45.495463  409340 command_runner.go:130] >     },
	I0819 18:37:45.495470  409340 command_runner.go:130] >     {
	I0819 18:37:45.495480  409340 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0819 18:37:45.495489  409340 command_runner.go:130] >       "repoTags": [
	I0819 18:37:45.495495  409340 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0819 18:37:45.495500  409340 command_runner.go:130] >       ],
	I0819 18:37:45.495506  409340 command_runner.go:130] >       "repoDigests": [
	I0819 18:37:45.495520  409340 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0819 18:37:45.495534  409340 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0819 18:37:45.495543  409340 command_runner.go:130] >       ],
	I0819 18:37:45.495550  409340 command_runner.go:130] >       "size": "742080",
	I0819 18:37:45.495558  409340 command_runner.go:130] >       "uid": {
	I0819 18:37:45.495564  409340 command_runner.go:130] >         "value": "65535"
	I0819 18:37:45.495573  409340 command_runner.go:130] >       },
	I0819 18:37:45.495580  409340 command_runner.go:130] >       "username": "",
	I0819 18:37:45.495587  409340 command_runner.go:130] >       "spec": null,
	I0819 18:37:45.495594  409340 command_runner.go:130] >       "pinned": true
	I0819 18:37:45.495602  409340 command_runner.go:130] >     }
	I0819 18:37:45.495607  409340 command_runner.go:130] >   ]
	I0819 18:37:45.495615  409340 command_runner.go:130] > }
	I0819 18:37:45.495881  409340 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 18:37:45.495898  409340 crio.go:433] Images already preloaded, skipping extraction
	I0819 18:37:45.495978  409340 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 18:37:45.534149  409340 command_runner.go:130] > {
	I0819 18:37:45.534177  409340 command_runner.go:130] >   "images": [
	I0819 18:37:45.534181  409340 command_runner.go:130] >     {
	I0819 18:37:45.534192  409340 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0819 18:37:45.534198  409340 command_runner.go:130] >       "repoTags": [
	I0819 18:37:45.534205  409340 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0819 18:37:45.534210  409340 command_runner.go:130] >       ],
	I0819 18:37:45.534216  409340 command_runner.go:130] >       "repoDigests": [
	I0819 18:37:45.534229  409340 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0819 18:37:45.534240  409340 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0819 18:37:45.534248  409340 command_runner.go:130] >       ],
	I0819 18:37:45.534256  409340 command_runner.go:130] >       "size": "87165492",
	I0819 18:37:45.534264  409340 command_runner.go:130] >       "uid": null,
	I0819 18:37:45.534268  409340 command_runner.go:130] >       "username": "",
	I0819 18:37:45.534279  409340 command_runner.go:130] >       "spec": null,
	I0819 18:37:45.534285  409340 command_runner.go:130] >       "pinned": false
	I0819 18:37:45.534289  409340 command_runner.go:130] >     },
	I0819 18:37:45.534295  409340 command_runner.go:130] >     {
	I0819 18:37:45.534302  409340 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0819 18:37:45.534308  409340 command_runner.go:130] >       "repoTags": [
	I0819 18:37:45.534316  409340 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0819 18:37:45.534324  409340 command_runner.go:130] >       ],
	I0819 18:37:45.534332  409340 command_runner.go:130] >       "repoDigests": [
	I0819 18:37:45.534346  409340 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0819 18:37:45.534359  409340 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0819 18:37:45.534364  409340 command_runner.go:130] >       ],
	I0819 18:37:45.534368  409340 command_runner.go:130] >       "size": "87190579",
	I0819 18:37:45.534374  409340 command_runner.go:130] >       "uid": null,
	I0819 18:37:45.534382  409340 command_runner.go:130] >       "username": "",
	I0819 18:37:45.534388  409340 command_runner.go:130] >       "spec": null,
	I0819 18:37:45.534392  409340 command_runner.go:130] >       "pinned": false
	I0819 18:37:45.534398  409340 command_runner.go:130] >     },
	I0819 18:37:45.534404  409340 command_runner.go:130] >     {
	I0819 18:37:45.534417  409340 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0819 18:37:45.534427  409340 command_runner.go:130] >       "repoTags": [
	I0819 18:37:45.534438  409340 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0819 18:37:45.534453  409340 command_runner.go:130] >       ],
	I0819 18:37:45.534462  409340 command_runner.go:130] >       "repoDigests": [
	I0819 18:37:45.534471  409340 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0819 18:37:45.534481  409340 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0819 18:37:45.534484  409340 command_runner.go:130] >       ],
	I0819 18:37:45.534489  409340 command_runner.go:130] >       "size": "1363676",
	I0819 18:37:45.534496  409340 command_runner.go:130] >       "uid": null,
	I0819 18:37:45.534503  409340 command_runner.go:130] >       "username": "",
	I0819 18:37:45.534514  409340 command_runner.go:130] >       "spec": null,
	I0819 18:37:45.534523  409340 command_runner.go:130] >       "pinned": false
	I0819 18:37:45.534532  409340 command_runner.go:130] >     },
	I0819 18:37:45.534537  409340 command_runner.go:130] >     {
	I0819 18:37:45.534548  409340 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0819 18:37:45.534557  409340 command_runner.go:130] >       "repoTags": [
	I0819 18:37:45.534565  409340 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0819 18:37:45.534571  409340 command_runner.go:130] >       ],
	I0819 18:37:45.534575  409340 command_runner.go:130] >       "repoDigests": [
	I0819 18:37:45.534587  409340 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0819 18:37:45.534608  409340 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0819 18:37:45.534616  409340 command_runner.go:130] >       ],
	I0819 18:37:45.534623  409340 command_runner.go:130] >       "size": "31470524",
	I0819 18:37:45.534632  409340 command_runner.go:130] >       "uid": null,
	I0819 18:37:45.534642  409340 command_runner.go:130] >       "username": "",
	I0819 18:37:45.534648  409340 command_runner.go:130] >       "spec": null,
	I0819 18:37:45.534656  409340 command_runner.go:130] >       "pinned": false
	I0819 18:37:45.534659  409340 command_runner.go:130] >     },
	I0819 18:37:45.534663  409340 command_runner.go:130] >     {
	I0819 18:37:45.534673  409340 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0819 18:37:45.534683  409340 command_runner.go:130] >       "repoTags": [
	I0819 18:37:45.534692  409340 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0819 18:37:45.534701  409340 command_runner.go:130] >       ],
	I0819 18:37:45.534708  409340 command_runner.go:130] >       "repoDigests": [
	I0819 18:37:45.534720  409340 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0819 18:37:45.534734  409340 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0819 18:37:45.534741  409340 command_runner.go:130] >       ],
	I0819 18:37:45.534745  409340 command_runner.go:130] >       "size": "61245718",
	I0819 18:37:45.534761  409340 command_runner.go:130] >       "uid": null,
	I0819 18:37:45.534771  409340 command_runner.go:130] >       "username": "nonroot",
	I0819 18:37:45.534777  409340 command_runner.go:130] >       "spec": null,
	I0819 18:37:45.534787  409340 command_runner.go:130] >       "pinned": false
	I0819 18:37:45.534792  409340 command_runner.go:130] >     },
	I0819 18:37:45.534800  409340 command_runner.go:130] >     {
	I0819 18:37:45.534810  409340 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0819 18:37:45.534824  409340 command_runner.go:130] >       "repoTags": [
	I0819 18:37:45.534830  409340 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0819 18:37:45.534836  409340 command_runner.go:130] >       ],
	I0819 18:37:45.534843  409340 command_runner.go:130] >       "repoDigests": [
	I0819 18:37:45.534857  409340 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0819 18:37:45.534871  409340 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0819 18:37:45.534880  409340 command_runner.go:130] >       ],
	I0819 18:37:45.534888  409340 command_runner.go:130] >       "size": "149009664",
	I0819 18:37:45.534909  409340 command_runner.go:130] >       "uid": {
	I0819 18:37:45.534916  409340 command_runner.go:130] >         "value": "0"
	I0819 18:37:45.534920  409340 command_runner.go:130] >       },
	I0819 18:37:45.534927  409340 command_runner.go:130] >       "username": "",
	I0819 18:37:45.534936  409340 command_runner.go:130] >       "spec": null,
	I0819 18:37:45.534943  409340 command_runner.go:130] >       "pinned": false
	I0819 18:37:45.534951  409340 command_runner.go:130] >     },
	I0819 18:37:45.534962  409340 command_runner.go:130] >     {
	I0819 18:37:45.534975  409340 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0819 18:37:45.534981  409340 command_runner.go:130] >       "repoTags": [
	I0819 18:37:45.534991  409340 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0819 18:37:45.534997  409340 command_runner.go:130] >       ],
	I0819 18:37:45.535005  409340 command_runner.go:130] >       "repoDigests": [
	I0819 18:37:45.535014  409340 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0819 18:37:45.535028  409340 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0819 18:37:45.535036  409340 command_runner.go:130] >       ],
	I0819 18:37:45.535043  409340 command_runner.go:130] >       "size": "95233506",
	I0819 18:37:45.535051  409340 command_runner.go:130] >       "uid": {
	I0819 18:37:45.535057  409340 command_runner.go:130] >         "value": "0"
	I0819 18:37:45.535062  409340 command_runner.go:130] >       },
	I0819 18:37:45.535072  409340 command_runner.go:130] >       "username": "",
	I0819 18:37:45.535087  409340 command_runner.go:130] >       "spec": null,
	I0819 18:37:45.535096  409340 command_runner.go:130] >       "pinned": false
	I0819 18:37:45.535103  409340 command_runner.go:130] >     },
	I0819 18:37:45.535111  409340 command_runner.go:130] >     {
	I0819 18:37:45.535120  409340 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0819 18:37:45.535129  409340 command_runner.go:130] >       "repoTags": [
	I0819 18:37:45.535135  409340 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0819 18:37:45.535139  409340 command_runner.go:130] >       ],
	I0819 18:37:45.535144  409340 command_runner.go:130] >       "repoDigests": [
	I0819 18:37:45.535166  409340 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0819 18:37:45.535176  409340 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0819 18:37:45.535180  409340 command_runner.go:130] >       ],
	I0819 18:37:45.535187  409340 command_runner.go:130] >       "size": "89437512",
	I0819 18:37:45.535191  409340 command_runner.go:130] >       "uid": {
	I0819 18:37:45.535197  409340 command_runner.go:130] >         "value": "0"
	I0819 18:37:45.535201  409340 command_runner.go:130] >       },
	I0819 18:37:45.535205  409340 command_runner.go:130] >       "username": "",
	I0819 18:37:45.535210  409340 command_runner.go:130] >       "spec": null,
	I0819 18:37:45.535217  409340 command_runner.go:130] >       "pinned": false
	I0819 18:37:45.535220  409340 command_runner.go:130] >     },
	I0819 18:37:45.535224  409340 command_runner.go:130] >     {
	I0819 18:37:45.535229  409340 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0819 18:37:45.535236  409340 command_runner.go:130] >       "repoTags": [
	I0819 18:37:45.535241  409340 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0819 18:37:45.535246  409340 command_runner.go:130] >       ],
	I0819 18:37:45.535251  409340 command_runner.go:130] >       "repoDigests": [
	I0819 18:37:45.535263  409340 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0819 18:37:45.535276  409340 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0819 18:37:45.535284  409340 command_runner.go:130] >       ],
	I0819 18:37:45.535290  409340 command_runner.go:130] >       "size": "92728217",
	I0819 18:37:45.535299  409340 command_runner.go:130] >       "uid": null,
	I0819 18:37:45.535305  409340 command_runner.go:130] >       "username": "",
	I0819 18:37:45.535311  409340 command_runner.go:130] >       "spec": null,
	I0819 18:37:45.535316  409340 command_runner.go:130] >       "pinned": false
	I0819 18:37:45.535324  409340 command_runner.go:130] >     },
	I0819 18:37:45.535329  409340 command_runner.go:130] >     {
	I0819 18:37:45.535354  409340 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0819 18:37:45.535360  409340 command_runner.go:130] >       "repoTags": [
	I0819 18:37:45.535368  409340 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0819 18:37:45.535372  409340 command_runner.go:130] >       ],
	I0819 18:37:45.535378  409340 command_runner.go:130] >       "repoDigests": [
	I0819 18:37:45.535392  409340 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0819 18:37:45.535407  409340 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0819 18:37:45.535416  409340 command_runner.go:130] >       ],
	I0819 18:37:45.535423  409340 command_runner.go:130] >       "size": "68420936",
	I0819 18:37:45.535435  409340 command_runner.go:130] >       "uid": {
	I0819 18:37:45.535441  409340 command_runner.go:130] >         "value": "0"
	I0819 18:37:45.535448  409340 command_runner.go:130] >       },
	I0819 18:37:45.535452  409340 command_runner.go:130] >       "username": "",
	I0819 18:37:45.535455  409340 command_runner.go:130] >       "spec": null,
	I0819 18:37:45.535460  409340 command_runner.go:130] >       "pinned": false
	I0819 18:37:45.535465  409340 command_runner.go:130] >     },
	I0819 18:37:45.535469  409340 command_runner.go:130] >     {
	I0819 18:37:45.535475  409340 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0819 18:37:45.535480  409340 command_runner.go:130] >       "repoTags": [
	I0819 18:37:45.535485  409340 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0819 18:37:45.535488  409340 command_runner.go:130] >       ],
	I0819 18:37:45.535493  409340 command_runner.go:130] >       "repoDigests": [
	I0819 18:37:45.535499  409340 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0819 18:37:45.535506  409340 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0819 18:37:45.535512  409340 command_runner.go:130] >       ],
	I0819 18:37:45.535516  409340 command_runner.go:130] >       "size": "742080",
	I0819 18:37:45.535520  409340 command_runner.go:130] >       "uid": {
	I0819 18:37:45.535524  409340 command_runner.go:130] >         "value": "65535"
	I0819 18:37:45.535531  409340 command_runner.go:130] >       },
	I0819 18:37:45.535534  409340 command_runner.go:130] >       "username": "",
	I0819 18:37:45.535539  409340 command_runner.go:130] >       "spec": null,
	I0819 18:37:45.535545  409340 command_runner.go:130] >       "pinned": true
	I0819 18:37:45.535548  409340 command_runner.go:130] >     }
	I0819 18:37:45.535553  409340 command_runner.go:130] >   ]
	I0819 18:37:45.535556  409340 command_runner.go:130] > }
	I0819 18:37:45.535743  409340 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 18:37:45.535762  409340 cache_images.go:84] Images are preloaded, skipping loading
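	Both runs of "sudo crictl images --output json" above return the same image list, which is how the preload check concludes that extraction and loading can be skipped. A minimal sketch (not minikube's implementation) of decoding that JSON and confirming a few of the v1.31.0 images it lists; the struct fields mirror the JSON keys visible above, and the required-image subset is illustrative only:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Field names mirror the JSON keys visible in the crictl output above.
	type crictlImage struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
		Pinned      bool     `json:"pinned"`
	}

	type crictlImageList struct {
		Images []crictlImage `json:"images"`
	}

	func main() {
		// Same command the log shows being run on the node.
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		var list crictlImageList
		if err := json.Unmarshal(out, &list); err != nil {
			fmt.Println("decode failed:", err)
			return
		}
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		// Illustrative subset of the images listed above; not the full preload set.
		for _, want := range []string{
			"registry.k8s.io/kube-apiserver:v1.31.0",
			"registry.k8s.io/etcd:3.5.15-0",
			"registry.k8s.io/pause:3.10",
		} {
			if !have[want] {
				fmt.Println("missing preloaded image:", want)
			}
		}
	}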
	I0819 18:37:45.535771  409340 kubeadm.go:934] updating node { 192.168.39.168 8443 v1.31.0 crio true true} ...
	I0819 18:37:45.535891  409340 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-528433 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.168
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-528433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
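	The kubelet drop-in printed above is parameterized by the node name, node IP, and Kubernetes version from the cluster config. A minimal text/template sketch (illustrative; the unit text is copied from the log output above, the surrounding Go program is not minikube's) that renders the same drop-in from those three values:

	package main

	import (
		"os"
		"text/template"
	)

	// Unit text copied from the kubeadm.go:946 output above; only the three
	// node-specific values are templated.
	const kubeletUnit = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
		// Values for this node, as recorded in the log above.
		_ = tmpl.Execute(os.Stdout, struct {
			KubernetesVersion string
			NodeName          string
			NodeIP            string
		}{"v1.31.0", "multinode-528433", "192.168.39.168"})
	}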
	I0819 18:37:45.535961  409340 ssh_runner.go:195] Run: crio config
	I0819 18:37:45.585162  409340 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0819 18:37:45.585196  409340 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0819 18:37:45.585207  409340 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0819 18:37:45.585212  409340 command_runner.go:130] > #
	I0819 18:37:45.585223  409340 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0819 18:37:45.585232  409340 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0819 18:37:45.585239  409340 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0819 18:37:45.585247  409340 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0819 18:37:45.585251  409340 command_runner.go:130] > # reload'.
	I0819 18:37:45.585257  409340 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0819 18:37:45.585263  409340 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0819 18:37:45.585273  409340 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0819 18:37:45.585283  409340 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0819 18:37:45.585291  409340 command_runner.go:130] > [crio]
	I0819 18:37:45.585300  409340 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0819 18:37:45.585308  409340 command_runner.go:130] > # containers images, in this directory.
	I0819 18:37:45.585446  409340 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0819 18:37:45.585469  409340 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0819 18:37:45.585477  409340 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0819 18:37:45.585487  409340 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0819 18:37:45.585494  409340 command_runner.go:130] > # imagestore = ""
	I0819 18:37:45.585508  409340 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0819 18:37:45.585518  409340 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0819 18:37:45.585529  409340 command_runner.go:130] > storage_driver = "overlay"
	I0819 18:37:45.585540  409340 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0819 18:37:45.585552  409340 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0819 18:37:45.585565  409340 command_runner.go:130] > storage_option = [
	I0819 18:37:45.585622  409340 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0819 18:37:45.585638  409340 command_runner.go:130] > ]
	I0819 18:37:45.585645  409340 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0819 18:37:45.585652  409340 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0819 18:37:45.585658  409340 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0819 18:37:45.585664  409340 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0819 18:37:45.585672  409340 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0819 18:37:45.585677  409340 command_runner.go:130] > # always happen on a node reboot
	I0819 18:37:45.585684  409340 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0819 18:37:45.585696  409340 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0819 18:37:45.585708  409340 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0819 18:37:45.585718  409340 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0819 18:37:45.585727  409340 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0819 18:37:45.585742  409340 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0819 18:37:45.585758  409340 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0819 18:37:45.585769  409340 command_runner.go:130] > # internal_wipe = true
	I0819 18:37:45.585783  409340 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0819 18:37:45.585791  409340 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0819 18:37:45.585863  409340 command_runner.go:130] > # internal_repair = false
	I0819 18:37:45.585879  409340 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0819 18:37:45.585888  409340 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0819 18:37:45.585898  409340 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0819 18:37:45.585910  409340 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0819 18:37:45.585922  409340 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0819 18:37:45.585931  409340 command_runner.go:130] > [crio.api]
	I0819 18:37:45.585942  409340 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0819 18:37:45.585953  409340 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0819 18:37:45.585965  409340 command_runner.go:130] > # IP address on which the stream server will listen.
	I0819 18:37:45.585988  409340 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0819 18:37:45.586005  409340 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0819 18:37:45.586018  409340 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0819 18:37:45.586037  409340 command_runner.go:130] > # stream_port = "0"
	I0819 18:37:45.586050  409340 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0819 18:37:45.586060  409340 command_runner.go:130] > # stream_enable_tls = false
	I0819 18:37:45.586072  409340 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0819 18:37:45.586083  409340 command_runner.go:130] > # stream_idle_timeout = ""
	I0819 18:37:45.586093  409340 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0819 18:37:45.586106  409340 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0819 18:37:45.586115  409340 command_runner.go:130] > # minutes.
	I0819 18:37:45.586124  409340 command_runner.go:130] > # stream_tls_cert = ""
	I0819 18:37:45.586138  409340 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0819 18:37:45.586152  409340 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0819 18:37:45.586162  409340 command_runner.go:130] > # stream_tls_key = ""
	I0819 18:37:45.586173  409340 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0819 18:37:45.586185  409340 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0819 18:37:45.586212  409340 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0819 18:37:45.586235  409340 command_runner.go:130] > # stream_tls_ca = ""
	I0819 18:37:45.586250  409340 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0819 18:37:45.586261  409340 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0819 18:37:45.586273  409340 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0819 18:37:45.586284  409340 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0819 18:37:45.586297  409340 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0819 18:37:45.586310  409340 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0819 18:37:45.586320  409340 command_runner.go:130] > [crio.runtime]
	I0819 18:37:45.586331  409340 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0819 18:37:45.586343  409340 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0819 18:37:45.586352  409340 command_runner.go:130] > # "nofile=1024:2048"
	I0819 18:37:45.586362  409340 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0819 18:37:45.586371  409340 command_runner.go:130] > # default_ulimits = [
	I0819 18:37:45.586378  409340 command_runner.go:130] > # ]
	I0819 18:37:45.586390  409340 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0819 18:37:45.586400  409340 command_runner.go:130] > # no_pivot = false
	I0819 18:37:45.586412  409340 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0819 18:37:45.586424  409340 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0819 18:37:45.586435  409340 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0819 18:37:45.586446  409340 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0819 18:37:45.586457  409340 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0819 18:37:45.586478  409340 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0819 18:37:45.586489  409340 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0819 18:37:45.586498  409340 command_runner.go:130] > # Cgroup setting for conmon
	I0819 18:37:45.586511  409340 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0819 18:37:45.586520  409340 command_runner.go:130] > conmon_cgroup = "pod"
	I0819 18:37:45.586533  409340 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0819 18:37:45.586544  409340 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0819 18:37:45.586555  409340 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0819 18:37:45.586567  409340 command_runner.go:130] > conmon_env = [
	I0819 18:37:45.586576  409340 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0819 18:37:45.586581  409340 command_runner.go:130] > ]
	I0819 18:37:45.586589  409340 command_runner.go:130] > # Additional environment variables to set for all the
	I0819 18:37:45.586597  409340 command_runner.go:130] > # containers. These are overridden if set in the
	I0819 18:37:45.586605  409340 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0819 18:37:45.586612  409340 command_runner.go:130] > # default_env = [
	I0819 18:37:45.586617  409340 command_runner.go:130] > # ]
	I0819 18:37:45.586626  409340 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0819 18:37:45.586637  409340 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0819 18:37:45.586642  409340 command_runner.go:130] > # selinux = false
	I0819 18:37:45.586649  409340 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0819 18:37:45.586654  409340 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0819 18:37:45.586660  409340 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0819 18:37:45.586664  409340 command_runner.go:130] > # seccomp_profile = ""
	I0819 18:37:45.586669  409340 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0819 18:37:45.586674  409340 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0819 18:37:45.586683  409340 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0819 18:37:45.586687  409340 command_runner.go:130] > # which might increase security.
	I0819 18:37:45.586695  409340 command_runner.go:130] > # This option is currently deprecated,
	I0819 18:37:45.586700  409340 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0819 18:37:45.586704  409340 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0819 18:37:45.586715  409340 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0819 18:37:45.586725  409340 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0819 18:37:45.586736  409340 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0819 18:37:45.586746  409340 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0819 18:37:45.586753  409340 command_runner.go:130] > # This option supports live configuration reload.
	I0819 18:37:45.586764  409340 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0819 18:37:45.586783  409340 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0819 18:37:45.586793  409340 command_runner.go:130] > # the cgroup blockio controller.
	I0819 18:37:45.586800  409340 command_runner.go:130] > # blockio_config_file = ""
	I0819 18:37:45.586813  409340 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0819 18:37:45.586821  409340 command_runner.go:130] > # blockio parameters.
	I0819 18:37:45.586828  409340 command_runner.go:130] > # blockio_reload = false
	I0819 18:37:45.586841  409340 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0819 18:37:45.586847  409340 command_runner.go:130] > # irqbalance daemon.
	I0819 18:37:45.586856  409340 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0819 18:37:45.586868  409340 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0819 18:37:45.586880  409340 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0819 18:37:45.586894  409340 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0819 18:37:45.586908  409340 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0819 18:37:45.586920  409340 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0819 18:37:45.586930  409340 command_runner.go:130] > # This option supports live configuration reload.
	I0819 18:37:45.586937  409340 command_runner.go:130] > # rdt_config_file = ""
	I0819 18:37:45.586947  409340 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0819 18:37:45.586960  409340 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0819 18:37:45.586999  409340 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0819 18:37:45.587009  409340 command_runner.go:130] > # separate_pull_cgroup = ""
	I0819 18:37:45.587021  409340 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0819 18:37:45.587034  409340 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0819 18:37:45.587043  409340 command_runner.go:130] > # will be added.
	I0819 18:37:45.587049  409340 command_runner.go:130] > # default_capabilities = [
	I0819 18:37:45.587058  409340 command_runner.go:130] > # 	"CHOWN",
	I0819 18:37:45.587065  409340 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0819 18:37:45.587074  409340 command_runner.go:130] > # 	"FSETID",
	I0819 18:37:45.587082  409340 command_runner.go:130] > # 	"FOWNER",
	I0819 18:37:45.587090  409340 command_runner.go:130] > # 	"SETGID",
	I0819 18:37:45.587097  409340 command_runner.go:130] > # 	"SETUID",
	I0819 18:37:45.587105  409340 command_runner.go:130] > # 	"SETPCAP",
	I0819 18:37:45.587113  409340 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0819 18:37:45.587125  409340 command_runner.go:130] > # 	"KILL",
	I0819 18:37:45.587134  409340 command_runner.go:130] > # ]
	I0819 18:37:45.587148  409340 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0819 18:37:45.587162  409340 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0819 18:37:45.587178  409340 command_runner.go:130] > # add_inheritable_capabilities = false
	I0819 18:37:45.587193  409340 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0819 18:37:45.587207  409340 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0819 18:37:45.587222  409340 command_runner.go:130] > default_sysctls = [
	I0819 18:37:45.587233  409340 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0819 18:37:45.587239  409340 command_runner.go:130] > ]
	I0819 18:37:45.587248  409340 command_runner.go:130] > # List of devices on the host that a
	I0819 18:37:45.587262  409340 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0819 18:37:45.587272  409340 command_runner.go:130] > # allowed_devices = [
	I0819 18:37:45.587281  409340 command_runner.go:130] > # 	"/dev/fuse",
	I0819 18:37:45.587286  409340 command_runner.go:130] > # ]
	I0819 18:37:45.587295  409340 command_runner.go:130] > # List of additional devices, specified as
	I0819 18:37:45.587310  409340 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0819 18:37:45.587322  409340 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0819 18:37:45.587335  409340 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0819 18:37:45.587344  409340 command_runner.go:130] > # additional_devices = [
	I0819 18:37:45.587351  409340 command_runner.go:130] > # ]
	I0819 18:37:45.587362  409340 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0819 18:37:45.587369  409340 command_runner.go:130] > # cdi_spec_dirs = [
	I0819 18:37:45.587377  409340 command_runner.go:130] > # 	"/etc/cdi",
	I0819 18:37:45.587386  409340 command_runner.go:130] > # 	"/var/run/cdi",
	I0819 18:37:45.587392  409340 command_runner.go:130] > # ]
	I0819 18:37:45.587406  409340 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0819 18:37:45.587420  409340 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0819 18:37:45.587429  409340 command_runner.go:130] > # Defaults to false.
	I0819 18:37:45.587441  409340 command_runner.go:130] > # device_ownership_from_security_context = false
	I0819 18:37:45.587456  409340 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0819 18:37:45.587467  409340 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0819 18:37:45.587474  409340 command_runner.go:130] > # hooks_dir = [
	I0819 18:37:45.587484  409340 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0819 18:37:45.587490  409340 command_runner.go:130] > # ]
	I0819 18:37:45.587505  409340 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0819 18:37:45.587515  409340 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0819 18:37:45.587523  409340 command_runner.go:130] > # its default mounts from the following two files:
	I0819 18:37:45.587530  409340 command_runner.go:130] > #
	I0819 18:37:45.587540  409340 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0819 18:37:45.587559  409340 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0819 18:37:45.587570  409340 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0819 18:37:45.587577  409340 command_runner.go:130] > #
	I0819 18:37:45.587586  409340 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0819 18:37:45.587600  409340 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0819 18:37:45.587613  409340 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0819 18:37:45.587624  409340 command_runner.go:130] > #      only add mounts it finds in this file.
	I0819 18:37:45.587631  409340 command_runner.go:130] > #
	I0819 18:37:45.587638  409340 command_runner.go:130] > # default_mounts_file = ""
	I0819 18:37:45.587649  409340 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0819 18:37:45.587659  409340 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0819 18:37:45.587666  409340 command_runner.go:130] > pids_limit = 1024
	I0819 18:37:45.587689  409340 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0819 18:37:45.587702  409340 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0819 18:37:45.587711  409340 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0819 18:37:45.587725  409340 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0819 18:37:45.587735  409340 command_runner.go:130] > # log_size_max = -1
	I0819 18:37:45.587746  409340 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0819 18:37:45.587756  409340 command_runner.go:130] > # log_to_journald = false
	I0819 18:37:45.587767  409340 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0819 18:37:45.587778  409340 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0819 18:37:45.587789  409340 command_runner.go:130] > # Path to directory for container attach sockets.
	I0819 18:37:45.587800  409340 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0819 18:37:45.587808  409340 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0819 18:37:45.587818  409340 command_runner.go:130] > # bind_mount_prefix = ""
	I0819 18:37:45.587826  409340 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0819 18:37:45.587836  409340 command_runner.go:130] > # read_only = false
	I0819 18:37:45.587847  409340 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0819 18:37:45.587860  409340 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0819 18:37:45.587870  409340 command_runner.go:130] > # live configuration reload.
	I0819 18:37:45.587877  409340 command_runner.go:130] > # log_level = "info"
	I0819 18:37:45.587888  409340 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0819 18:37:45.587898  409340 command_runner.go:130] > # This option supports live configuration reload.
	I0819 18:37:45.587908  409340 command_runner.go:130] > # log_filter = ""
	I0819 18:37:45.587917  409340 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0819 18:37:45.587923  409340 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0819 18:37:45.587939  409340 command_runner.go:130] > # separated by comma.
	I0819 18:37:45.587948  409340 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 18:37:45.587953  409340 command_runner.go:130] > # uid_mappings = ""
	I0819 18:37:45.587959  409340 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0819 18:37:45.587965  409340 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0819 18:37:45.587972  409340 command_runner.go:130] > # separated by comma.
	I0819 18:37:45.587985  409340 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 18:37:45.587993  409340 command_runner.go:130] > # gid_mappings = ""
	I0819 18:37:45.588002  409340 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0819 18:37:45.588013  409340 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0819 18:37:45.588025  409340 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0819 18:37:45.588040  409340 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 18:37:45.588051  409340 command_runner.go:130] > # minimum_mappable_uid = -1
	I0819 18:37:45.588060  409340 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0819 18:37:45.588072  409340 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0819 18:37:45.588081  409340 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0819 18:37:45.588097  409340 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 18:37:45.588107  409340 command_runner.go:130] > # minimum_mappable_gid = -1
	I0819 18:37:45.588118  409340 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0819 18:37:45.588132  409340 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0819 18:37:45.588148  409340 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0819 18:37:45.588157  409340 command_runner.go:130] > # ctr_stop_timeout = 30
	I0819 18:37:45.588166  409340 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0819 18:37:45.588178  409340 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0819 18:37:45.588188  409340 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0819 18:37:45.588199  409340 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0819 18:37:45.588209  409340 command_runner.go:130] > drop_infra_ctr = false
	I0819 18:37:45.588224  409340 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0819 18:37:45.588237  409340 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0819 18:37:45.588249  409340 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0819 18:37:45.588259  409340 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0819 18:37:45.588270  409340 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0819 18:37:45.588284  409340 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0819 18:37:45.588296  409340 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0819 18:37:45.588306  409340 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0819 18:37:45.588312  409340 command_runner.go:130] > # shared_cpuset = ""
	I0819 18:37:45.588334  409340 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0819 18:37:45.588345  409340 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0819 18:37:45.588352  409340 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0819 18:37:45.588367  409340 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0819 18:37:45.588380  409340 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0819 18:37:45.588392  409340 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0819 18:37:45.588405  409340 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0819 18:37:45.588417  409340 command_runner.go:130] > # enable_criu_support = false
	I0819 18:37:45.588426  409340 command_runner.go:130] > # Enable/disable the generation of the container,
	I0819 18:37:45.588438  409340 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0819 18:37:45.588446  409340 command_runner.go:130] > # enable_pod_events = false
	I0819 18:37:45.588459  409340 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0819 18:37:45.588478  409340 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0819 18:37:45.588488  409340 command_runner.go:130] > # default_runtime = "runc"
	I0819 18:37:45.588497  409340 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0819 18:37:45.588511  409340 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0819 18:37:45.588527  409340 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0819 18:37:45.588538  409340 command_runner.go:130] > # creation as a file is not desired either.
	I0819 18:37:45.588550  409340 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0819 18:37:45.588562  409340 command_runner.go:130] > # the hostname is being managed dynamically.
	I0819 18:37:45.588572  409340 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0819 18:37:45.588577  409340 command_runner.go:130] > # ]
	I0819 18:37:45.588590  409340 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0819 18:37:45.588604  409340 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0819 18:37:45.588617  409340 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0819 18:37:45.588628  409340 command_runner.go:130] > # Each entry in the table should follow the format:
	I0819 18:37:45.588633  409340 command_runner.go:130] > #
	I0819 18:37:45.588643  409340 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0819 18:37:45.588651  409340 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0819 18:37:45.588750  409340 command_runner.go:130] > # runtime_type = "oci"
	I0819 18:37:45.588765  409340 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0819 18:37:45.588772  409340 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0819 18:37:45.588782  409340 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0819 18:37:45.588790  409340 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0819 18:37:45.588800  409340 command_runner.go:130] > # monitor_env = []
	I0819 18:37:45.588814  409340 command_runner.go:130] > # privileged_without_host_devices = false
	I0819 18:37:45.588825  409340 command_runner.go:130] > # allowed_annotations = []
	I0819 18:37:45.588835  409340 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0819 18:37:45.588844  409340 command_runner.go:130] > # Where:
	I0819 18:37:45.588853  409340 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0819 18:37:45.588865  409340 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0819 18:37:45.588875  409340 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0819 18:37:45.588889  409340 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0819 18:37:45.588898  409340 command_runner.go:130] > #   in $PATH.
	I0819 18:37:45.588907  409340 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0819 18:37:45.588918  409340 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0819 18:37:45.588929  409340 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0819 18:37:45.588939  409340 command_runner.go:130] > #   state.
	I0819 18:37:45.588949  409340 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0819 18:37:45.588961  409340 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0819 18:37:45.588971  409340 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0819 18:37:45.588983  409340 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0819 18:37:45.588995  409340 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0819 18:37:45.589009  409340 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0819 18:37:45.589020  409340 command_runner.go:130] > #   The currently recognized values are:
	I0819 18:37:45.589030  409340 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0819 18:37:45.589045  409340 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0819 18:37:45.589056  409340 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0819 18:37:45.589063  409340 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0819 18:37:45.589072  409340 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0819 18:37:45.589078  409340 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0819 18:37:45.589086  409340 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0819 18:37:45.589093  409340 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0819 18:37:45.589100  409340 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0819 18:37:45.589109  409340 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0819 18:37:45.589119  409340 command_runner.go:130] > #   deprecated option "conmon".
	I0819 18:37:45.589132  409340 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0819 18:37:45.589143  409340 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0819 18:37:45.589156  409340 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0819 18:37:45.589166  409340 command_runner.go:130] > #   should be moved to the container's cgroup
	I0819 18:37:45.589178  409340 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0819 18:37:45.589197  409340 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0819 18:37:45.589211  409340 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0819 18:37:45.589225  409340 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0819 18:37:45.589233  409340 command_runner.go:130] > #
	I0819 18:37:45.589241  409340 command_runner.go:130] > # Using the seccomp notifier feature:
	I0819 18:37:45.589248  409340 command_runner.go:130] > #
	I0819 18:37:45.589258  409340 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0819 18:37:45.589272  409340 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0819 18:37:45.589280  409340 command_runner.go:130] > #
	I0819 18:37:45.589290  409340 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0819 18:37:45.589303  409340 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0819 18:37:45.589309  409340 command_runner.go:130] > #
	I0819 18:37:45.589315  409340 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0819 18:37:45.589321  409340 command_runner.go:130] > # feature.
	I0819 18:37:45.589325  409340 command_runner.go:130] > #
	I0819 18:37:45.589331  409340 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I0819 18:37:45.589339  409340 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0819 18:37:45.589347  409340 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0819 18:37:45.589358  409340 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0819 18:37:45.589370  409340 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0819 18:37:45.589379  409340 command_runner.go:130] > #
	I0819 18:37:45.589389  409340 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0819 18:37:45.589402  409340 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0819 18:37:45.589411  409340 command_runner.go:130] > #
	I0819 18:37:45.589420  409340 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0819 18:37:45.589431  409340 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0819 18:37:45.589436  409340 command_runner.go:130] > #
	I0819 18:37:45.589447  409340 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0819 18:37:45.589458  409340 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0819 18:37:45.589467  409340 command_runner.go:130] > # limitation.
	I0819 18:37:45.589475  409340 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0819 18:37:45.589485  409340 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0819 18:37:45.589494  409340 command_runner.go:130] > runtime_type = "oci"
	I0819 18:37:45.589503  409340 command_runner.go:130] > runtime_root = "/run/runc"
	I0819 18:37:45.589525  409340 command_runner.go:130] > runtime_config_path = ""
	I0819 18:37:45.589536  409340 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0819 18:37:45.589552  409340 command_runner.go:130] > monitor_cgroup = "pod"
	I0819 18:37:45.589564  409340 command_runner.go:130] > monitor_exec_cgroup = ""
	I0819 18:37:45.589570  409340 command_runner.go:130] > monitor_env = [
	I0819 18:37:45.589579  409340 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0819 18:37:45.589587  409340 command_runner.go:130] > ]
	I0819 18:37:45.589595  409340 command_runner.go:130] > privileged_without_host_devices = false
	I0819 18:37:45.589609  409340 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0819 18:37:45.589620  409340 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0819 18:37:45.589633  409340 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0819 18:37:45.589647  409340 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0819 18:37:45.589656  409340 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0819 18:37:45.589668  409340 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0819 18:37:45.589683  409340 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0819 18:37:45.589699  409340 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0819 18:37:45.589711  409340 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0819 18:37:45.589725  409340 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0819 18:37:45.589731  409340 command_runner.go:130] > # Example:
	I0819 18:37:45.589738  409340 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0819 18:37:45.589745  409340 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0819 18:37:45.589752  409340 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0819 18:37:45.589761  409340 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0819 18:37:45.589767  409340 command_runner.go:130] > # cpuset = 0
	I0819 18:37:45.589774  409340 command_runner.go:130] > # cpushares = "0-1"
	I0819 18:37:45.589779  409340 command_runner.go:130] > # Where:
	I0819 18:37:45.589787  409340 command_runner.go:130] > # The workload name is workload-type.
	I0819 18:37:45.589798  409340 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0819 18:37:45.589807  409340 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0819 18:37:45.589816  409340 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0819 18:37:45.589828  409340 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0819 18:37:45.589838  409340 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0819 18:37:45.589849  409340 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0819 18:37:45.589859  409340 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0819 18:37:45.589867  409340 command_runner.go:130] > # Default value is set to true
	I0819 18:37:45.589872  409340 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0819 18:37:45.589880  409340 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0819 18:37:45.589885  409340 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0819 18:37:45.589896  409340 command_runner.go:130] > # Default value is set to 'false'
	I0819 18:37:45.589902  409340 command_runner.go:130] > # disable_hostport_mapping = false
	I0819 18:37:45.589908  409340 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0819 18:37:45.589913  409340 command_runner.go:130] > #
	I0819 18:37:45.589919  409340 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0819 18:37:45.589930  409340 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0819 18:37:45.589938  409340 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0819 18:37:45.589944  409340 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0819 18:37:45.589952  409340 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0819 18:37:45.589956  409340 command_runner.go:130] > [crio.image]
	I0819 18:37:45.589962  409340 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0819 18:37:45.589966  409340 command_runner.go:130] > # default_transport = "docker://"
	I0819 18:37:45.589974  409340 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0819 18:37:45.589983  409340 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0819 18:37:45.589987  409340 command_runner.go:130] > # global_auth_file = ""
	I0819 18:37:45.589993  409340 command_runner.go:130] > # The image used to instantiate infra containers.
	I0819 18:37:45.590000  409340 command_runner.go:130] > # This option supports live configuration reload.
	I0819 18:37:45.590007  409340 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0819 18:37:45.590015  409340 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0819 18:37:45.590021  409340 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0819 18:37:45.590028  409340 command_runner.go:130] > # This option supports live configuration reload.
	I0819 18:37:45.590032  409340 command_runner.go:130] > # pause_image_auth_file = ""
	I0819 18:37:45.590039  409340 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0819 18:37:45.590045  409340 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0819 18:37:45.590053  409340 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0819 18:37:45.590059  409340 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0819 18:37:45.590065  409340 command_runner.go:130] > # pause_command = "/pause"
	I0819 18:37:45.590071  409340 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0819 18:37:45.590078  409340 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0819 18:37:45.590084  409340 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0819 18:37:45.590090  409340 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0819 18:37:45.590096  409340 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0819 18:37:45.590101  409340 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0819 18:37:45.590107  409340 command_runner.go:130] > # pinned_images = [
	I0819 18:37:45.590110  409340 command_runner.go:130] > # ]
	I0819 18:37:45.590116  409340 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0819 18:37:45.590131  409340 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0819 18:37:45.590139  409340 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0819 18:37:45.590145  409340 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0819 18:37:45.590152  409340 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0819 18:37:45.590156  409340 command_runner.go:130] > # signature_policy = ""
	I0819 18:37:45.590160  409340 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0819 18:37:45.590169  409340 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0819 18:37:45.590175  409340 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0819 18:37:45.590183  409340 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0819 18:37:45.590189  409340 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0819 18:37:45.590194  409340 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0819 18:37:45.590200  409340 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0819 18:37:45.590208  409340 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0819 18:37:45.590213  409340 command_runner.go:130] > # changing them here.
	I0819 18:37:45.590221  409340 command_runner.go:130] > # insecure_registries = [
	I0819 18:37:45.590225  409340 command_runner.go:130] > # ]
	I0819 18:37:45.590231  409340 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0819 18:37:45.590238  409340 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0819 18:37:45.590242  409340 command_runner.go:130] > # image_volumes = "mkdir"
	I0819 18:37:45.590248  409340 command_runner.go:130] > # Temporary directory to use for storing big files
	I0819 18:37:45.590252  409340 command_runner.go:130] > # big_files_temporary_dir = ""
	I0819 18:37:45.590260  409340 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0819 18:37:45.590264  409340 command_runner.go:130] > # CNI plugins.
	I0819 18:37:45.590270  409340 command_runner.go:130] > [crio.network]
	I0819 18:37:45.590275  409340 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0819 18:37:45.590281  409340 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0819 18:37:45.590287  409340 command_runner.go:130] > # cni_default_network = ""
	I0819 18:37:45.590292  409340 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0819 18:37:45.590298  409340 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0819 18:37:45.590304  409340 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0819 18:37:45.590310  409340 command_runner.go:130] > # plugin_dirs = [
	I0819 18:37:45.590313  409340 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0819 18:37:45.590316  409340 command_runner.go:130] > # ]
	I0819 18:37:45.590322  409340 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0819 18:37:45.590327  409340 command_runner.go:130] > [crio.metrics]
	I0819 18:37:45.590332  409340 command_runner.go:130] > # Globally enable or disable metrics support.
	I0819 18:37:45.590343  409340 command_runner.go:130] > enable_metrics = true
	I0819 18:37:45.590350  409340 command_runner.go:130] > # Specify enabled metrics collectors.
	I0819 18:37:45.590355  409340 command_runner.go:130] > # Per default all metrics are enabled.
	I0819 18:37:45.590361  409340 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0819 18:37:45.590367  409340 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0819 18:37:45.590377  409340 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0819 18:37:45.590381  409340 command_runner.go:130] > # metrics_collectors = [
	I0819 18:37:45.590385  409340 command_runner.go:130] > # 	"operations",
	I0819 18:37:45.590389  409340 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0819 18:37:45.590396  409340 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0819 18:37:45.590400  409340 command_runner.go:130] > # 	"operations_errors",
	I0819 18:37:45.590406  409340 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0819 18:37:45.590410  409340 command_runner.go:130] > # 	"image_pulls_by_name",
	I0819 18:37:45.590417  409340 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0819 18:37:45.590423  409340 command_runner.go:130] > # 	"image_pulls_failures",
	I0819 18:37:45.590427  409340 command_runner.go:130] > # 	"image_pulls_successes",
	I0819 18:37:45.590432  409340 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0819 18:37:45.590438  409340 command_runner.go:130] > # 	"image_layer_reuse",
	I0819 18:37:45.590442  409340 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0819 18:37:45.590447  409340 command_runner.go:130] > # 	"containers_oom_total",
	I0819 18:37:45.590451  409340 command_runner.go:130] > # 	"containers_oom",
	I0819 18:37:45.590455  409340 command_runner.go:130] > # 	"processes_defunct",
	I0819 18:37:45.590459  409340 command_runner.go:130] > # 	"operations_total",
	I0819 18:37:45.590463  409340 command_runner.go:130] > # 	"operations_latency_seconds",
	I0819 18:37:45.590469  409340 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0819 18:37:45.590474  409340 command_runner.go:130] > # 	"operations_errors_total",
	I0819 18:37:45.590482  409340 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0819 18:37:45.590487  409340 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0819 18:37:45.590493  409340 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0819 18:37:45.590497  409340 command_runner.go:130] > # 	"image_pulls_success_total",
	I0819 18:37:45.590503  409340 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0819 18:37:45.590508  409340 command_runner.go:130] > # 	"containers_oom_count_total",
	I0819 18:37:45.590515  409340 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0819 18:37:45.590520  409340 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0819 18:37:45.590525  409340 command_runner.go:130] > # ]
	I0819 18:37:45.590530  409340 command_runner.go:130] > # The port on which the metrics server will listen.
	I0819 18:37:45.590540  409340 command_runner.go:130] > # metrics_port = 9090
	I0819 18:37:45.590548  409340 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0819 18:37:45.590552  409340 command_runner.go:130] > # metrics_socket = ""
	I0819 18:37:45.590556  409340 command_runner.go:130] > # The certificate for the secure metrics server.
	I0819 18:37:45.590564  409340 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0819 18:37:45.590570  409340 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0819 18:37:45.590575  409340 command_runner.go:130] > # certificate on any modification event.
	I0819 18:37:45.590578  409340 command_runner.go:130] > # metrics_cert = ""
	I0819 18:37:45.590583  409340 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0819 18:37:45.590590  409340 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0819 18:37:45.590594  409340 command_runner.go:130] > # metrics_key = ""
	I0819 18:37:45.590601  409340 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0819 18:37:45.590605  409340 command_runner.go:130] > [crio.tracing]
	I0819 18:37:45.590613  409340 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0819 18:37:45.590617  409340 command_runner.go:130] > # enable_tracing = false
	I0819 18:37:45.590625  409340 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0819 18:37:45.590629  409340 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0819 18:37:45.590638  409340 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0819 18:37:45.590642  409340 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0819 18:37:45.590648  409340 command_runner.go:130] > # CRI-O NRI configuration.
	I0819 18:37:45.590652  409340 command_runner.go:130] > [crio.nri]
	I0819 18:37:45.590656  409340 command_runner.go:130] > # Globally enable or disable NRI.
	I0819 18:37:45.590662  409340 command_runner.go:130] > # enable_nri = false
	I0819 18:37:45.590665  409340 command_runner.go:130] > # NRI socket to listen on.
	I0819 18:37:45.590670  409340 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0819 18:37:45.590676  409340 command_runner.go:130] > # NRI plugin directory to use.
	I0819 18:37:45.590680  409340 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0819 18:37:45.590685  409340 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0819 18:37:45.590690  409340 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0819 18:37:45.590695  409340 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0819 18:37:45.590701  409340 command_runner.go:130] > # nri_disable_connections = false
	I0819 18:37:45.590706  409340 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0819 18:37:45.590712  409340 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0819 18:37:45.590716  409340 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0819 18:37:45.590722  409340 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0819 18:37:45.590728  409340 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0819 18:37:45.590739  409340 command_runner.go:130] > [crio.stats]
	I0819 18:37:45.590747  409340 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0819 18:37:45.590753  409340 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0819 18:37:45.590757  409340 command_runner.go:130] > # stats_collection_period = 0
	I0819 18:37:45.591121  409340 command_runner.go:130] ! time="2024-08-19 18:37:45.544084769Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0819 18:37:45.591149  409340 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
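The dump above is the full CRI-O configuration reported at startup; most of it is commented-out defaults, and the values set explicitly include the conmon path and cgroup, cgroup_manager = "cgroupfs", pids_limit = 1024, the unprivileged-port sysctl, the pause image, and the gRPC message sizes. A minimal sketch for confirming those effective values on the node itself, assuming shell access to the VM and that this CRI-O build provides the `crio config` subcommand (which prints the configuration CRI-O would use):

  # Print the merged configuration and pick out the values overridden above.
  sudo crio config | grep -E 'cgroup_manager|pause_image|pids_limit|conmon |grpc_max'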
	I0819 18:37:45.591408  409340 cni.go:84] Creating CNI manager for ""
	I0819 18:37:45.591428  409340 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0819 18:37:45.591438  409340 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 18:37:45.591462  409340 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.168 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-528433 NodeName:multinode-528433 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.168"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.168 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 18:37:45.591632  409340 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.168
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-528433"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.168
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.168"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 18:37:45.591725  409340 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 18:37:45.602325  409340 command_runner.go:130] > kubeadm
	I0819 18:37:45.602341  409340 command_runner.go:130] > kubectl
	I0819 18:37:45.602346  409340 command_runner.go:130] > kubelet
	I0819 18:37:45.602363  409340 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 18:37:45.602417  409340 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 18:37:45.612327  409340 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0819 18:37:45.629294  409340 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 18:37:45.645186  409340 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
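The 2160-byte file written here is the kubeadm configuration rendered a few lines above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration). As a rough out-of-band sanity check, the same file can be validated with the kubeadm binary the test already found on the node; a minimal sketch, assuming `kubeadm config validate` is available in the bundled v1.31.0 binary (paths are taken from this log, the check itself is not part of the test):

    # Validate the rendered kubeadm config with the node's own kubeadm binary.
    minikube ssh -p multinode-528433 -- \
      sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new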
	I0819 18:37:45.662722  409340 ssh_runner.go:195] Run: grep 192.168.39.168	control-plane.minikube.internal$ /etc/hosts
	I0819 18:37:45.666612  409340 command_runner.go:130] > 192.168.39.168	control-plane.minikube.internal
	I0819 18:37:45.666791  409340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:37:45.827550  409340 ssh_runner.go:195] Run: sudo systemctl start kubelet
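`systemctl start kubelet` only fails at this point if the unit cannot be started at all; whether the kubelet then stays healthy shows up later in the log. A quick manual check on the node, shown as a sketch rather than something the test performs:

    # Confirm the kubelet unit is active and peek at its most recent log lines.
    minikube ssh -p multinode-528433 -- sudo systemctl is-active kubelet
    minikube ssh -p multinode-528433 -- sudo journalctl -u kubelet --no-pager -n 20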
	I0819 18:37:45.842818  409340 certs.go:68] Setting up /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/multinode-528433 for IP: 192.168.39.168
	I0819 18:37:45.842848  409340 certs.go:194] generating shared ca certs ...
	I0819 18:37:45.842865  409340 certs.go:226] acquiring lock for ca certs: {Name:mk639e03f593e0bccac045f6e9f5ba3b96cc81e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:37:45.843028  409340 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key
	I0819 18:37:45.843071  409340 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key
	I0819 18:37:45.843080  409340 certs.go:256] generating profile certs ...
	I0819 18:37:45.843155  409340 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/multinode-528433/client.key
	I0819 18:37:45.843217  409340 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/multinode-528433/apiserver.key.fe16ede1
	I0819 18:37:45.843274  409340 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/multinode-528433/proxy-client.key
	I0819 18:37:45.843286  409340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 18:37:45.843300  409340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 18:37:45.843312  409340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 18:37:45.843325  409340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 18:37:45.843335  409340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/multinode-528433/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 18:37:45.843366  409340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/multinode-528433/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 18:37:45.843380  409340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/multinode-528433/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 18:37:45.843390  409340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/multinode-528433/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 18:37:45.843438  409340 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem (1338 bytes)
	W0819 18:37:45.843471  409340 certs.go:480] ignoring /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009_empty.pem, impossibly tiny 0 bytes
	I0819 18:37:45.843481  409340 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 18:37:45.843504  409340 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem (1082 bytes)
	I0819 18:37:45.843525  409340 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem (1123 bytes)
	I0819 18:37:45.843547  409340 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem (1675 bytes)
	I0819 18:37:45.843582  409340 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 18:37:45.843610  409340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:37:45.843623  409340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem -> /usr/share/ca-certificates/380009.pem
	I0819 18:37:45.843633  409340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> /usr/share/ca-certificates/3800092.pem
	I0819 18:37:45.844326  409340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 18:37:45.872593  409340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 18:37:45.898888  409340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 18:37:45.926991  409340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 18:37:45.952841  409340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/multinode-528433/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 18:37:45.979410  409340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/multinode-528433/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 18:37:46.006798  409340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/multinode-528433/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 18:37:46.031459  409340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/multinode-528433/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 18:37:46.058439  409340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 18:37:46.084626  409340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem --> /usr/share/ca-certificates/380009.pem (1338 bytes)
	I0819 18:37:46.110595  409340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /usr/share/ca-certificates/3800092.pem (1708 bytes)
	I0819 18:37:46.137678  409340 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
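The scp steps above copy the shared CAs, the profile certificates and the kubeconfig onto the node. A minimal spot-check that a copied certificate matches its local source, using the workspace and node paths from this log (illustration only, not part of the test):

    # Compare SHA-256 fingerprints of the local apiserver cert and the copy on the node.
    openssl x509 -noout -fingerprint -sha256 \
      -in /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/multinode-528433/apiserver.crt
    minikube ssh -p multinode-528433 -- \
      sudo openssl x509 -noout -fingerprint -sha256 -in /var/lib/minikube/certs/apiserver.crt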
	I0819 18:37:46.154758  409340 ssh_runner.go:195] Run: openssl version
	I0819 18:37:46.160491  409340 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0819 18:37:46.160621  409340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/380009.pem && ln -fs /usr/share/ca-certificates/380009.pem /etc/ssl/certs/380009.pem"
	I0819 18:37:46.172031  409340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/380009.pem
	I0819 18:37:46.176509  409340 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 19 17:56 /usr/share/ca-certificates/380009.pem
	I0819 18:37:46.176542  409340 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:56 /usr/share/ca-certificates/380009.pem
	I0819 18:37:46.176581  409340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/380009.pem
	I0819 18:37:46.182440  409340 command_runner.go:130] > 51391683
	I0819 18:37:46.182520  409340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/380009.pem /etc/ssl/certs/51391683.0"
	I0819 18:37:46.192213  409340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3800092.pem && ln -fs /usr/share/ca-certificates/3800092.pem /etc/ssl/certs/3800092.pem"
	I0819 18:37:46.203095  409340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3800092.pem
	I0819 18:37:46.207411  409340 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 19 17:56 /usr/share/ca-certificates/3800092.pem
	I0819 18:37:46.207444  409340 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:56 /usr/share/ca-certificates/3800092.pem
	I0819 18:37:46.207477  409340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3800092.pem
	I0819 18:37:46.213101  409340 command_runner.go:130] > 3ec20f2e
	I0819 18:37:46.213168  409340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3800092.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 18:37:46.223423  409340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 18:37:46.235417  409340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:37:46.240155  409340 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 19 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:37:46.240199  409340 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:37:46.240264  409340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:37:46.245976  409340 command_runner.go:130] > b5213941
	I0819 18:37:46.246065  409340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
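The hex values printed above (51391683, 3ec20f2e, b5213941) are OpenSSL subject-name hashes: `openssl x509 -hash` prints the name OpenSSL uses to look a CA up in /etc/ssl/certs, and the `ln -fs .../<hash>.0` commands create exactly those lookup links. A sketch of checking one of the links by hand (out-of-band; run on the node, e.g. via `minikube ssh -p multinode-528433`; paths taken from this log):

    # Recompute the subject-name hash for the minikube CA, confirm the <hash>.0 symlink,
    # and verify the CA resolves through the standard certificate directory.
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0
    openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem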
	I0819 18:37:46.256008  409340 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 18:37:46.260648  409340 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 18:37:46.260673  409340 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0819 18:37:46.260678  409340 command_runner.go:130] > Device: 253,1	Inode: 4197398     Links: 1
	I0819 18:37:46.260685  409340 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0819 18:37:46.260691  409340 command_runner.go:130] > Access: 2024-08-19 18:30:56.064533706 +0000
	I0819 18:37:46.260696  409340 command_runner.go:130] > Modify: 2024-08-19 18:30:56.065534449 +0000
	I0819 18:37:46.260700  409340 command_runner.go:130] > Change: 2024-08-19 18:30:56.065534449 +0000
	I0819 18:37:46.260704  409340 command_runner.go:130] >  Birth: 2024-08-19 18:30:56.064533706 +0000
	I0819 18:37:46.260793  409340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 18:37:46.266499  409340 command_runner.go:130] > Certificate will not expire
	I0819 18:37:46.266575  409340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 18:37:46.272408  409340 command_runner.go:130] > Certificate will not expire
	I0819 18:37:46.272481  409340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 18:37:46.278497  409340 command_runner.go:130] > Certificate will not expire
	I0819 18:37:46.278568  409340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 18:37:46.284114  409340 command_runner.go:130] > Certificate will not expire
	I0819 18:37:46.284403  409340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 18:37:46.289927  409340 command_runner.go:130] > Certificate will not expire
	I0819 18:37:46.290077  409340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0819 18:37:46.295754  409340 command_runner.go:130] > Certificate will not expire
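Each `-checkend 86400` call asks OpenSSL whether the certificate remains valid for at least another 86400 seconds (24 hours): exit status 0 with "Certificate will not expire" means it does, exit status 1 means it expires within that window. The same check can be repeated by hand for the certificates inspected above (paths from the log; the loop is only an illustration):

    # Re-run the 24h expiry check for the certificates the test just inspected (run on the node).
    for crt in apiserver-kubelet-client.crt apiserver-etcd-client.crt \
               etcd/server.crt etcd/healthcheck-client.crt etcd/peer.crt \
               front-proxy-client.crt; do
      sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/$crt" \
        && echo "$crt: valid for >=24h" || echo "$crt: expires within 24h"
    done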
	I0819 18:37:46.295820  409340 kubeadm.go:392] StartCluster: {Name:multinode-528433 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:multinode-528433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.107 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.113 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:37:46.295935  409340 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 18:37:46.296004  409340 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 18:37:46.335528  409340 command_runner.go:130] > ed1b7f887e7493a29296d3302ca41b9feb5ce631efe79f5c7346da1ce5f3f5aa
	I0819 18:37:46.335555  409340 command_runner.go:130] > 9e235ceb0a44969b8100fa98407fe1fbe8a39f89a722ec5fba50a7894d1c315b
	I0819 18:37:46.335561  409340 command_runner.go:130] > 057d837bfdf9b2c340d252182c5f52f95286a592b06ccc5f204badee2872440e
	I0819 18:37:46.335580  409340 command_runner.go:130] > a5d6d7978005d7389f31edd035fd6fa05cd87e4a3901ca69aa1b2f9f73576240
	I0819 18:37:46.335592  409340 command_runner.go:130] > e18ee04a7496876e00d3bf4eea0c2cc1bea22033e6265d9eb65c8556c18dbecc
	I0819 18:37:46.335598  409340 command_runner.go:130] > 7c29a242039f25f811c96462be446c25a6154d911bbeffded18a5c5d7b8f8ea4
	I0819 18:37:46.335604  409340 command_runner.go:130] > 8f8613599f748b0dac210582c28b8864da1a0b7e328a1299edffaeebf943a44a
	I0819 18:37:46.335611  409340 command_runner.go:130] > c65c30f3ad8d84072ebc470eb8e6aa5f850402138c0c5b057a93852df19a0f24
	I0819 18:37:46.335633  409340 cri.go:89] found id: "ed1b7f887e7493a29296d3302ca41b9feb5ce631efe79f5c7346da1ce5f3f5aa"
	I0819 18:37:46.335642  409340 cri.go:89] found id: "9e235ceb0a44969b8100fa98407fe1fbe8a39f89a722ec5fba50a7894d1c315b"
	I0819 18:37:46.335645  409340 cri.go:89] found id: "057d837bfdf9b2c340d252182c5f52f95286a592b06ccc5f204badee2872440e"
	I0819 18:37:46.335649  409340 cri.go:89] found id: "a5d6d7978005d7389f31edd035fd6fa05cd87e4a3901ca69aa1b2f9f73576240"
	I0819 18:37:46.335653  409340 cri.go:89] found id: "e18ee04a7496876e00d3bf4eea0c2cc1bea22033e6265d9eb65c8556c18dbecc"
	I0819 18:37:46.335657  409340 cri.go:89] found id: "7c29a242039f25f811c96462be446c25a6154d911bbeffded18a5c5d7b8f8ea4"
	I0819 18:37:46.335660  409340 cri.go:89] found id: "8f8613599f748b0dac210582c28b8864da1a0b7e328a1299edffaeebf943a44a"
	I0819 18:37:46.335662  409340 cri.go:89] found id: "c65c30f3ad8d84072ebc470eb8e6aa5f850402138c0c5b057a93852df19a0f24"
	I0819 18:37:46.335665  409340 cri.go:89] found id: ""
	I0819 18:37:46.335726  409340 ssh_runner.go:195] Run: sudo runc list -f json
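StartCluster begins by listing the existing kube-system containers through crictl and then queries runc directly. Both listings can be reproduced on the node; the commands are the ones from this log, with jq added purely as a formatting convenience (assumes jq is installed):

    # List kube-system container IDs via CRI-O, then show the runc view of all containers.
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    sudo runc list -f json | jq -r '.[].id'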
	
	
	==> CRI-O <==
	Aug 19 18:39:33 multinode-528433 crio[2738]: time="2024-08-19 18:39:33.261172971Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092773261141438,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8bd60526-558f-4fb2-89d4-e0c48237895c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:39:33 multinode-528433 crio[2738]: time="2024-08-19 18:39:33.262050350Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6e1523cd-7183-45ee-8d0d-fd8fb70ae1de name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:39:33 multinode-528433 crio[2738]: time="2024-08-19 18:39:33.262129196Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6e1523cd-7183-45ee-8d0d-fd8fb70ae1de name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:39:33 multinode-528433 crio[2738]: time="2024-08-19 18:39:33.262676334Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:72a17195d6bf67942986a46e56fba67e75056f3f131edb583ec1fee36c6ae2d9,PodSandboxId:a66751ad9ff4b7dd8a62b46a4a4583c86d2fc242a5faa7a882286627ee3aa531,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724092707039774210,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7rfnn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e711971-6865-4191-b5fa-b045b4653330,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f0cc386f169fbef2abbfeb9de505aff3998aa7d54b7f3eee2b29d3c03dec1da,PodSandboxId:1ae2b060ade861dc63c5234d1883d3f0cbef337e6aee2f6d619ac644361ab3ca,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724092673558617442,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n2rkp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6bb98ad-bda2-447c-a80a-b344e03d1c91,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10caf3a7930bac1eeced86d6165d138f108504c200840b0130e4e5bc5ef69b80,PodSandboxId:a9614e5e446e321f3d7e05c2bed412acb3d46511060d4c40a1f27d7984f1c095,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724092673464668288,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fz4lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 492965e3-fe40-49d9-8d90-3d25bdc67d6a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dcb798727b65fc265db910cab3cfa2e0ac5496715c0adb579c5a659e0c767b8,PodSandboxId:8112158e3370e32ff89041b8d1ce455d489bfea82d7c2be21a684c5fbecbd714,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724092673447341747,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 066868c5-cc0d-43bb-bdaf-f8ef664a5829,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a21bb460a42c175e6fcfe880334b3416a31c862ad41f4046dde00c3a50bf99ac,PodSandboxId:fe97abf92a961c599d0942d24914be81d7f8fda0743f294108940b802968dcd0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724092673376821797,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p26jv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28ac0348-1a53-4a4c-b0f5-0771f9ab8179,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:580fb4c199750cc4e95fcf711e440dc76ff14f3b53d8a6997f621ca5b7bb4518,PodSandboxId:ec850d38417e844c38d6c2cf40506877ec7dfbd96dbb3406587fee1007e86201,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724092668515792156,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fee372227a6243e6c504b433e9dc3d8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:422d4bc5ba6865ba6386db8aac55e0668aac92c409da53ac44c1d7750424fec7,PodSandboxId:058fc9c59d6119baf38422cc75b4f90ab7826ad54f3af1afc0675a3af83ff043,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724092668494818663,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43ec279f896b4ee770677d0bae22c4b1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a81508e447df1c7bfda53e67d1b4030870a749ba659d35497fe4adecbcf41a9e,PodSandboxId:07a33f7aa1fc1db1a29caf20a03ba3e05f8eac70e4c6061af227896435a5b583,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724092668438039716,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96878698fee7f503b18654c4aea536a8,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0f607c724637f67917051010258f5d5d9d65d9a1966825b84ccb41087c55584,PodSandboxId:d278625234eca5cd6eb49cbed77ae24c11a6b2dc250d04df0d1e742f9248f6c5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724092668381932769,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4955a086665c86d028d1d703c01db303,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f224d23c4d8f44f6775dc540bb3177686565fe9dd4224d12bc016af418710837,PodSandboxId:5a66f35c811553073f2f3811564ee8a98cc8a9d42eac401ac3f3e5c2dec93f90,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724092339879158107,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7rfnn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e711971-6865-4191-b5fa-b045b4653330,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed1b7f887e7493a29296d3302ca41b9feb5ce631efe79f5c7346da1ce5f3f5aa,PodSandboxId:ceb057fc9c46954d8f4a27e13f091e6ba329f2eb2c19345e5311fc805a372cc8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724092286333371591,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fz4lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 492965e3-fe40-49d9-8d90-3d25bdc67d6a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e235ceb0a44969b8100fa98407fe1fbe8a39f89a722ec5fba50a7894d1c315b,PodSandboxId:8de468fe51d3c4f2cff19a27ebefd5ee016ffd1fb280b0cfa04fb5d8edf263f1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724092286273438089,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 066868c5-cc0d-43bb-bdaf-f8ef664a5829,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:057d837bfdf9b2c340d252182c5f52f95286a592b06ccc5f204badee2872440e,PodSandboxId:d5e3379f56814476907e612492d03f333c231c34477a32ac79018389c6afdcd7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724092274155853097,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n2rkp,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: c6bb98ad-bda2-447c-a80a-b344e03d1c91,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5d6d7978005d7389f31edd035fd6fa05cd87e4a3901ca69aa1b2f9f73576240,PodSandboxId:877c89858b3f669bbca9ae01e4e630458f04d5ff4a9d4bced862d0d1c1b0ba59,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724092271823692641,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p26jv,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 28ac0348-1a53-4a4c-b0f5-0771f9ab8179,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e18ee04a7496876e00d3bf4eea0c2cc1bea22033e6265d9eb65c8556c18dbecc,PodSandboxId:cb470e280bb52b1ef495582750d7a53f8e226406c8abe2072527d1b05a734c36,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724092259746908878,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-528433,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 4955a086665c86d028d1d703c01db303,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c29a242039f25f811c96462be446c25a6154d911bbeffded18a5c5d7b8f8ea4,PodSandboxId:d4079a671c6959278f96d3504ca0c4360fac69130c3ec5050df94d901fc2dd87,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724092259742342579,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 43ec279f896b4ee770677d0bae22c4b1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f8613599f748b0dac210582c28b8864da1a0b7e328a1299edffaeebf943a44a,PodSandboxId:25a8248374561b640ce1e6ecf7e2b9af1a1b78e773fa2083d765ad0735d9757b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724092259716943720,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fee372227a6243e6c504b433e9dc3d8,},
Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c65c30f3ad8d84072ebc470eb8e6aa5f850402138c0c5b057a93852df19a0f24,PodSandboxId:203cae8e180f3260e3a8def82d9ad87ff2903a20f9dd92fd570436a9a7cb9291,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724092259655024951,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96878698fee7f503b18654c4aea536a8,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6e1523cd-7183-45ee-8d0d-fd8fb70ae1de name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:39:33 multinode-528433 crio[2738]: time="2024-08-19 18:39:33.307717836Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=12031bd6-104f-4bd6-a9fd-b28ee8b5afef name=/runtime.v1.RuntimeService/Version
	Aug 19 18:39:33 multinode-528433 crio[2738]: time="2024-08-19 18:39:33.307804244Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=12031bd6-104f-4bd6-a9fd-b28ee8b5afef name=/runtime.v1.RuntimeService/Version
	Aug 19 18:39:33 multinode-528433 crio[2738]: time="2024-08-19 18:39:33.309173244Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a25a3f20-b909-4f7f-a992-ed611b1e7bdc name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:39:33 multinode-528433 crio[2738]: time="2024-08-19 18:39:33.309828258Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092773309803026,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a25a3f20-b909-4f7f-a992-ed611b1e7bdc name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:39:33 multinode-528433 crio[2738]: time="2024-08-19 18:39:33.310786275Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bda754e4-0146-4380-9340-0a5457ca8593 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:39:33 multinode-528433 crio[2738]: time="2024-08-19 18:39:33.310901280Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bda754e4-0146-4380-9340-0a5457ca8593 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:39:33 multinode-528433 crio[2738]: time="2024-08-19 18:39:33.311325512Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:72a17195d6bf67942986a46e56fba67e75056f3f131edb583ec1fee36c6ae2d9,PodSandboxId:a66751ad9ff4b7dd8a62b46a4a4583c86d2fc242a5faa7a882286627ee3aa531,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724092707039774210,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7rfnn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e711971-6865-4191-b5fa-b045b4653330,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f0cc386f169fbef2abbfeb9de505aff3998aa7d54b7f3eee2b29d3c03dec1da,PodSandboxId:1ae2b060ade861dc63c5234d1883d3f0cbef337e6aee2f6d619ac644361ab3ca,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724092673558617442,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n2rkp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6bb98ad-bda2-447c-a80a-b344e03d1c91,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10caf3a7930bac1eeced86d6165d138f108504c200840b0130e4e5bc5ef69b80,PodSandboxId:a9614e5e446e321f3d7e05c2bed412acb3d46511060d4c40a1f27d7984f1c095,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724092673464668288,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fz4lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 492965e3-fe40-49d9-8d90-3d25bdc67d6a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dcb798727b65fc265db910cab3cfa2e0ac5496715c0adb579c5a659e0c767b8,PodSandboxId:8112158e3370e32ff89041b8d1ce455d489bfea82d7c2be21a684c5fbecbd714,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724092673447341747,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 066868c5-cc0d-43bb-bdaf-f8ef664a5829,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a21bb460a42c175e6fcfe880334b3416a31c862ad41f4046dde00c3a50bf99ac,PodSandboxId:fe97abf92a961c599d0942d24914be81d7f8fda0743f294108940b802968dcd0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724092673376821797,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p26jv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28ac0348-1a53-4a4c-b0f5-0771f9ab8179,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:580fb4c199750cc4e95fcf711e440dc76ff14f3b53d8a6997f621ca5b7bb4518,PodSandboxId:ec850d38417e844c38d6c2cf40506877ec7dfbd96dbb3406587fee1007e86201,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724092668515792156,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fee372227a6243e6c504b433e9dc3d8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:422d4bc5ba6865ba6386db8aac55e0668aac92c409da53ac44c1d7750424fec7,PodSandboxId:058fc9c59d6119baf38422cc75b4f90ab7826ad54f3af1afc0675a3af83ff043,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724092668494818663,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43ec279f896b4ee770677d0bae22c4b1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a81508e447df1c7bfda53e67d1b4030870a749ba659d35497fe4adecbcf41a9e,PodSandboxId:07a33f7aa1fc1db1a29caf20a03ba3e05f8eac70e4c6061af227896435a5b583,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724092668438039716,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96878698fee7f503b18654c4aea536a8,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0f607c724637f67917051010258f5d5d9d65d9a1966825b84ccb41087c55584,PodSandboxId:d278625234eca5cd6eb49cbed77ae24c11a6b2dc250d04df0d1e742f9248f6c5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724092668381932769,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4955a086665c86d028d1d703c01db303,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f224d23c4d8f44f6775dc540bb3177686565fe9dd4224d12bc016af418710837,PodSandboxId:5a66f35c811553073f2f3811564ee8a98cc8a9d42eac401ac3f3e5c2dec93f90,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724092339879158107,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7rfnn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e711971-6865-4191-b5fa-b045b4653330,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed1b7f887e7493a29296d3302ca41b9feb5ce631efe79f5c7346da1ce5f3f5aa,PodSandboxId:ceb057fc9c46954d8f4a27e13f091e6ba329f2eb2c19345e5311fc805a372cc8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724092286333371591,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fz4lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 492965e3-fe40-49d9-8d90-3d25bdc67d6a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e235ceb0a44969b8100fa98407fe1fbe8a39f89a722ec5fba50a7894d1c315b,PodSandboxId:8de468fe51d3c4f2cff19a27ebefd5ee016ffd1fb280b0cfa04fb5d8edf263f1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724092286273438089,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 066868c5-cc0d-43bb-bdaf-f8ef664a5829,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:057d837bfdf9b2c340d252182c5f52f95286a592b06ccc5f204badee2872440e,PodSandboxId:d5e3379f56814476907e612492d03f333c231c34477a32ac79018389c6afdcd7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724092274155853097,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n2rkp,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: c6bb98ad-bda2-447c-a80a-b344e03d1c91,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5d6d7978005d7389f31edd035fd6fa05cd87e4a3901ca69aa1b2f9f73576240,PodSandboxId:877c89858b3f669bbca9ae01e4e630458f04d5ff4a9d4bced862d0d1c1b0ba59,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724092271823692641,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p26jv,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 28ac0348-1a53-4a4c-b0f5-0771f9ab8179,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e18ee04a7496876e00d3bf4eea0c2cc1bea22033e6265d9eb65c8556c18dbecc,PodSandboxId:cb470e280bb52b1ef495582750d7a53f8e226406c8abe2072527d1b05a734c36,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724092259746908878,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-528433,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 4955a086665c86d028d1d703c01db303,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c29a242039f25f811c96462be446c25a6154d911bbeffded18a5c5d7b8f8ea4,PodSandboxId:d4079a671c6959278f96d3504ca0c4360fac69130c3ec5050df94d901fc2dd87,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724092259742342579,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 43ec279f896b4ee770677d0bae22c4b1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f8613599f748b0dac210582c28b8864da1a0b7e328a1299edffaeebf943a44a,PodSandboxId:25a8248374561b640ce1e6ecf7e2b9af1a1b78e773fa2083d765ad0735d9757b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724092259716943720,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fee372227a6243e6c504b433e9dc3d8,},
Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c65c30f3ad8d84072ebc470eb8e6aa5f850402138c0c5b057a93852df19a0f24,PodSandboxId:203cae8e180f3260e3a8def82d9ad87ff2903a20f9dd92fd570436a9a7cb9291,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724092259655024951,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96878698fee7f503b18654c4aea536a8,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bda754e4-0146-4380-9340-0a5457ca8593 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:39:33 multinode-528433 crio[2738]: time="2024-08-19 18:39:33.355112316Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7147c379-7e81-4f42-9021-ddba67e95d59 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:39:33 multinode-528433 crio[2738]: time="2024-08-19 18:39:33.355184654Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7147c379-7e81-4f42-9021-ddba67e95d59 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:39:33 multinode-528433 crio[2738]: time="2024-08-19 18:39:33.356546125Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=35b0c0ac-5151-47ca-a8e9-4d6e4aeb85c0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:39:33 multinode-528433 crio[2738]: time="2024-08-19 18:39:33.356999277Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092773356975548,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=35b0c0ac-5151-47ca-a8e9-4d6e4aeb85c0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:39:33 multinode-528433 crio[2738]: time="2024-08-19 18:39:33.357709377Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=965fee31-b96a-42f0-8e91-f5e8b73dcc02 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:39:33 multinode-528433 crio[2738]: time="2024-08-19 18:39:33.357762705Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=965fee31-b96a-42f0-8e91-f5e8b73dcc02 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:39:33 multinode-528433 crio[2738]: time="2024-08-19 18:39:33.358103184Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:72a17195d6bf67942986a46e56fba67e75056f3f131edb583ec1fee36c6ae2d9,PodSandboxId:a66751ad9ff4b7dd8a62b46a4a4583c86d2fc242a5faa7a882286627ee3aa531,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724092707039774210,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7rfnn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e711971-6865-4191-b5fa-b045b4653330,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f0cc386f169fbef2abbfeb9de505aff3998aa7d54b7f3eee2b29d3c03dec1da,PodSandboxId:1ae2b060ade861dc63c5234d1883d3f0cbef337e6aee2f6d619ac644361ab3ca,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724092673558617442,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n2rkp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6bb98ad-bda2-447c-a80a-b344e03d1c91,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10caf3a7930bac1eeced86d6165d138f108504c200840b0130e4e5bc5ef69b80,PodSandboxId:a9614e5e446e321f3d7e05c2bed412acb3d46511060d4c40a1f27d7984f1c095,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724092673464668288,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fz4lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 492965e3-fe40-49d9-8d90-3d25bdc67d6a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dcb798727b65fc265db910cab3cfa2e0ac5496715c0adb579c5a659e0c767b8,PodSandboxId:8112158e3370e32ff89041b8d1ce455d489bfea82d7c2be21a684c5fbecbd714,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724092673447341747,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 066868c5-cc0d-43bb-bdaf-f8ef664a5829,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a21bb460a42c175e6fcfe880334b3416a31c862ad41f4046dde00c3a50bf99ac,PodSandboxId:fe97abf92a961c599d0942d24914be81d7f8fda0743f294108940b802968dcd0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724092673376821797,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p26jv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28ac0348-1a53-4a4c-b0f5-0771f9ab8179,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:580fb4c199750cc4e95fcf711e440dc76ff14f3b53d8a6997f621ca5b7bb4518,PodSandboxId:ec850d38417e844c38d6c2cf40506877ec7dfbd96dbb3406587fee1007e86201,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724092668515792156,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fee372227a6243e6c504b433e9dc3d8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:422d4bc5ba6865ba6386db8aac55e0668aac92c409da53ac44c1d7750424fec7,PodSandboxId:058fc9c59d6119baf38422cc75b4f90ab7826ad54f3af1afc0675a3af83ff043,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724092668494818663,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43ec279f896b4ee770677d0bae22c4b1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a81508e447df1c7bfda53e67d1b4030870a749ba659d35497fe4adecbcf41a9e,PodSandboxId:07a33f7aa1fc1db1a29caf20a03ba3e05f8eac70e4c6061af227896435a5b583,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724092668438039716,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96878698fee7f503b18654c4aea536a8,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0f607c724637f67917051010258f5d5d9d65d9a1966825b84ccb41087c55584,PodSandboxId:d278625234eca5cd6eb49cbed77ae24c11a6b2dc250d04df0d1e742f9248f6c5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724092668381932769,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4955a086665c86d028d1d703c01db303,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f224d23c4d8f44f6775dc540bb3177686565fe9dd4224d12bc016af418710837,PodSandboxId:5a66f35c811553073f2f3811564ee8a98cc8a9d42eac401ac3f3e5c2dec93f90,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724092339879158107,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7rfnn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e711971-6865-4191-b5fa-b045b4653330,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed1b7f887e7493a29296d3302ca41b9feb5ce631efe79f5c7346da1ce5f3f5aa,PodSandboxId:ceb057fc9c46954d8f4a27e13f091e6ba329f2eb2c19345e5311fc805a372cc8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724092286333371591,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fz4lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 492965e3-fe40-49d9-8d90-3d25bdc67d6a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e235ceb0a44969b8100fa98407fe1fbe8a39f89a722ec5fba50a7894d1c315b,PodSandboxId:8de468fe51d3c4f2cff19a27ebefd5ee016ffd1fb280b0cfa04fb5d8edf263f1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724092286273438089,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 066868c5-cc0d-43bb-bdaf-f8ef664a5829,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:057d837bfdf9b2c340d252182c5f52f95286a592b06ccc5f204badee2872440e,PodSandboxId:d5e3379f56814476907e612492d03f333c231c34477a32ac79018389c6afdcd7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724092274155853097,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n2rkp,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: c6bb98ad-bda2-447c-a80a-b344e03d1c91,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5d6d7978005d7389f31edd035fd6fa05cd87e4a3901ca69aa1b2f9f73576240,PodSandboxId:877c89858b3f669bbca9ae01e4e630458f04d5ff4a9d4bced862d0d1c1b0ba59,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724092271823692641,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p26jv,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 28ac0348-1a53-4a4c-b0f5-0771f9ab8179,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e18ee04a7496876e00d3bf4eea0c2cc1bea22033e6265d9eb65c8556c18dbecc,PodSandboxId:cb470e280bb52b1ef495582750d7a53f8e226406c8abe2072527d1b05a734c36,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724092259746908878,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-528433,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 4955a086665c86d028d1d703c01db303,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c29a242039f25f811c96462be446c25a6154d911bbeffded18a5c5d7b8f8ea4,PodSandboxId:d4079a671c6959278f96d3504ca0c4360fac69130c3ec5050df94d901fc2dd87,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724092259742342579,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 43ec279f896b4ee770677d0bae22c4b1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f8613599f748b0dac210582c28b8864da1a0b7e328a1299edffaeebf943a44a,PodSandboxId:25a8248374561b640ce1e6ecf7e2b9af1a1b78e773fa2083d765ad0735d9757b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724092259716943720,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fee372227a6243e6c504b433e9dc3d8,},
Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c65c30f3ad8d84072ebc470eb8e6aa5f850402138c0c5b057a93852df19a0f24,PodSandboxId:203cae8e180f3260e3a8def82d9ad87ff2903a20f9dd92fd570436a9a7cb9291,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724092259655024951,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96878698fee7f503b18654c4aea536a8,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=965fee31-b96a-42f0-8e91-f5e8b73dcc02 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:39:33 multinode-528433 crio[2738]: time="2024-08-19 18:39:33.400407536Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=558e96fc-0a4e-4840-aae0-9bb7bd093f0c name=/runtime.v1.RuntimeService/Version
	Aug 19 18:39:33 multinode-528433 crio[2738]: time="2024-08-19 18:39:33.400481182Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=558e96fc-0a4e-4840-aae0-9bb7bd093f0c name=/runtime.v1.RuntimeService/Version
	Aug 19 18:39:33 multinode-528433 crio[2738]: time="2024-08-19 18:39:33.403024121Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5f3bde24-0375-4575-b044-1d34dafa8e31 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:39:33 multinode-528433 crio[2738]: time="2024-08-19 18:39:33.403647376Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092773403616674,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5f3bde24-0375-4575-b044-1d34dafa8e31 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:39:33 multinode-528433 crio[2738]: time="2024-08-19 18:39:33.404335921Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=72918f85-8c65-4cb7-9fa0-623ce30ecb2d name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:39:33 multinode-528433 crio[2738]: time="2024-08-19 18:39:33.404392779Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=72918f85-8c65-4cb7-9fa0-623ce30ecb2d name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:39:33 multinode-528433 crio[2738]: time="2024-08-19 18:39:33.405127517Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:72a17195d6bf67942986a46e56fba67e75056f3f131edb583ec1fee36c6ae2d9,PodSandboxId:a66751ad9ff4b7dd8a62b46a4a4583c86d2fc242a5faa7a882286627ee3aa531,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724092707039774210,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7rfnn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e711971-6865-4191-b5fa-b045b4653330,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f0cc386f169fbef2abbfeb9de505aff3998aa7d54b7f3eee2b29d3c03dec1da,PodSandboxId:1ae2b060ade861dc63c5234d1883d3f0cbef337e6aee2f6d619ac644361ab3ca,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724092673558617442,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n2rkp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6bb98ad-bda2-447c-a80a-b344e03d1c91,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10caf3a7930bac1eeced86d6165d138f108504c200840b0130e4e5bc5ef69b80,PodSandboxId:a9614e5e446e321f3d7e05c2bed412acb3d46511060d4c40a1f27d7984f1c095,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724092673464668288,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fz4lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 492965e3-fe40-49d9-8d90-3d25bdc67d6a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dcb798727b65fc265db910cab3cfa2e0ac5496715c0adb579c5a659e0c767b8,PodSandboxId:8112158e3370e32ff89041b8d1ce455d489bfea82d7c2be21a684c5fbecbd714,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724092673447341747,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 066868c5-cc0d-43bb-bdaf-f8ef664a5829,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a21bb460a42c175e6fcfe880334b3416a31c862ad41f4046dde00c3a50bf99ac,PodSandboxId:fe97abf92a961c599d0942d24914be81d7f8fda0743f294108940b802968dcd0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724092673376821797,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p26jv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28ac0348-1a53-4a4c-b0f5-0771f9ab8179,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:580fb4c199750cc4e95fcf711e440dc76ff14f3b53d8a6997f621ca5b7bb4518,PodSandboxId:ec850d38417e844c38d6c2cf40506877ec7dfbd96dbb3406587fee1007e86201,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724092668515792156,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fee372227a6243e6c504b433e9dc3d8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:422d4bc5ba6865ba6386db8aac55e0668aac92c409da53ac44c1d7750424fec7,PodSandboxId:058fc9c59d6119baf38422cc75b4f90ab7826ad54f3af1afc0675a3af83ff043,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724092668494818663,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43ec279f896b4ee770677d0bae22c4b1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a81508e447df1c7bfda53e67d1b4030870a749ba659d35497fe4adecbcf41a9e,PodSandboxId:07a33f7aa1fc1db1a29caf20a03ba3e05f8eac70e4c6061af227896435a5b583,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724092668438039716,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96878698fee7f503b18654c4aea536a8,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0f607c724637f67917051010258f5d5d9d65d9a1966825b84ccb41087c55584,PodSandboxId:d278625234eca5cd6eb49cbed77ae24c11a6b2dc250d04df0d1e742f9248f6c5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724092668381932769,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4955a086665c86d028d1d703c01db303,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f224d23c4d8f44f6775dc540bb3177686565fe9dd4224d12bc016af418710837,PodSandboxId:5a66f35c811553073f2f3811564ee8a98cc8a9d42eac401ac3f3e5c2dec93f90,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724092339879158107,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7rfnn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e711971-6865-4191-b5fa-b045b4653330,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed1b7f887e7493a29296d3302ca41b9feb5ce631efe79f5c7346da1ce5f3f5aa,PodSandboxId:ceb057fc9c46954d8f4a27e13f091e6ba329f2eb2c19345e5311fc805a372cc8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724092286333371591,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fz4lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 492965e3-fe40-49d9-8d90-3d25bdc67d6a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e235ceb0a44969b8100fa98407fe1fbe8a39f89a722ec5fba50a7894d1c315b,PodSandboxId:8de468fe51d3c4f2cff19a27ebefd5ee016ffd1fb280b0cfa04fb5d8edf263f1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724092286273438089,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 066868c5-cc0d-43bb-bdaf-f8ef664a5829,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:057d837bfdf9b2c340d252182c5f52f95286a592b06ccc5f204badee2872440e,PodSandboxId:d5e3379f56814476907e612492d03f333c231c34477a32ac79018389c6afdcd7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724092274155853097,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n2rkp,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: c6bb98ad-bda2-447c-a80a-b344e03d1c91,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5d6d7978005d7389f31edd035fd6fa05cd87e4a3901ca69aa1b2f9f73576240,PodSandboxId:877c89858b3f669bbca9ae01e4e630458f04d5ff4a9d4bced862d0d1c1b0ba59,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724092271823692641,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p26jv,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 28ac0348-1a53-4a4c-b0f5-0771f9ab8179,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e18ee04a7496876e00d3bf4eea0c2cc1bea22033e6265d9eb65c8556c18dbecc,PodSandboxId:cb470e280bb52b1ef495582750d7a53f8e226406c8abe2072527d1b05a734c36,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724092259746908878,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-528433,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 4955a086665c86d028d1d703c01db303,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c29a242039f25f811c96462be446c25a6154d911bbeffded18a5c5d7b8f8ea4,PodSandboxId:d4079a671c6959278f96d3504ca0c4360fac69130c3ec5050df94d901fc2dd87,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724092259742342579,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 43ec279f896b4ee770677d0bae22c4b1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f8613599f748b0dac210582c28b8864da1a0b7e328a1299edffaeebf943a44a,PodSandboxId:25a8248374561b640ce1e6ecf7e2b9af1a1b78e773fa2083d765ad0735d9757b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724092259716943720,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fee372227a6243e6c504b433e9dc3d8,},
Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c65c30f3ad8d84072ebc470eb8e6aa5f850402138c0c5b057a93852df19a0f24,PodSandboxId:203cae8e180f3260e3a8def82d9ad87ff2903a20f9dd92fd570436a9a7cb9291,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724092259655024951,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96878698fee7f503b18654c4aea536a8,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=72918f85-8c65-4cb7-9fa0-623ce30ecb2d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	72a17195d6bf6       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   a66751ad9ff4b       busybox-7dff88458-7rfnn
	8f0cc386f169f       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      About a minute ago   Running             kindnet-cni               1                   1ae2b060ade86       kindnet-n2rkp
	10caf3a7930ba       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   a9614e5e446e3       coredns-6f6b679f8f-fz4lc
	5dcb798727b65       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   8112158e3370e       storage-provisioner
	a21bb460a42c1       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      About a minute ago   Running             kube-proxy                1                   fe97abf92a961       kube-proxy-p26jv
	580fb4c199750       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      About a minute ago   Running             etcd                      1                   ec850d38417e8       etcd-multinode-528433
	422d4bc5ba686       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      About a minute ago   Running             kube-scheduler            1                   058fc9c59d611       kube-scheduler-multinode-528433
	a81508e447df1       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      About a minute ago   Running             kube-apiserver            1                   07a33f7aa1fc1       kube-apiserver-multinode-528433
	a0f607c724637       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      About a minute ago   Running             kube-controller-manager   1                   d278625234eca       kube-controller-manager-multinode-528433
	f224d23c4d8f4       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   5a66f35c81155       busybox-7dff88458-7rfnn
	ed1b7f887e749       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago        Exited              coredns                   0                   ceb057fc9c469       coredns-6f6b679f8f-fz4lc
	9e235ceb0a449       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   8de468fe51d3c       storage-provisioner
	057d837bfdf9b       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    8 minutes ago        Exited              kindnet-cni               0                   d5e3379f56814       kindnet-n2rkp
	a5d6d7978005d       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      8 minutes ago        Exited              kube-proxy                0                   877c89858b3f6       kube-proxy-p26jv
	e18ee04a74968       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      8 minutes ago        Exited              kube-controller-manager   0                   cb470e280bb52       kube-controller-manager-multinode-528433
	7c29a242039f2       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      8 minutes ago        Exited              kube-scheduler            0                   d4079a671c695       kube-scheduler-multinode-528433
	8f8613599f748       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      8 minutes ago        Exited              etcd                      0                   25a8248374561       etcd-multinode-528433
	c65c30f3ad8d8       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      8 minutes ago        Exited              kube-apiserver            0                   203cae8e180f3       kube-apiserver-multinode-528433
	
	
	==> coredns [10caf3a7930bac1eeced86d6165d138f108504c200840b0130e4e5bc5ef69b80] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:32985 - 55068 "HINFO IN 2276461329978003692.3688836495844611696. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019112479s
	
	
	==> coredns [ed1b7f887e7493a29296d3302ca41b9feb5ce631efe79f5c7346da1ce5f3f5aa] <==
	[INFO] 10.244.1.2:45136 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001778026s
	[INFO] 10.244.1.2:39595 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000093282s
	[INFO] 10.244.1.2:52118 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000067848s
	[INFO] 10.244.1.2:35693 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001214491s
	[INFO] 10.244.1.2:53820 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000063204s
	[INFO] 10.244.1.2:36563 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060887s
	[INFO] 10.244.1.2:41229 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064173s
	[INFO] 10.244.0.3:37769 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000083799s
	[INFO] 10.244.0.3:54377 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000059108s
	[INFO] 10.244.0.3:34587 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076758s
	[INFO] 10.244.0.3:47718 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000042003s
	[INFO] 10.244.1.2:51694 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000160518s
	[INFO] 10.244.1.2:54523 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117392s
	[INFO] 10.244.1.2:45410 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076187s
	[INFO] 10.244.1.2:36210 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090522s
	[INFO] 10.244.0.3:44693 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122508s
	[INFO] 10.244.0.3:53188 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000110882s
	[INFO] 10.244.0.3:35460 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00009435s
	[INFO] 10.244.0.3:48546 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00006206s
	[INFO] 10.244.1.2:37874 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157291s
	[INFO] 10.244.1.2:50300 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000102288s
	[INFO] 10.244.1.2:50241 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000095737s
	[INFO] 10.244.1.2:47032 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000069064s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-528433
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-528433
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411
	                    minikube.k8s.io/name=multinode-528433
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T18_31_05_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 18:31:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-528433
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 18:39:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 18:37:52 +0000   Mon, 19 Aug 2024 18:31:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 18:37:52 +0000   Mon, 19 Aug 2024 18:31:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 18:37:52 +0000   Mon, 19 Aug 2024 18:31:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 18:37:52 +0000   Mon, 19 Aug 2024 18:31:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.168
	  Hostname:    multinode-528433
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a5065be9838d4acd9d9f081f00a42b7b
	  System UUID:                a5065be9-838d-4acd-9d9f-081f00a42b7b
	  Boot ID:                    cd729d8e-64bb-410c-9f54-c5249111761b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7rfnn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m17s
	  kube-system                 coredns-6f6b679f8f-fz4lc                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m24s
	  kube-system                 etcd-multinode-528433                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m28s
	  kube-system                 kindnet-n2rkp                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m24s
	  kube-system                 kube-apiserver-multinode-528433             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m29s
	  kube-system                 kube-controller-manager-multinode-528433    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m29s
	  kube-system                 kube-proxy-p26jv                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m24s
	  kube-system                 kube-scheduler-multinode-528433             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m28s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 8m21s                kube-proxy       
	  Normal  Starting                 99s                  kube-proxy       
	  Normal  Starting                 8m34s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m34s                kubelet          Node multinode-528433 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m34s                kubelet          Node multinode-528433 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m34s                kubelet          Node multinode-528433 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m34s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 8m29s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m29s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    8m28s                kubelet          Node multinode-528433 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m28s                kubelet          Node multinode-528433 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  8m28s                kubelet          Node multinode-528433 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           8m24s                node-controller  Node multinode-528433 event: Registered Node multinode-528433 in Controller
	  Normal  NodeReady                8m8s                 kubelet          Node multinode-528433 status is now: NodeReady
	  Normal  Starting                 106s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  106s (x8 over 106s)  kubelet          Node multinode-528433 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    106s (x8 over 106s)  kubelet          Node multinode-528433 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     106s (x7 over 106s)  kubelet          Node multinode-528433 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  106s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           98s                  node-controller  Node multinode-528433 event: Registered Node multinode-528433 in Controller
	
	
	Name:               multinode-528433-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-528433-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411
	                    minikube.k8s.io/name=multinode-528433
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T18_38_30_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 18:38:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-528433-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 18:39:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 18:39:01 +0000   Mon, 19 Aug 2024 18:38:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 18:39:01 +0000   Mon, 19 Aug 2024 18:38:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 18:39:01 +0000   Mon, 19 Aug 2024 18:38:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 18:39:01 +0000   Mon, 19 Aug 2024 18:38:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.107
	  Hostname:    multinode-528433-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b87bdc29e80543e191b01af8a0a8ce51
	  System UUID:                b87bdc29-e805-43e1-91b0-1af8a0a8ce51
	  Boot ID:                    8993ba52-c500-46b0-af42-831d10624bba
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-vmbvk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 kindnet-l9wzp              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m39s
	  kube-system                 kube-proxy-7wbgt           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 58s                    kube-proxy       
	  Normal  Starting                 7m34s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  7m39s (x2 over 7m39s)  kubelet          Node multinode-528433-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m39s (x2 over 7m39s)  kubelet          Node multinode-528433-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m39s (x2 over 7m39s)  kubelet          Node multinode-528433-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m19s                  kubelet          Node multinode-528433-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  63s (x2 over 63s)      kubelet          Node multinode-528433-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    63s (x2 over 63s)      kubelet          Node multinode-528433-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     63s (x2 over 63s)      kubelet          Node multinode-528433-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  63s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           58s                    node-controller  Node multinode-528433-m02 event: Registered Node multinode-528433-m02 in Controller
	  Normal  NodeReady                43s                    kubelet          Node multinode-528433-m02 status is now: NodeReady
	
	
	Name:               multinode-528433-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-528433-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411
	                    minikube.k8s.io/name=multinode-528433
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T18_39_10_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 18:39:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-528433-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 18:39:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 18:39:30 +0000   Mon, 19 Aug 2024 18:39:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 18:39:30 +0000   Mon, 19 Aug 2024 18:39:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 18:39:30 +0000   Mon, 19 Aug 2024 18:39:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 18:39:30 +0000   Mon, 19 Aug 2024 18:39:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.113
	  Hostname:    multinode-528433-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 362e10829ef04868961d41f479c29b79
	  System UUID:                362e1082-9ef0-4868-961d-41f479c29b79
	  Boot ID:                    20eda2b7-6915-4357-bf5b-2ec82d98c9bf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-xc2kd       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m42s
	  kube-system                 kube-proxy-m4pn8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From           Message
	  ----    ------                   ----                   ----           -------
	  Normal  Starting                 5m48s                  kube-proxy     
	  Normal  Starting                 6m38s                  kube-proxy     
	  Normal  Starting                 18s                    kube-proxy     
	  Normal  NodeHasSufficientMemory  6m42s (x2 over 6m42s)  kubelet        Node multinode-528433-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m42s (x2 over 6m42s)  kubelet        Node multinode-528433-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m42s (x2 over 6m42s)  kubelet        Node multinode-528433-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m42s                  kubelet        Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m22s                  kubelet        Node multinode-528433-m03 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    5m53s (x2 over 5m53s)  kubelet        Node multinode-528433-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m53s (x2 over 5m53s)  kubelet        Node multinode-528433-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m53s                  kubelet        Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m53s (x2 over 5m53s)  kubelet        Node multinode-528433-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m33s                  kubelet        Node multinode-528433-m03 status is now: NodeReady
	  Normal  CIDRAssignmentFailed     23s                    cidrAllocator  Node multinode-528433-m03 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  23s (x2 over 23s)      kubelet        Node multinode-528433-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x2 over 23s)      kubelet        Node multinode-528433-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x2 over 23s)      kubelet        Node multinode-528433-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                    kubelet        Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet        Node multinode-528433-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.056804] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.185782] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.130416] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.267325] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +4.001571] systemd-fstab-generator[759]: Ignoring "noauto" option for root device
	[  +4.088576] systemd-fstab-generator[890]: Ignoring "noauto" option for root device
	[  +0.058151] kauditd_printk_skb: 158 callbacks suppressed
	[Aug19 18:31] systemd-fstab-generator[1226]: Ignoring "noauto" option for root device
	[  +0.090579] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.616981] systemd-fstab-generator[1326]: Ignoring "noauto" option for root device
	[  +1.080821] kauditd_printk_skb: 43 callbacks suppressed
	[ +15.760285] kauditd_printk_skb: 38 callbacks suppressed
	[Aug19 18:32] kauditd_printk_skb: 12 callbacks suppressed
	[Aug19 18:37] systemd-fstab-generator[2658]: Ignoring "noauto" option for root device
	[  +0.140159] systemd-fstab-generator[2670]: Ignoring "noauto" option for root device
	[  +0.168582] systemd-fstab-generator[2684]: Ignoring "noauto" option for root device
	[  +0.132434] systemd-fstab-generator[2696]: Ignoring "noauto" option for root device
	[  +0.277811] systemd-fstab-generator[2724]: Ignoring "noauto" option for root device
	[  +7.376340] systemd-fstab-generator[2820]: Ignoring "noauto" option for root device
	[  +0.086199] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.705913] systemd-fstab-generator[2944]: Ignoring "noauto" option for root device
	[  +5.741353] kauditd_printk_skb: 74 callbacks suppressed
	[Aug19 18:38] systemd-fstab-generator[3781]: Ignoring "noauto" option for root device
	[  +0.117847] kauditd_printk_skb: 34 callbacks suppressed
	[ +21.190155] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [580fb4c199750cc4e95fcf711e440dc76ff14f3b53d8a6997f621ca5b7bb4518] <==
	{"level":"info","ts":"2024-08-19T18:37:49.022877Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e34fba8f5739efe8 switched to configuration voters=(16379515494576287720)"}
	{"level":"info","ts":"2024-08-19T18:37:49.022951Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f729467791c9db0d","local-member-id":"e34fba8f5739efe8","added-peer-id":"e34fba8f5739efe8","added-peer-peer-urls":["https://192.168.39.168:2380"]}
	{"level":"info","ts":"2024-08-19T18:37:49.023064Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f729467791c9db0d","local-member-id":"e34fba8f5739efe8","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T18:37:49.023110Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T18:37:49.030626Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-19T18:37:49.032464Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"e34fba8f5739efe8","initial-advertise-peer-urls":["https://192.168.39.168:2380"],"listen-peer-urls":["https://192.168.39.168:2380"],"advertise-client-urls":["https://192.168.39.168:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.168:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-19T18:37:49.034494Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.168:2380"}
	{"level":"info","ts":"2024-08-19T18:37:49.040294Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.168:2380"}
	{"level":"info","ts":"2024-08-19T18:37:49.034283Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-19T18:37:50.747752Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e34fba8f5739efe8 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-19T18:37:50.747883Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e34fba8f5739efe8 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-19T18:37:50.747926Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e34fba8f5739efe8 received MsgPreVoteResp from e34fba8f5739efe8 at term 2"}
	{"level":"info","ts":"2024-08-19T18:37:50.747962Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e34fba8f5739efe8 became candidate at term 3"}
	{"level":"info","ts":"2024-08-19T18:37:50.747987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e34fba8f5739efe8 received MsgVoteResp from e34fba8f5739efe8 at term 3"}
	{"level":"info","ts":"2024-08-19T18:37:50.748015Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e34fba8f5739efe8 became leader at term 3"}
	{"level":"info","ts":"2024-08-19T18:37:50.748040Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e34fba8f5739efe8 elected leader e34fba8f5739efe8 at term 3"}
	{"level":"info","ts":"2024-08-19T18:37:50.753384Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"e34fba8f5739efe8","local-member-attributes":"{Name:multinode-528433 ClientURLs:[https://192.168.39.168:2379]}","request-path":"/0/members/e34fba8f5739efe8/attributes","cluster-id":"f729467791c9db0d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-19T18:37:50.753718Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T18:37:50.753756Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T18:37:50.753799Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T18:37:50.753871Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T18:37:50.755039Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T18:37:50.755176Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T18:37:50.756060Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T18:37:50.756069Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.168:2379"}
	
	
	==> etcd [8f8613599f748b0dac210582c28b8864da1a0b7e328a1299edffaeebf943a44a] <==
	{"level":"info","ts":"2024-08-19T18:31:00.534680Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T18:31:00.534711Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T18:31:00.535368Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T18:31:00.536097Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.168:2379"}
	{"level":"info","ts":"2024-08-19T18:31:00.536333Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f729467791c9db0d","local-member-id":"e34fba8f5739efe8","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T18:31:00.536466Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T18:31:00.536504Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T18:31:00.536856Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T18:31:00.537647Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T18:31:54.507523Z","caller":"traceutil/trace.go:171","msg":"trace[340798663] transaction","detail":"{read_only:false; response_revision:444; number_of_response:1; }","duration":"226.650217ms","start":"2024-08-19T18:31:54.280843Z","end":"2024-08-19T18:31:54.507494Z","steps":["trace[340798663] 'process raft request'  (duration: 225.520166ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T18:32:51.542001Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.916938ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17287227062303532188 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-528433-m03.17ed34e086fc271f\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-528433-m03.17ed34e086fc271f\" value_size:646 lease:8063855025448756070 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-08-19T18:32:51.542398Z","caller":"traceutil/trace.go:171","msg":"trace[393461625] linearizableReadLoop","detail":"{readStateIndex:614; appliedIndex:613; }","duration":"143.563732ms","start":"2024-08-19T18:32:51.398802Z","end":"2024-08-19T18:32:51.542366Z","steps":["trace[393461625] 'read index received'  (duration: 21.701µs)","trace[393461625] 'applied index is now lower than readState.Index'  (duration: 143.541099ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T18:32:51.542446Z","caller":"traceutil/trace.go:171","msg":"trace[1168984684] transaction","detail":"{read_only:false; response_revision:579; number_of_response:1; }","duration":"228.794162ms","start":"2024-08-19T18:32:51.313596Z","end":"2024-08-19T18:32:51.542391Z","steps":["trace[1168984684] 'process raft request'  (duration: 82.790504ms)","trace[1168984684] 'compare'  (duration: 144.763552ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T18:32:51.542589Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.777221ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-528433-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T18:32:51.542642Z","caller":"traceutil/trace.go:171","msg":"trace[1152645336] range","detail":"{range_begin:/registry/minions/multinode-528433-m03; range_end:; response_count:0; response_revision:579; }","duration":"143.83216ms","start":"2024-08-19T18:32:51.398798Z","end":"2024-08-19T18:32:51.542630Z","steps":["trace[1152645336] 'agreement among raft nodes before linearized reading'  (duration: 143.692307ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T18:36:06.216837Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-19T18:36:06.217005Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-528433","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.168:2380"],"advertise-client-urls":["https://192.168.39.168:2379"]}
	{"level":"warn","ts":"2024-08-19T18:36:06.217170Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.168:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T18:36:06.217231Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.168:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T18:36:06.217482Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T18:36:06.217557Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-19T18:36:06.315480Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"e34fba8f5739efe8","current-leader-member-id":"e34fba8f5739efe8"}
	{"level":"info","ts":"2024-08-19T18:36:06.318420Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.168:2380"}
	{"level":"info","ts":"2024-08-19T18:36:06.318733Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.168:2380"}
	{"level":"info","ts":"2024-08-19T18:36:06.318760Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-528433","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.168:2380"],"advertise-client-urls":["https://192.168.39.168:2379"]}
	
	
	==> kernel <==
	 18:39:33 up 9 min,  0 users,  load average: 0.70, 0.33, 0.16
	Linux multinode-528433 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [057d837bfdf9b2c340d252182c5f52f95286a592b06ccc5f204badee2872440e] <==
	I0819 18:35:25.233417       1 main.go:322] Node multinode-528433-m03 has CIDR [10.244.3.0/24] 
	I0819 18:35:35.232814       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0819 18:35:35.232853       1 main.go:299] handling current node
	I0819 18:35:35.232875       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0819 18:35:35.232882       1 main.go:322] Node multinode-528433-m02 has CIDR [10.244.1.0/24] 
	I0819 18:35:35.233052       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0819 18:35:35.233091       1 main.go:322] Node multinode-528433-m03 has CIDR [10.244.3.0/24] 
	I0819 18:35:45.227209       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0819 18:35:45.227382       1 main.go:322] Node multinode-528433-m02 has CIDR [10.244.1.0/24] 
	I0819 18:35:45.227547       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0819 18:35:45.227576       1 main.go:322] Node multinode-528433-m03 has CIDR [10.244.3.0/24] 
	I0819 18:35:45.227639       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0819 18:35:45.227657       1 main.go:299] handling current node
	I0819 18:35:55.229537       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0819 18:35:55.229840       1 main.go:299] handling current node
	I0819 18:35:55.229882       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0819 18:35:55.229910       1 main.go:322] Node multinode-528433-m02 has CIDR [10.244.1.0/24] 
	I0819 18:35:55.230167       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0819 18:35:55.230217       1 main.go:322] Node multinode-528433-m03 has CIDR [10.244.3.0/24] 
	I0819 18:36:05.227048       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0819 18:36:05.227405       1 main.go:299] handling current node
	I0819 18:36:05.227499       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0819 18:36:05.227523       1 main.go:322] Node multinode-528433-m02 has CIDR [10.244.1.0/24] 
	I0819 18:36:05.227821       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0819 18:36:05.228197       1 main.go:322] Node multinode-528433-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [8f0cc386f169fbef2abbfeb9de505aff3998aa7d54b7f3eee2b29d3c03dec1da] <==
	I0819 18:38:44.521649       1 main.go:322] Node multinode-528433-m03 has CIDR [10.244.3.0/24] 
	I0819 18:38:54.520146       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0819 18:38:54.520196       1 main.go:299] handling current node
	I0819 18:38:54.520214       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0819 18:38:54.520223       1 main.go:322] Node multinode-528433-m02 has CIDR [10.244.1.0/24] 
	I0819 18:38:54.520444       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0819 18:38:54.520482       1 main.go:322] Node multinode-528433-m03 has CIDR [10.244.3.0/24] 
	I0819 18:39:04.521521       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0819 18:39:04.521584       1 main.go:322] Node multinode-528433-m02 has CIDR [10.244.1.0/24] 
	I0819 18:39:04.521743       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0819 18:39:04.521826       1 main.go:322] Node multinode-528433-m03 has CIDR [10.244.3.0/24] 
	I0819 18:39:04.521941       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0819 18:39:04.521977       1 main.go:299] handling current node
	I0819 18:39:14.521731       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0819 18:39:14.521827       1 main.go:299] handling current node
	I0819 18:39:14.521854       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0819 18:39:14.521872       1 main.go:322] Node multinode-528433-m02 has CIDR [10.244.1.0/24] 
	I0819 18:39:14.522028       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0819 18:39:14.522052       1 main.go:322] Node multinode-528433-m03 has CIDR [10.244.2.0/24] 
	I0819 18:39:24.521552       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0819 18:39:24.521660       1 main.go:322] Node multinode-528433-m02 has CIDR [10.244.1.0/24] 
	I0819 18:39:24.521796       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0819 18:39:24.521819       1 main.go:322] Node multinode-528433-m03 has CIDR [10.244.2.0/24] 
	I0819 18:39:24.521876       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0819 18:39:24.521894       1 main.go:299] handling current node
	
	
	==> kube-apiserver [a81508e447df1c7bfda53e67d1b4030870a749ba659d35497fe4adecbcf41a9e] <==
	I0819 18:37:52.092315       1 policy_source.go:224] refreshing policies
	I0819 18:37:52.094611       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0819 18:37:52.094663       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0819 18:37:52.102597       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0819 18:37:52.115479       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0819 18:37:52.116645       1 shared_informer.go:320] Caches are synced for configmaps
	I0819 18:37:52.116796       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0819 18:37:52.117049       1 aggregator.go:171] initial CRD sync complete...
	I0819 18:37:52.117081       1 autoregister_controller.go:144] Starting autoregister controller
	I0819 18:37:52.117087       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0819 18:37:52.117098       1 cache.go:39] Caches are synced for autoregister controller
	I0819 18:37:52.124432       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0819 18:37:52.151542       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0819 18:37:52.155877       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0819 18:37:52.170545       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0819 18:37:52.200010       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0819 18:37:52.200052       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0819 18:37:53.003454       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0819 18:37:54.368065       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0819 18:37:54.495945       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0819 18:37:54.513785       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0819 18:37:54.610211       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0819 18:37:54.618781       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0819 18:37:55.627472       1 controller.go:615] quota admission added evaluator for: endpoints
	I0819 18:37:55.730939       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [c65c30f3ad8d84072ebc470eb8e6aa5f850402138c0c5b057a93852df19a0f24] <==
	I0819 18:31:09.837872       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0819 18:32:20.866555       1 conn.go:339] Error on socket receive: read tcp 192.168.39.168:8443->192.168.39.1:50128: use of closed network connection
	E0819 18:32:21.049628       1 conn.go:339] Error on socket receive: read tcp 192.168.39.168:8443->192.168.39.1:50148: use of closed network connection
	E0819 18:32:21.231814       1 conn.go:339] Error on socket receive: read tcp 192.168.39.168:8443->192.168.39.1:50162: use of closed network connection
	E0819 18:32:21.417330       1 conn.go:339] Error on socket receive: read tcp 192.168.39.168:8443->192.168.39.1:50178: use of closed network connection
	E0819 18:32:21.592602       1 conn.go:339] Error on socket receive: read tcp 192.168.39.168:8443->192.168.39.1:50184: use of closed network connection
	E0819 18:32:21.763020       1 conn.go:339] Error on socket receive: read tcp 192.168.39.168:8443->192.168.39.1:50200: use of closed network connection
	E0819 18:32:22.043931       1 conn.go:339] Error on socket receive: read tcp 192.168.39.168:8443->192.168.39.1:50220: use of closed network connection
	E0819 18:32:22.220122       1 conn.go:339] Error on socket receive: read tcp 192.168.39.168:8443->192.168.39.1:50228: use of closed network connection
	E0819 18:32:22.382643       1 conn.go:339] Error on socket receive: read tcp 192.168.39.168:8443->192.168.39.1:50238: use of closed network connection
	E0819 18:32:22.548609       1 conn.go:339] Error on socket receive: read tcp 192.168.39.168:8443->192.168.39.1:50252: use of closed network connection
	I0819 18:36:06.219982       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0819 18:36:06.228900       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:36:06.233592       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:36:06.233714       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:36:06.233799       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:36:06.234451       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:36:06.235952       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:36:06.236075       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:36:06.236173       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:36:06.236533       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:36:06.236647       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:36:06.236733       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:36:06.236906       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:36:06.240654       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [a0f607c724637f67917051010258f5d5d9d65d9a1966825b84ccb41087c55584] <==
	I0819 18:38:50.599635       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m02"
	I0819 18:38:53.818109       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="7.112299ms"
	I0819 18:38:53.819317       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="109.391µs"
	I0819 18:39:01.003866       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m02"
	I0819 18:39:09.406739       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	I0819 18:39:09.439330       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	I0819 18:39:09.658009       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-528433-m02"
	I0819 18:39:09.658134       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	I0819 18:39:10.591146       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-528433-m03\" does not exist"
	I0819 18:39:10.591750       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-528433-m02"
	I0819 18:39:10.615917       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-528433-m03" podCIDRs=["10.244.2.0/24"]
	I0819 18:39:10.617011       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	E0819 18:39:10.627357       1 range_allocator.go:427] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"multinode-528433-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-528433-m03" podCIDRs=["10.244.3.0/24"]
	E0819 18:39:10.627563       1 range_allocator.go:433] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-528433-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-528433-m03"
	E0819 18:39:10.627707       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'multinode-528433-m03': failed to patch node CIDR: Node \"multinode-528433-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0819 18:39:10.627841       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	I0819 18:39:10.633018       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	I0819 18:39:10.677179       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	I0819 18:39:10.683899       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	I0819 18:39:11.013058       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	I0819 18:39:20.927689       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	I0819 18:39:30.486892       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-528433-m02"
	I0819 18:39:30.486927       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	I0819 18:39:30.500660       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	I0819 18:39:30.618101       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	
	
	==> kube-controller-manager [e18ee04a7496876e00d3bf4eea0c2cc1bea22033e6265d9eb65c8556c18dbecc] <==
	I0819 18:33:39.804148       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-528433-m02"
	I0819 18:33:39.804370       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	I0819 18:33:40.892579       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-528433-m02"
	I0819 18:33:40.895353       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-528433-m03\" does not exist"
	I0819 18:33:40.905809       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-528433-m03" podCIDRs=["10.244.3.0/24"]
	I0819 18:33:40.905854       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	I0819 18:33:40.905876       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	I0819 18:33:40.913450       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	I0819 18:33:41.317937       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	I0819 18:33:41.679166       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	I0819 18:33:44.101043       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	I0819 18:33:51.155484       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	I0819 18:34:00.693444       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-528433-m03"
	I0819 18:34:00.693858       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	I0819 18:34:00.706602       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	I0819 18:34:04.106808       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	I0819 18:34:39.122142       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m02"
	I0819 18:34:39.122154       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-528433-m03"
	I0819 18:34:39.137007       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m02"
	I0819 18:34:39.177568       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="8.210783ms"
	I0819 18:34:39.178035       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="271.767µs"
	I0819 18:34:44.177894       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	I0819 18:34:44.195886       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	I0819 18:34:44.204180       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m02"
	I0819 18:34:54.286076       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	
	
	==> kube-proxy [a21bb460a42c175e6fcfe880334b3416a31c862ad41f4046dde00c3a50bf99ac] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 18:37:53.795720       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 18:37:53.809336       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.168"]
	E0819 18:37:53.809420       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 18:37:53.867853       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 18:37:53.867908       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 18:37:53.867938       1 server_linux.go:169] "Using iptables Proxier"
	I0819 18:37:53.870508       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 18:37:53.871520       1 server.go:483] "Version info" version="v1.31.0"
	I0819 18:37:53.871594       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 18:37:53.874937       1 config.go:197] "Starting service config controller"
	I0819 18:37:53.874987       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 18:37:53.875022       1 config.go:104] "Starting endpoint slice config controller"
	I0819 18:37:53.875026       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 18:37:53.875634       1 config.go:326] "Starting node config controller"
	I0819 18:37:53.875661       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 18:37:53.975968       1 shared_informer.go:320] Caches are synced for node config
	I0819 18:37:53.976062       1 shared_informer.go:320] Caches are synced for service config
	I0819 18:37:53.976103       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [a5d6d7978005d7389f31edd035fd6fa05cd87e4a3901ca69aa1b2f9f73576240] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 18:31:11.981056       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 18:31:11.999700       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.168"]
	E0819 18:31:11.999828       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 18:31:12.045508       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 18:31:12.045603       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 18:31:12.045649       1 server_linux.go:169] "Using iptables Proxier"
	I0819 18:31:12.048560       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 18:31:12.048935       1 server.go:483] "Version info" version="v1.31.0"
	I0819 18:31:12.048978       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 18:31:12.050571       1 config.go:197] "Starting service config controller"
	I0819 18:31:12.050632       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 18:31:12.050667       1 config.go:104] "Starting endpoint slice config controller"
	I0819 18:31:12.050683       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 18:31:12.051169       1 config.go:326] "Starting node config controller"
	I0819 18:31:12.051207       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 18:31:12.151303       1 shared_informer.go:320] Caches are synced for node config
	I0819 18:31:12.151352       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 18:31:12.151326       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [422d4bc5ba6865ba6386db8aac55e0668aac92c409da53ac44c1d7750424fec7] <==
	W0819 18:37:52.141428       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 18:37:52.141810       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:37:52.141489       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 18:37:52.141855       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:37:52.141536       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0819 18:37:52.141870       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:37:52.141594       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 18:37:52.141904       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:37:52.141644       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 18:37:52.141920       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:37:52.141992       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 18:37:52.142027       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:37:52.142085       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 18:37:52.142114       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 18:37:52.142161       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 18:37:52.142190       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 18:37:52.142338       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 18:37:52.142371       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 18:37:52.142435       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0819 18:37:52.142464       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:37:52.142508       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0819 18:37:52.142536       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 18:37:52.142646       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 18:37:52.142736       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0819 18:37:53.596865       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [7c29a242039f25f811c96462be446c25a6154d911bbeffded18a5c5d7b8f8ea4] <==
	E0819 18:31:02.172401       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 18:31:02.170462       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 18:31:02.172518       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 18:31:03.042988       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 18:31:03.043042       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:31:03.066752       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 18:31:03.066814       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0819 18:31:03.116325       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0819 18:31:03.116359       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:31:03.129828       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 18:31:03.129959       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 18:31:03.134948       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 18:31:03.135002       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 18:31:03.237227       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0819 18:31:03.237393       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:31:03.372054       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 18:31:03.372092       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:31:03.438285       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 18:31:03.438406       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:31:03.649531       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 18:31:03.649635       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0819 18:31:06.536851       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 18:36:06.211826       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0819 18:36:06.212066       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0819 18:36:06.212990       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 19 18:37:57 multinode-528433 kubelet[2951]: E0819 18:37:57.821896    2951 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092677821408892,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:37:57 multinode-528433 kubelet[2951]: E0819 18:37:57.822194    2951 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092677821408892,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:38:07 multinode-528433 kubelet[2951]: E0819 18:38:07.824684    2951 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092687824005530,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:38:07 multinode-528433 kubelet[2951]: E0819 18:38:07.825020    2951 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092687824005530,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:38:17 multinode-528433 kubelet[2951]: E0819 18:38:17.826673    2951 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092697826132051,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:38:17 multinode-528433 kubelet[2951]: E0819 18:38:17.826969    2951 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092697826132051,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:38:27 multinode-528433 kubelet[2951]: E0819 18:38:27.828526    2951 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092707828088309,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:38:27 multinode-528433 kubelet[2951]: E0819 18:38:27.829002    2951 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092707828088309,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:38:37 multinode-528433 kubelet[2951]: E0819 18:38:37.831909    2951 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092717831453724,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:38:37 multinode-528433 kubelet[2951]: E0819 18:38:37.831971    2951 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092717831453724,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:38:47 multinode-528433 kubelet[2951]: E0819 18:38:47.795607    2951 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 18:38:47 multinode-528433 kubelet[2951]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 18:38:47 multinode-528433 kubelet[2951]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 18:38:47 multinode-528433 kubelet[2951]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 18:38:47 multinode-528433 kubelet[2951]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 18:38:47 multinode-528433 kubelet[2951]: E0819 18:38:47.834090    2951 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092727833682086,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:38:47 multinode-528433 kubelet[2951]: E0819 18:38:47.834133    2951 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092727833682086,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:38:57 multinode-528433 kubelet[2951]: E0819 18:38:57.838444    2951 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092737836096957,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:38:57 multinode-528433 kubelet[2951]: E0819 18:38:57.838506    2951 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092737836096957,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:39:07 multinode-528433 kubelet[2951]: E0819 18:39:07.840963    2951 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092747840585891,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:39:07 multinode-528433 kubelet[2951]: E0819 18:39:07.841220    2951 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092747840585891,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:39:17 multinode-528433 kubelet[2951]: E0819 18:39:17.843915    2951 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092757843571294,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:39:17 multinode-528433 kubelet[2951]: E0819 18:39:17.843969    2951 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092757843571294,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:39:27 multinode-528433 kubelet[2951]: E0819 18:39:27.848914    2951 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092767847541663,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:39:27 multinode-528433 kubelet[2951]: E0819 18:39:27.849673    2951 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092767847541663,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 18:39:32.947518  410491 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19468-372744/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-528433 -n multinode-528433
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-528433 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (331.63s)
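
The "bufio.Scanner: token too long" error in the stderr above is Go's bufio.Scanner hitting its default 64 KiB per-token limit (bufio.MaxScanTokenSize) while reading lastStart.txt, at least one of whose lines exceeds that limit. A minimal, self-contained sketch of reading such a file with an enlarged buffer (not minikube's own code; the 1 MiB cap and the hard-coded path are illustrative assumptions taken from the error message):

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// Path taken from the error message above; adjust for your environment.
	f, err := os.Open("/home/jenkins/minikube-integration/19468-372744/.minikube/logs/lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Raise the per-token limit from the default 64 KiB to 1 MiB so very long
	// log lines no longer abort the scan with "token too long".
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan error:", err)
	}
}
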

                                                
                                    
TestMultiNode/serial/StopMultiNode (141.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-528433 stop
E0819 18:40:24.366116  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/functional-499773/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-528433 stop: exit status 82 (2m0.473435641s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-528433-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-528433 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-528433 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-528433 status: exit status 3 (18.765719781s)

                                                
                                                
-- stdout --
	multinode-528433
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-528433-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 18:41:56.484093  411150 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.107:22: connect: no route to host
	E0819 18:41:56.484139  411150 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.107:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-528433 status" : exit status 3
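
Exit status 3 here comes from the status check being unable to reach node multinode-528433-m02 over SSH ("dial tcp 192.168.39.107:22: connect: no route to host"). A minimal stand-alone sketch of that reachability probe (a hypothetical helper, not minikube's implementation; the address and timeout are taken from or assumed for this report):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// SSH endpoint of multinode-528433-m02 as reported in the stderr above.
	addr := "192.168.39.107:22"
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		// On the failed node this reproduces "connect: no route to host".
		fmt.Println("unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("ssh port reachable on", addr)
}

Against a healthy node the probe reports the port as reachable; against the half-stopped m02 it fails the same way the status command does, which lines up with the node being reported above as "host: Error" and "kubelet: Nonexistent".
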
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-528433 -n multinode-528433
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-528433 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-528433 logs -n 25: (1.494716585s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-528433 ssh -n                                                                 | multinode-528433 | jenkins | v1.33.1 | 19 Aug 24 18:33 UTC | 19 Aug 24 18:33 UTC |
	|         | multinode-528433-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-528433 cp multinode-528433-m02:/home/docker/cp-test.txt                       | multinode-528433 | jenkins | v1.33.1 | 19 Aug 24 18:33 UTC | 19 Aug 24 18:33 UTC |
	|         | multinode-528433:/home/docker/cp-test_multinode-528433-m02_multinode-528433.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-528433 ssh -n                                                                 | multinode-528433 | jenkins | v1.33.1 | 19 Aug 24 18:33 UTC | 19 Aug 24 18:33 UTC |
	|         | multinode-528433-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-528433 ssh -n multinode-528433 sudo cat                                       | multinode-528433 | jenkins | v1.33.1 | 19 Aug 24 18:33 UTC | 19 Aug 24 18:33 UTC |
	|         | /home/docker/cp-test_multinode-528433-m02_multinode-528433.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-528433 cp multinode-528433-m02:/home/docker/cp-test.txt                       | multinode-528433 | jenkins | v1.33.1 | 19 Aug 24 18:33 UTC | 19 Aug 24 18:33 UTC |
	|         | multinode-528433-m03:/home/docker/cp-test_multinode-528433-m02_multinode-528433-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-528433 ssh -n                                                                 | multinode-528433 | jenkins | v1.33.1 | 19 Aug 24 18:33 UTC | 19 Aug 24 18:33 UTC |
	|         | multinode-528433-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-528433 ssh -n multinode-528433-m03 sudo cat                                   | multinode-528433 | jenkins | v1.33.1 | 19 Aug 24 18:33 UTC | 19 Aug 24 18:33 UTC |
	|         | /home/docker/cp-test_multinode-528433-m02_multinode-528433-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-528433 cp testdata/cp-test.txt                                                | multinode-528433 | jenkins | v1.33.1 | 19 Aug 24 18:33 UTC | 19 Aug 24 18:33 UTC |
	|         | multinode-528433-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-528433 ssh -n                                                                 | multinode-528433 | jenkins | v1.33.1 | 19 Aug 24 18:33 UTC | 19 Aug 24 18:33 UTC |
	|         | multinode-528433-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-528433 cp multinode-528433-m03:/home/docker/cp-test.txt                       | multinode-528433 | jenkins | v1.33.1 | 19 Aug 24 18:33 UTC | 19 Aug 24 18:33 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1208495116/001/cp-test_multinode-528433-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-528433 ssh -n                                                                 | multinode-528433 | jenkins | v1.33.1 | 19 Aug 24 18:33 UTC | 19 Aug 24 18:33 UTC |
	|         | multinode-528433-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-528433 cp multinode-528433-m03:/home/docker/cp-test.txt                       | multinode-528433 | jenkins | v1.33.1 | 19 Aug 24 18:33 UTC | 19 Aug 24 18:33 UTC |
	|         | multinode-528433:/home/docker/cp-test_multinode-528433-m03_multinode-528433.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-528433 ssh -n                                                                 | multinode-528433 | jenkins | v1.33.1 | 19 Aug 24 18:33 UTC | 19 Aug 24 18:33 UTC |
	|         | multinode-528433-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-528433 ssh -n multinode-528433 sudo cat                                       | multinode-528433 | jenkins | v1.33.1 | 19 Aug 24 18:33 UTC | 19 Aug 24 18:33 UTC |
	|         | /home/docker/cp-test_multinode-528433-m03_multinode-528433.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-528433 cp multinode-528433-m03:/home/docker/cp-test.txt                       | multinode-528433 | jenkins | v1.33.1 | 19 Aug 24 18:33 UTC | 19 Aug 24 18:33 UTC |
	|         | multinode-528433-m02:/home/docker/cp-test_multinode-528433-m03_multinode-528433-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-528433 ssh -n                                                                 | multinode-528433 | jenkins | v1.33.1 | 19 Aug 24 18:33 UTC | 19 Aug 24 18:33 UTC |
	|         | multinode-528433-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-528433 ssh -n multinode-528433-m02 sudo cat                                   | multinode-528433 | jenkins | v1.33.1 | 19 Aug 24 18:33 UTC | 19 Aug 24 18:33 UTC |
	|         | /home/docker/cp-test_multinode-528433-m03_multinode-528433-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-528433 node stop m03                                                          | multinode-528433 | jenkins | v1.33.1 | 19 Aug 24 18:33 UTC | 19 Aug 24 18:33 UTC |
	| node    | multinode-528433 node start                                                             | multinode-528433 | jenkins | v1.33.1 | 19 Aug 24 18:33 UTC | 19 Aug 24 18:34 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-528433                                                                | multinode-528433 | jenkins | v1.33.1 | 19 Aug 24 18:34 UTC |                     |
	| stop    | -p multinode-528433                                                                     | multinode-528433 | jenkins | v1.33.1 | 19 Aug 24 18:34 UTC |                     |
	| start   | -p multinode-528433                                                                     | multinode-528433 | jenkins | v1.33.1 | 19 Aug 24 18:36 UTC | 19 Aug 24 18:39 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-528433                                                                | multinode-528433 | jenkins | v1.33.1 | 19 Aug 24 18:39 UTC |                     |
	| node    | multinode-528433 node delete                                                            | multinode-528433 | jenkins | v1.33.1 | 19 Aug 24 18:39 UTC | 19 Aug 24 18:39 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-528433 stop                                                                   | multinode-528433 | jenkins | v1.33.1 | 19 Aug 24 18:39 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 18:36:05
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 18:36:05.241418  409340 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:36:05.241693  409340 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:36:05.241704  409340 out.go:358] Setting ErrFile to fd 2...
	I0819 18:36:05.241708  409340 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:36:05.241899  409340 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-372744/.minikube/bin
	I0819 18:36:05.242459  409340 out.go:352] Setting JSON to false
	I0819 18:36:05.243480  409340 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8308,"bootTime":1724084257,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 18:36:05.243543  409340 start.go:139] virtualization: kvm guest
	I0819 18:36:05.245989  409340 out.go:177] * [multinode-528433] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 18:36:05.247617  409340 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 18:36:05.247644  409340 notify.go:220] Checking for updates...
	I0819 18:36:05.250302  409340 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 18:36:05.251820  409340 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 18:36:05.253233  409340 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 18:36:05.254530  409340 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 18:36:05.255854  409340 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 18:36:05.257617  409340 config.go:182] Loaded profile config "multinode-528433": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:36:05.257697  409340 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 18:36:05.258125  409340 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:36:05.258169  409340 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:36:05.273549  409340 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41741
	I0819 18:36:05.274024  409340 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:36:05.274603  409340 main.go:141] libmachine: Using API Version  1
	I0819 18:36:05.274624  409340 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:36:05.274995  409340 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:36:05.275201  409340 main.go:141] libmachine: (multinode-528433) Calling .DriverName
	I0819 18:36:05.310759  409340 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 18:36:05.312080  409340 start.go:297] selected driver: kvm2
	I0819 18:36:05.312102  409340 start.go:901] validating driver "kvm2" against &{Name:multinode-528433 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-528433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.107 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.113 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:36:05.312261  409340 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 18:36:05.312563  409340 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:36:05.312634  409340 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19468-372744/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 18:36:05.327995  409340 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 18:36:05.328690  409340 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 18:36:05.328766  409340 cni.go:84] Creating CNI manager for ""
	I0819 18:36:05.328778  409340 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0819 18:36:05.328841  409340 start.go:340] cluster config:
	{Name:multinode-528433 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-528433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.107 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.113 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:36:05.328980  409340 iso.go:125] acquiring lock: {Name:mk4c0ac1c3202b1a296739df622960e7a0bd8566 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:36:05.330798  409340 out.go:177] * Starting "multinode-528433" primary control-plane node in "multinode-528433" cluster
	I0819 18:36:05.332102  409340 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 18:36:05.332144  409340 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 18:36:05.332162  409340 cache.go:56] Caching tarball of preloaded images
	I0819 18:36:05.332248  409340 preload.go:172] Found /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 18:36:05.332262  409340 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 18:36:05.332397  409340 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/multinode-528433/config.json ...
	I0819 18:36:05.332628  409340 start.go:360] acquireMachinesLock for multinode-528433: {Name:mk24ba67a747357e9ce40f1e460d2bb0bc59cc75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 18:36:05.332683  409340 start.go:364] duration metric: took 33.324µs to acquireMachinesLock for "multinode-528433"
	I0819 18:36:05.332704  409340 start.go:96] Skipping create...Using existing machine configuration
	I0819 18:36:05.332714  409340 fix.go:54] fixHost starting: 
	I0819 18:36:05.332980  409340 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:36:05.333030  409340 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:36:05.347703  409340 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36757
	I0819 18:36:05.348203  409340 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:36:05.348717  409340 main.go:141] libmachine: Using API Version  1
	I0819 18:36:05.348740  409340 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:36:05.349054  409340 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:36:05.349247  409340 main.go:141] libmachine: (multinode-528433) Calling .DriverName
	I0819 18:36:05.349414  409340 main.go:141] libmachine: (multinode-528433) Calling .GetState
	I0819 18:36:05.350843  409340 fix.go:112] recreateIfNeeded on multinode-528433: state=Running err=<nil>
	W0819 18:36:05.350865  409340 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 18:36:05.352601  409340 out.go:177] * Updating the running kvm2 "multinode-528433" VM ...
	I0819 18:36:05.353730  409340 machine.go:93] provisionDockerMachine start ...
	I0819 18:36:05.353747  409340 main.go:141] libmachine: (multinode-528433) Calling .DriverName
	I0819 18:36:05.353959  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHHostname
	I0819 18:36:05.356553  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:36:05.356948  409340 main.go:141] libmachine: (multinode-528433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:95:69", ip: ""} in network mk-multinode-528433: {Iface:virbr1 ExpiryTime:2024-08-19 19:30:36 +0000 UTC Type:0 Mac:52:54:00:78:95:69 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-528433 Clientid:01:52:54:00:78:95:69}
	I0819 18:36:05.356969  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined IP address 192.168.39.168 and MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:36:05.357144  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHPort
	I0819 18:36:05.357314  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHKeyPath
	I0819 18:36:05.357483  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHKeyPath
	I0819 18:36:05.357621  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHUsername
	I0819 18:36:05.357774  409340 main.go:141] libmachine: Using SSH client type: native
	I0819 18:36:05.357963  409340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0819 18:36:05.357981  409340 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 18:36:05.465212  409340 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-528433
	
	I0819 18:36:05.465249  409340 main.go:141] libmachine: (multinode-528433) Calling .GetMachineName
	I0819 18:36:05.465499  409340 buildroot.go:166] provisioning hostname "multinode-528433"
	I0819 18:36:05.465536  409340 main.go:141] libmachine: (multinode-528433) Calling .GetMachineName
	I0819 18:36:05.465716  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHHostname
	I0819 18:36:05.468392  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:36:05.468774  409340 main.go:141] libmachine: (multinode-528433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:95:69", ip: ""} in network mk-multinode-528433: {Iface:virbr1 ExpiryTime:2024-08-19 19:30:36 +0000 UTC Type:0 Mac:52:54:00:78:95:69 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-528433 Clientid:01:52:54:00:78:95:69}
	I0819 18:36:05.468817  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined IP address 192.168.39.168 and MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:36:05.468966  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHPort
	I0819 18:36:05.469141  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHKeyPath
	I0819 18:36:05.469316  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHKeyPath
	I0819 18:36:05.469474  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHUsername
	I0819 18:36:05.469683  409340 main.go:141] libmachine: Using SSH client type: native
	I0819 18:36:05.469851  409340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0819 18:36:05.469863  409340 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-528433 && echo "multinode-528433" | sudo tee /etc/hostname
	I0819 18:36:05.591349  409340 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-528433
	
	I0819 18:36:05.591382  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHHostname
	I0819 18:36:05.594036  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:36:05.594427  409340 main.go:141] libmachine: (multinode-528433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:95:69", ip: ""} in network mk-multinode-528433: {Iface:virbr1 ExpiryTime:2024-08-19 19:30:36 +0000 UTC Type:0 Mac:52:54:00:78:95:69 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-528433 Clientid:01:52:54:00:78:95:69}
	I0819 18:36:05.594455  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined IP address 192.168.39.168 and MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:36:05.594619  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHPort
	I0819 18:36:05.594811  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHKeyPath
	I0819 18:36:05.595006  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHKeyPath
	I0819 18:36:05.595173  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHUsername
	I0819 18:36:05.595343  409340 main.go:141] libmachine: Using SSH client type: native
	I0819 18:36:05.595561  409340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0819 18:36:05.595581  409340 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-528433' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-528433/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-528433' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 18:36:05.705410  409340 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 18:36:05.705443  409340 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19468-372744/.minikube CaCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19468-372744/.minikube}
	I0819 18:36:05.705472  409340 buildroot.go:174] setting up certificates
	I0819 18:36:05.705482  409340 provision.go:84] configureAuth start
	I0819 18:36:05.705500  409340 main.go:141] libmachine: (multinode-528433) Calling .GetMachineName
	I0819 18:36:05.705778  409340 main.go:141] libmachine: (multinode-528433) Calling .GetIP
	I0819 18:36:05.708341  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:36:05.708663  409340 main.go:141] libmachine: (multinode-528433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:95:69", ip: ""} in network mk-multinode-528433: {Iface:virbr1 ExpiryTime:2024-08-19 19:30:36 +0000 UTC Type:0 Mac:52:54:00:78:95:69 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-528433 Clientid:01:52:54:00:78:95:69}
	I0819 18:36:05.708687  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined IP address 192.168.39.168 and MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:36:05.708825  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHHostname
	I0819 18:36:05.711069  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:36:05.711415  409340 main.go:141] libmachine: (multinode-528433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:95:69", ip: ""} in network mk-multinode-528433: {Iface:virbr1 ExpiryTime:2024-08-19 19:30:36 +0000 UTC Type:0 Mac:52:54:00:78:95:69 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-528433 Clientid:01:52:54:00:78:95:69}
	I0819 18:36:05.711444  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined IP address 192.168.39.168 and MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:36:05.711568  409340 provision.go:143] copyHostCerts
	I0819 18:36:05.711601  409340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 18:36:05.711637  409340 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem, removing ...
	I0819 18:36:05.711659  409340 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 18:36:05.711756  409340 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem (1082 bytes)
	I0819 18:36:05.711887  409340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 18:36:05.711916  409340 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem, removing ...
	I0819 18:36:05.711925  409340 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 18:36:05.711978  409340 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem (1123 bytes)
	I0819 18:36:05.712069  409340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 18:36:05.712100  409340 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem, removing ...
	I0819 18:36:05.712108  409340 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 18:36:05.712144  409340 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem (1675 bytes)
	I0819 18:36:05.712235  409340 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem org=jenkins.multinode-528433 san=[127.0.0.1 192.168.39.168 localhost minikube multinode-528433]
	I0819 18:36:05.888467  409340 provision.go:177] copyRemoteCerts
	I0819 18:36:05.888541  409340 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 18:36:05.888569  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHHostname
	I0819 18:36:05.891196  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:36:05.891499  409340 main.go:141] libmachine: (multinode-528433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:95:69", ip: ""} in network mk-multinode-528433: {Iface:virbr1 ExpiryTime:2024-08-19 19:30:36 +0000 UTC Type:0 Mac:52:54:00:78:95:69 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-528433 Clientid:01:52:54:00:78:95:69}
	I0819 18:36:05.891542  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined IP address 192.168.39.168 and MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:36:05.891723  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHPort
	I0819 18:36:05.891942  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHKeyPath
	I0819 18:36:05.892101  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHUsername
	I0819 18:36:05.892248  409340 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/multinode-528433/id_rsa Username:docker}
	I0819 18:36:05.979819  409340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 18:36:05.979897  409340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 18:36:06.016416  409340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 18:36:06.016495  409340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0819 18:36:06.052421  409340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 18:36:06.052495  409340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 18:36:06.084002  409340 provision.go:87] duration metric: took 378.505809ms to configureAuth
	I0819 18:36:06.084029  409340 buildroot.go:189] setting minikube options for container-runtime
	I0819 18:36:06.084264  409340 config.go:182] Loaded profile config "multinode-528433": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:36:06.084341  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHHostname
	I0819 18:36:06.086967  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:36:06.087346  409340 main.go:141] libmachine: (multinode-528433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:95:69", ip: ""} in network mk-multinode-528433: {Iface:virbr1 ExpiryTime:2024-08-19 19:30:36 +0000 UTC Type:0 Mac:52:54:00:78:95:69 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-528433 Clientid:01:52:54:00:78:95:69}
	I0819 18:36:06.087367  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined IP address 192.168.39.168 and MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:36:06.087562  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHPort
	I0819 18:36:06.087797  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHKeyPath
	I0819 18:36:06.087948  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHKeyPath
	I0819 18:36:06.088081  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHUsername
	I0819 18:36:06.088228  409340 main.go:141] libmachine: Using SSH client type: native
	I0819 18:36:06.088431  409340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0819 18:36:06.088447  409340 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 18:37:36.975094  409340 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 18:37:36.975130  409340 machine.go:96] duration metric: took 1m31.621385972s to provisionDockerMachine
	I0819 18:37:36.975149  409340 start.go:293] postStartSetup for "multinode-528433" (driver="kvm2")
	I0819 18:37:36.975163  409340 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 18:37:36.975189  409340 main.go:141] libmachine: (multinode-528433) Calling .DriverName
	I0819 18:37:36.975616  409340 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 18:37:36.975648  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHHostname
	I0819 18:37:36.979355  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:37:36.979897  409340 main.go:141] libmachine: (multinode-528433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:95:69", ip: ""} in network mk-multinode-528433: {Iface:virbr1 ExpiryTime:2024-08-19 19:30:36 +0000 UTC Type:0 Mac:52:54:00:78:95:69 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-528433 Clientid:01:52:54:00:78:95:69}
	I0819 18:37:36.979935  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined IP address 192.168.39.168 and MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:37:36.980094  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHPort
	I0819 18:37:36.980300  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHKeyPath
	I0819 18:37:36.980517  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHUsername
	I0819 18:37:36.980680  409340 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/multinode-528433/id_rsa Username:docker}
	I0819 18:37:37.062741  409340 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 18:37:37.067178  409340 command_runner.go:130] > NAME=Buildroot
	I0819 18:37:37.067203  409340 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0819 18:37:37.067208  409340 command_runner.go:130] > ID=buildroot
	I0819 18:37:37.067213  409340 command_runner.go:130] > VERSION_ID=2023.02.9
	I0819 18:37:37.067218  409340 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0819 18:37:37.067286  409340 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 18:37:37.067312  409340 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/addons for local assets ...
	I0819 18:37:37.067394  409340 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/files for local assets ...
	I0819 18:37:37.067491  409340 filesync.go:149] local asset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> 3800092.pem in /etc/ssl/certs
	I0819 18:37:37.067504  409340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> /etc/ssl/certs/3800092.pem
	I0819 18:37:37.067642  409340 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 18:37:37.076788  409340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 18:37:37.100746  409340 start.go:296] duration metric: took 125.579857ms for postStartSetup
	I0819 18:37:37.100792  409340 fix.go:56] duration metric: took 1m31.768078659s for fixHost
	I0819 18:37:37.100815  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHHostname
	I0819 18:37:37.104040  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:37:37.104523  409340 main.go:141] libmachine: (multinode-528433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:95:69", ip: ""} in network mk-multinode-528433: {Iface:virbr1 ExpiryTime:2024-08-19 19:30:36 +0000 UTC Type:0 Mac:52:54:00:78:95:69 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-528433 Clientid:01:52:54:00:78:95:69}
	I0819 18:37:37.104558  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined IP address 192.168.39.168 and MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:37:37.104788  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHPort
	I0819 18:37:37.104975  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHKeyPath
	I0819 18:37:37.105152  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHKeyPath
	I0819 18:37:37.105286  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHUsername
	I0819 18:37:37.105478  409340 main.go:141] libmachine: Using SSH client type: native
	I0819 18:37:37.105657  409340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0819 18:37:37.105667  409340 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 18:37:37.208978  409340 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724092657.181349835
	
	I0819 18:37:37.209009  409340 fix.go:216] guest clock: 1724092657.181349835
	I0819 18:37:37.209020  409340 fix.go:229] Guest: 2024-08-19 18:37:37.181349835 +0000 UTC Remote: 2024-08-19 18:37:37.100796894 +0000 UTC m=+91.897693888 (delta=80.552941ms)
	I0819 18:37:37.209069  409340 fix.go:200] guest clock delta is within tolerance: 80.552941ms
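The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the skew when the delta is small. A minimal, self-contained Go sketch of that arithmetic, reusing the timestamp from the log; the tolerance constant below is an assumption, not minikube's actual threshold:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock converts `date +%s.%N` output (seconds.nanoseconds, 9-digit
    // fraction) into a time.Time, matching the value SSH'd back from the VM above.
    func parseGuestClock(out string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseGuestClock("1724092657.181349835") // value from the log above
    	if err != nil {
    		panic(err)
    	}
    	delta := time.Since(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	// Hypothetical tolerance; minikube's real threshold lives in fix.go.
    	const tolerance = time.Second
    	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
    }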
	I0819 18:37:37.209076  409340 start.go:83] releasing machines lock for "multinode-528433", held for 1m31.876380758s
	I0819 18:37:37.209102  409340 main.go:141] libmachine: (multinode-528433) Calling .DriverName
	I0819 18:37:37.209366  409340 main.go:141] libmachine: (multinode-528433) Calling .GetIP
	I0819 18:37:37.212281  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:37:37.212786  409340 main.go:141] libmachine: (multinode-528433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:95:69", ip: ""} in network mk-multinode-528433: {Iface:virbr1 ExpiryTime:2024-08-19 19:30:36 +0000 UTC Type:0 Mac:52:54:00:78:95:69 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-528433 Clientid:01:52:54:00:78:95:69}
	I0819 18:37:37.212818  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined IP address 192.168.39.168 and MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:37:37.212994  409340 main.go:141] libmachine: (multinode-528433) Calling .DriverName
	I0819 18:37:37.213610  409340 main.go:141] libmachine: (multinode-528433) Calling .DriverName
	I0819 18:37:37.213809  409340 main.go:141] libmachine: (multinode-528433) Calling .DriverName
	I0819 18:37:37.213930  409340 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 18:37:37.213990  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHHostname
	I0819 18:37:37.214031  409340 ssh_runner.go:195] Run: cat /version.json
	I0819 18:37:37.214053  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHHostname
	I0819 18:37:37.216850  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:37:37.217075  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:37:37.217272  409340 main.go:141] libmachine: (multinode-528433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:95:69", ip: ""} in network mk-multinode-528433: {Iface:virbr1 ExpiryTime:2024-08-19 19:30:36 +0000 UTC Type:0 Mac:52:54:00:78:95:69 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-528433 Clientid:01:52:54:00:78:95:69}
	I0819 18:37:37.217299  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined IP address 192.168.39.168 and MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:37:37.217510  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHPort
	I0819 18:37:37.217618  409340 main.go:141] libmachine: (multinode-528433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:95:69", ip: ""} in network mk-multinode-528433: {Iface:virbr1 ExpiryTime:2024-08-19 19:30:36 +0000 UTC Type:0 Mac:52:54:00:78:95:69 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-528433 Clientid:01:52:54:00:78:95:69}
	I0819 18:37:37.217654  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined IP address 192.168.39.168 and MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:37:37.217682  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHKeyPath
	I0819 18:37:37.217840  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHUsername
	I0819 18:37:37.217922  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHPort
	I0819 18:37:37.218007  409340 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/multinode-528433/id_rsa Username:docker}
	I0819 18:37:37.218106  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHKeyPath
	I0819 18:37:37.218256  409340 main.go:141] libmachine: (multinode-528433) Calling .GetSSHUsername
	I0819 18:37:37.218413  409340 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/multinode-528433/id_rsa Username:docker}
	I0819 18:37:37.293455  409340 command_runner.go:130] > {"iso_version": "v1.33.1-1723740674-19452", "kicbase_version": "v0.0.44-1723650208-19443", "minikube_version": "v1.33.1", "commit": "3bcdc720eef782394bf386d06fca73d1934e08fb"}
	I0819 18:37:37.293749  409340 ssh_runner.go:195] Run: systemctl --version
	I0819 18:37:37.319282  409340 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0819 18:37:37.320112  409340 command_runner.go:130] > systemd 252 (252)
	I0819 18:37:37.320151  409340 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0819 18:37:37.320215  409340 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 18:37:37.490885  409340 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 18:37:37.496979  409340 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0819 18:37:37.497040  409340 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 18:37:37.497109  409340 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 18:37:37.506544  409340 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0819 18:37:37.506570  409340 start.go:495] detecting cgroup driver to use...
	I0819 18:37:37.506648  409340 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 18:37:37.526272  409340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 18:37:37.541307  409340 docker.go:217] disabling cri-docker service (if available) ...
	I0819 18:37:37.541375  409340 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 18:37:37.556301  409340 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 18:37:37.571492  409340 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 18:37:37.720875  409340 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 18:37:37.856449  409340 docker.go:233] disabling docker service ...
	I0819 18:37:37.856528  409340 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 18:37:37.872304  409340 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 18:37:37.886136  409340 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 18:37:38.027196  409340 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 18:37:38.163145  409340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 18:37:38.177742  409340 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 18:37:38.197266  409340 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0819 18:37:38.197796  409340 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 18:37:38.197862  409340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:37:38.208611  409340 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 18:37:38.208692  409340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:37:38.219806  409340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:37:38.230464  409340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:37:38.241087  409340 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 18:37:38.251997  409340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:37:38.262590  409340 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:37:38.274329  409340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:37:38.284996  409340 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 18:37:38.294585  409340 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0819 18:37:38.294757  409340 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 18:37:38.304591  409340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:37:38.439597  409340 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 18:37:45.311807  409340 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.872162836s)
	I0819 18:37:45.311843  409340 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 18:37:45.311894  409340 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 18:37:45.316736  409340 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0819 18:37:45.316769  409340 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0819 18:37:45.316782  409340 command_runner.go:130] > Device: 0,22	Inode: 1323        Links: 1
	I0819 18:37:45.316792  409340 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0819 18:37:45.316808  409340 command_runner.go:130] > Access: 2024-08-19 18:37:45.165858708 +0000
	I0819 18:37:45.316816  409340 command_runner.go:130] > Modify: 2024-08-19 18:37:45.165858708 +0000
	I0819 18:37:45.316824  409340 command_runner.go:130] > Change: 2024-08-19 18:37:45.165858708 +0000
	I0819 18:37:45.316829  409340 command_runner.go:130] >  Birth: -
	I0819 18:37:45.316860  409340 start.go:563] Will wait 60s for crictl version
	I0819 18:37:45.316945  409340 ssh_runner.go:195] Run: which crictl
	I0819 18:37:45.321654  409340 command_runner.go:130] > /usr/bin/crictl
	I0819 18:37:45.321716  409340 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 18:37:45.358596  409340 command_runner.go:130] > Version:  0.1.0
	I0819 18:37:45.358623  409340 command_runner.go:130] > RuntimeName:  cri-o
	I0819 18:37:45.358629  409340 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0819 18:37:45.358635  409340 command_runner.go:130] > RuntimeApiVersion:  v1
	I0819 18:37:45.359707  409340 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 18:37:45.359776  409340 ssh_runner.go:195] Run: crio --version
	I0819 18:37:45.392589  409340 command_runner.go:130] > crio version 1.29.1
	I0819 18:37:45.392617  409340 command_runner.go:130] > Version:        1.29.1
	I0819 18:37:45.392633  409340 command_runner.go:130] > GitCommit:      unknown
	I0819 18:37:45.392637  409340 command_runner.go:130] > GitCommitDate:  unknown
	I0819 18:37:45.392641  409340 command_runner.go:130] > GitTreeState:   clean
	I0819 18:37:45.392647  409340 command_runner.go:130] > BuildDate:      2024-08-15T22:11:01Z
	I0819 18:37:45.392651  409340 command_runner.go:130] > GoVersion:      go1.21.6
	I0819 18:37:45.392655  409340 command_runner.go:130] > Compiler:       gc
	I0819 18:37:45.392660  409340 command_runner.go:130] > Platform:       linux/amd64
	I0819 18:37:45.392663  409340 command_runner.go:130] > Linkmode:       dynamic
	I0819 18:37:45.392668  409340 command_runner.go:130] > BuildTags:      
	I0819 18:37:45.392673  409340 command_runner.go:130] >   containers_image_ostree_stub
	I0819 18:37:45.392677  409340 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0819 18:37:45.392681  409340 command_runner.go:130] >   btrfs_noversion
	I0819 18:37:45.392689  409340 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0819 18:37:45.392693  409340 command_runner.go:130] >   libdm_no_deferred_remove
	I0819 18:37:45.392699  409340 command_runner.go:130] >   seccomp
	I0819 18:37:45.392710  409340 command_runner.go:130] > LDFlags:          unknown
	I0819 18:37:45.392717  409340 command_runner.go:130] > SeccompEnabled:   true
	I0819 18:37:45.392721  409340 command_runner.go:130] > AppArmorEnabled:  false
	I0819 18:37:45.392799  409340 ssh_runner.go:195] Run: crio --version
	I0819 18:37:45.423212  409340 command_runner.go:130] > crio version 1.29.1
	I0819 18:37:45.423236  409340 command_runner.go:130] > Version:        1.29.1
	I0819 18:37:45.423243  409340 command_runner.go:130] > GitCommit:      unknown
	I0819 18:37:45.423247  409340 command_runner.go:130] > GitCommitDate:  unknown
	I0819 18:37:45.423251  409340 command_runner.go:130] > GitTreeState:   clean
	I0819 18:37:45.423257  409340 command_runner.go:130] > BuildDate:      2024-08-15T22:11:01Z
	I0819 18:37:45.423263  409340 command_runner.go:130] > GoVersion:      go1.21.6
	I0819 18:37:45.423268  409340 command_runner.go:130] > Compiler:       gc
	I0819 18:37:45.423276  409340 command_runner.go:130] > Platform:       linux/amd64
	I0819 18:37:45.423282  409340 command_runner.go:130] > Linkmode:       dynamic
	I0819 18:37:45.423293  409340 command_runner.go:130] > BuildTags:      
	I0819 18:37:45.423303  409340 command_runner.go:130] >   containers_image_ostree_stub
	I0819 18:37:45.423310  409340 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0819 18:37:45.423319  409340 command_runner.go:130] >   btrfs_noversion
	I0819 18:37:45.423329  409340 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0819 18:37:45.423338  409340 command_runner.go:130] >   libdm_no_deferred_remove
	I0819 18:37:45.423342  409340 command_runner.go:130] >   seccomp
	I0819 18:37:45.423351  409340 command_runner.go:130] > LDFlags:          unknown
	I0819 18:37:45.423361  409340 command_runner.go:130] > SeccompEnabled:   true
	I0819 18:37:45.423372  409340 command_runner.go:130] > AppArmorEnabled:  false
	I0819 18:37:45.425600  409340 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 18:37:45.427114  409340 main.go:141] libmachine: (multinode-528433) Calling .GetIP
	I0819 18:37:45.429627  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:37:45.429961  409340 main.go:141] libmachine: (multinode-528433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:95:69", ip: ""} in network mk-multinode-528433: {Iface:virbr1 ExpiryTime:2024-08-19 19:30:36 +0000 UTC Type:0 Mac:52:54:00:78:95:69 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-528433 Clientid:01:52:54:00:78:95:69}
	I0819 18:37:45.429991  409340 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined IP address 192.168.39.168 and MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:37:45.430197  409340 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 18:37:45.434470  409340 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0819 18:37:45.434572  409340 kubeadm.go:883] updating cluster {Name:multinode-528433 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-528433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.107 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.113 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
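The long kubeadm.go:883 line above is the in-memory cluster config for the profile being updated. A small, illustrative Go sketch for inspecting the same settings from the persisted profile config on the host (assuming the conventional <MINIKUBE_HOME or ~/.minikube>/profiles/<name>/config.json layout; this run keeps its .minikube under /home/jenkins/minikube-integration/19468-372744):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	// Assumed layout: <MINIKUBE_HOME or ~/.minikube>/profiles/<profile>/config.json.
    	base := os.Getenv("MINIKUBE_HOME")
    	if base == "" {
    		home, err := os.UserHomeDir()
    		if err != nil {
    			panic(err)
    		}
    		base = filepath.Join(home, ".minikube")
    	}
    	data, err := os.ReadFile(filepath.Join(base, "profiles", "multinode-528433", "config.json"))
    	if err != nil {
    		panic(err)
    	}
    	var cfg map[string]any
    	if err := json.Unmarshal(data, &cfg); err != nil {
    		panic(err)
    	}
    	// Print a few of the fields that also appear in the kubeadm.go:883 line above.
    	fmt.Println("Driver:", cfg["Driver"])
    	if kc, ok := cfg["KubernetesConfig"].(map[string]any); ok {
    		fmt.Println("KubernetesVersion:", kc["KubernetesVersion"])
    		fmt.Println("ContainerRuntime:", kc["ContainerRuntime"])
    	}
    }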
	I0819 18:37:45.434732  409340 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 18:37:45.434785  409340 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 18:37:45.494067  409340 command_runner.go:130] > {
	I0819 18:37:45.494096  409340 command_runner.go:130] >   "images": [
	I0819 18:37:45.494100  409340 command_runner.go:130] >     {
	I0819 18:37:45.494109  409340 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0819 18:37:45.494116  409340 command_runner.go:130] >       "repoTags": [
	I0819 18:37:45.494126  409340 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0819 18:37:45.494132  409340 command_runner.go:130] >       ],
	I0819 18:37:45.494139  409340 command_runner.go:130] >       "repoDigests": [
	I0819 18:37:45.494154  409340 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0819 18:37:45.494166  409340 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0819 18:37:45.494173  409340 command_runner.go:130] >       ],
	I0819 18:37:45.494177  409340 command_runner.go:130] >       "size": "87165492",
	I0819 18:37:45.494181  409340 command_runner.go:130] >       "uid": null,
	I0819 18:37:45.494185  409340 command_runner.go:130] >       "username": "",
	I0819 18:37:45.494193  409340 command_runner.go:130] >       "spec": null,
	I0819 18:37:45.494198  409340 command_runner.go:130] >       "pinned": false
	I0819 18:37:45.494203  409340 command_runner.go:130] >     },
	I0819 18:37:45.494219  409340 command_runner.go:130] >     {
	I0819 18:37:45.494233  409340 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0819 18:37:45.494242  409340 command_runner.go:130] >       "repoTags": [
	I0819 18:37:45.494251  409340 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0819 18:37:45.494260  409340 command_runner.go:130] >       ],
	I0819 18:37:45.494266  409340 command_runner.go:130] >       "repoDigests": [
	I0819 18:37:45.494277  409340 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0819 18:37:45.494285  409340 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0819 18:37:45.494292  409340 command_runner.go:130] >       ],
	I0819 18:37:45.494299  409340 command_runner.go:130] >       "size": "87190579",
	I0819 18:37:45.494309  409340 command_runner.go:130] >       "uid": null,
	I0819 18:37:45.494326  409340 command_runner.go:130] >       "username": "",
	I0819 18:37:45.494335  409340 command_runner.go:130] >       "spec": null,
	I0819 18:37:45.494345  409340 command_runner.go:130] >       "pinned": false
	I0819 18:37:45.494351  409340 command_runner.go:130] >     },
	I0819 18:37:45.494359  409340 command_runner.go:130] >     {
	I0819 18:37:45.494367  409340 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0819 18:37:45.494387  409340 command_runner.go:130] >       "repoTags": [
	I0819 18:37:45.494400  409340 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0819 18:37:45.494409  409340 command_runner.go:130] >       ],
	I0819 18:37:45.494419  409340 command_runner.go:130] >       "repoDigests": [
	I0819 18:37:45.494434  409340 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0819 18:37:45.494448  409340 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0819 18:37:45.494455  409340 command_runner.go:130] >       ],
	I0819 18:37:45.494459  409340 command_runner.go:130] >       "size": "1363676",
	I0819 18:37:45.494465  409340 command_runner.go:130] >       "uid": null,
	I0819 18:37:45.494472  409340 command_runner.go:130] >       "username": "",
	I0819 18:37:45.494479  409340 command_runner.go:130] >       "spec": null,
	I0819 18:37:45.494489  409340 command_runner.go:130] >       "pinned": false
	I0819 18:37:45.494498  409340 command_runner.go:130] >     },
	I0819 18:37:45.494504  409340 command_runner.go:130] >     {
	I0819 18:37:45.494516  409340 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0819 18:37:45.494525  409340 command_runner.go:130] >       "repoTags": [
	I0819 18:37:45.494535  409340 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0819 18:37:45.494542  409340 command_runner.go:130] >       ],
	I0819 18:37:45.494546  409340 command_runner.go:130] >       "repoDigests": [
	I0819 18:37:45.494557  409340 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0819 18:37:45.494577  409340 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0819 18:37:45.494585  409340 command_runner.go:130] >       ],
	I0819 18:37:45.494593  409340 command_runner.go:130] >       "size": "31470524",
	I0819 18:37:45.494601  409340 command_runner.go:130] >       "uid": null,
	I0819 18:37:45.494608  409340 command_runner.go:130] >       "username": "",
	I0819 18:37:45.494617  409340 command_runner.go:130] >       "spec": null,
	I0819 18:37:45.494625  409340 command_runner.go:130] >       "pinned": false
	I0819 18:37:45.494629  409340 command_runner.go:130] >     },
	I0819 18:37:45.494636  409340 command_runner.go:130] >     {
	I0819 18:37:45.494646  409340 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0819 18:37:45.494656  409340 command_runner.go:130] >       "repoTags": [
	I0819 18:37:45.494664  409340 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0819 18:37:45.494673  409340 command_runner.go:130] >       ],
	I0819 18:37:45.494680  409340 command_runner.go:130] >       "repoDigests": [
	I0819 18:37:45.494696  409340 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0819 18:37:45.494710  409340 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0819 18:37:45.494716  409340 command_runner.go:130] >       ],
	I0819 18:37:45.494720  409340 command_runner.go:130] >       "size": "61245718",
	I0819 18:37:45.494729  409340 command_runner.go:130] >       "uid": null,
	I0819 18:37:45.494736  409340 command_runner.go:130] >       "username": "nonroot",
	I0819 18:37:45.494745  409340 command_runner.go:130] >       "spec": null,
	I0819 18:37:45.494752  409340 command_runner.go:130] >       "pinned": false
	I0819 18:37:45.494761  409340 command_runner.go:130] >     },
	I0819 18:37:45.494767  409340 command_runner.go:130] >     {
	I0819 18:37:45.494779  409340 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0819 18:37:45.494788  409340 command_runner.go:130] >       "repoTags": [
	I0819 18:37:45.494796  409340 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0819 18:37:45.494802  409340 command_runner.go:130] >       ],
	I0819 18:37:45.494807  409340 command_runner.go:130] >       "repoDigests": [
	I0819 18:37:45.494820  409340 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0819 18:37:45.494835  409340 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0819 18:37:45.494843  409340 command_runner.go:130] >       ],
	I0819 18:37:45.494850  409340 command_runner.go:130] >       "size": "149009664",
	I0819 18:37:45.494858  409340 command_runner.go:130] >       "uid": {
	I0819 18:37:45.494865  409340 command_runner.go:130] >         "value": "0"
	I0819 18:37:45.494873  409340 command_runner.go:130] >       },
	I0819 18:37:45.494879  409340 command_runner.go:130] >       "username": "",
	I0819 18:37:45.494886  409340 command_runner.go:130] >       "spec": null,
	I0819 18:37:45.494890  409340 command_runner.go:130] >       "pinned": false
	I0819 18:37:45.494911  409340 command_runner.go:130] >     },
	I0819 18:37:45.494916  409340 command_runner.go:130] >     {
	I0819 18:37:45.494929  409340 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0819 18:37:45.494938  409340 command_runner.go:130] >       "repoTags": [
	I0819 18:37:45.494946  409340 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0819 18:37:45.494954  409340 command_runner.go:130] >       ],
	I0819 18:37:45.494961  409340 command_runner.go:130] >       "repoDigests": [
	I0819 18:37:45.494974  409340 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0819 18:37:45.494981  409340 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0819 18:37:45.494990  409340 command_runner.go:130] >       ],
	I0819 18:37:45.494997  409340 command_runner.go:130] >       "size": "95233506",
	I0819 18:37:45.495006  409340 command_runner.go:130] >       "uid": {
	I0819 18:37:45.495014  409340 command_runner.go:130] >         "value": "0"
	I0819 18:37:45.495022  409340 command_runner.go:130] >       },
	I0819 18:37:45.495028  409340 command_runner.go:130] >       "username": "",
	I0819 18:37:45.495037  409340 command_runner.go:130] >       "spec": null,
	I0819 18:37:45.495043  409340 command_runner.go:130] >       "pinned": false
	I0819 18:37:45.495048  409340 command_runner.go:130] >     },
	I0819 18:37:45.495056  409340 command_runner.go:130] >     {
	I0819 18:37:45.495063  409340 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0819 18:37:45.495070  409340 command_runner.go:130] >       "repoTags": [
	I0819 18:37:45.495079  409340 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0819 18:37:45.495088  409340 command_runner.go:130] >       ],
	I0819 18:37:45.495095  409340 command_runner.go:130] >       "repoDigests": [
	I0819 18:37:45.495117  409340 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0819 18:37:45.495132  409340 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0819 18:37:45.495141  409340 command_runner.go:130] >       ],
	I0819 18:37:45.495147  409340 command_runner.go:130] >       "size": "89437512",
	I0819 18:37:45.495153  409340 command_runner.go:130] >       "uid": {
	I0819 18:37:45.495159  409340 command_runner.go:130] >         "value": "0"
	I0819 18:37:45.495165  409340 command_runner.go:130] >       },
	I0819 18:37:45.495172  409340 command_runner.go:130] >       "username": "",
	I0819 18:37:45.495178  409340 command_runner.go:130] >       "spec": null,
	I0819 18:37:45.495185  409340 command_runner.go:130] >       "pinned": false
	I0819 18:37:45.495190  409340 command_runner.go:130] >     },
	I0819 18:37:45.495195  409340 command_runner.go:130] >     {
	I0819 18:37:45.495204  409340 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0819 18:37:45.495218  409340 command_runner.go:130] >       "repoTags": [
	I0819 18:37:45.495226  409340 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0819 18:37:45.495233  409340 command_runner.go:130] >       ],
	I0819 18:37:45.495237  409340 command_runner.go:130] >       "repoDigests": [
	I0819 18:37:45.495251  409340 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0819 18:37:45.495265  409340 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0819 18:37:45.495274  409340 command_runner.go:130] >       ],
	I0819 18:37:45.495285  409340 command_runner.go:130] >       "size": "92728217",
	I0819 18:37:45.495294  409340 command_runner.go:130] >       "uid": null,
	I0819 18:37:45.495300  409340 command_runner.go:130] >       "username": "",
	I0819 18:37:45.495309  409340 command_runner.go:130] >       "spec": null,
	I0819 18:37:45.495315  409340 command_runner.go:130] >       "pinned": false
	I0819 18:37:45.495322  409340 command_runner.go:130] >     },
	I0819 18:37:45.495326  409340 command_runner.go:130] >     {
	I0819 18:37:45.495335  409340 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0819 18:37:45.495345  409340 command_runner.go:130] >       "repoTags": [
	I0819 18:37:45.495354  409340 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0819 18:37:45.495362  409340 command_runner.go:130] >       ],
	I0819 18:37:45.495369  409340 command_runner.go:130] >       "repoDigests": [
	I0819 18:37:45.495383  409340 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0819 18:37:45.495398  409340 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0819 18:37:45.495405  409340 command_runner.go:130] >       ],
	I0819 18:37:45.495410  409340 command_runner.go:130] >       "size": "68420936",
	I0819 18:37:45.495418  409340 command_runner.go:130] >       "uid": {
	I0819 18:37:45.495424  409340 command_runner.go:130] >         "value": "0"
	I0819 18:37:45.495430  409340 command_runner.go:130] >       },
	I0819 18:37:45.495441  409340 command_runner.go:130] >       "username": "",
	I0819 18:37:45.495447  409340 command_runner.go:130] >       "spec": null,
	I0819 18:37:45.495457  409340 command_runner.go:130] >       "pinned": false
	I0819 18:37:45.495463  409340 command_runner.go:130] >     },
	I0819 18:37:45.495470  409340 command_runner.go:130] >     {
	I0819 18:37:45.495480  409340 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0819 18:37:45.495489  409340 command_runner.go:130] >       "repoTags": [
	I0819 18:37:45.495495  409340 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0819 18:37:45.495500  409340 command_runner.go:130] >       ],
	I0819 18:37:45.495506  409340 command_runner.go:130] >       "repoDigests": [
	I0819 18:37:45.495520  409340 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0819 18:37:45.495534  409340 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0819 18:37:45.495543  409340 command_runner.go:130] >       ],
	I0819 18:37:45.495550  409340 command_runner.go:130] >       "size": "742080",
	I0819 18:37:45.495558  409340 command_runner.go:130] >       "uid": {
	I0819 18:37:45.495564  409340 command_runner.go:130] >         "value": "65535"
	I0819 18:37:45.495573  409340 command_runner.go:130] >       },
	I0819 18:37:45.495580  409340 command_runner.go:130] >       "username": "",
	I0819 18:37:45.495587  409340 command_runner.go:130] >       "spec": null,
	I0819 18:37:45.495594  409340 command_runner.go:130] >       "pinned": true
	I0819 18:37:45.495602  409340 command_runner.go:130] >     }
	I0819 18:37:45.495607  409340 command_runner.go:130] >   ]
	I0819 18:37:45.495615  409340 command_runner.go:130] > }
	I0819 18:37:45.495881  409340 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 18:37:45.495898  409340 crio.go:433] Images already preloaded, skipping extraction
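The "all images are preloaded" decision above is driven by the JSON that `sudo crictl images --output json` printed. A minimal sketch of that kind of check (illustrative only; the required-image tags below are taken from the listing above, not from minikube's own code):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // imageList matches the shape of `crictl images --output json` shown above.
    type imageList struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func main() {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		panic(err)
    	}
    	var list imageList
    	if err := json.Unmarshal(out, &list); err != nil {
    		panic(err)
    	}
    	have := map[string]bool{}
    	for _, img := range list.Images {
    		for _, tag := range img.RepoTags {
    			have[tag] = true
    		}
    	}
    	// Example tags taken from the listing above, not minikube's canonical set.
    	for _, want := range []string{
    		"registry.k8s.io/kube-apiserver:v1.31.0",
    		"registry.k8s.io/etcd:3.5.15-0",
    		"registry.k8s.io/pause:3.10",
    	} {
    		fmt.Printf("%s preloaded: %v\n", want, have[want])
    	}
    }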
	I0819 18:37:45.495978  409340 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 18:37:45.534149  409340 command_runner.go:130] > {
	I0819 18:37:45.534177  409340 command_runner.go:130] >   "images": [
	I0819 18:37:45.534181  409340 command_runner.go:130] >     {
	I0819 18:37:45.534192  409340 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0819 18:37:45.534198  409340 command_runner.go:130] >       "repoTags": [
	I0819 18:37:45.534205  409340 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0819 18:37:45.534210  409340 command_runner.go:130] >       ],
	I0819 18:37:45.534216  409340 command_runner.go:130] >       "repoDigests": [
	I0819 18:37:45.534229  409340 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0819 18:37:45.534240  409340 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0819 18:37:45.534248  409340 command_runner.go:130] >       ],
	I0819 18:37:45.534256  409340 command_runner.go:130] >       "size": "87165492",
	I0819 18:37:45.534264  409340 command_runner.go:130] >       "uid": null,
	I0819 18:37:45.534268  409340 command_runner.go:130] >       "username": "",
	I0819 18:37:45.534279  409340 command_runner.go:130] >       "spec": null,
	I0819 18:37:45.534285  409340 command_runner.go:130] >       "pinned": false
	I0819 18:37:45.534289  409340 command_runner.go:130] >     },
	I0819 18:37:45.534295  409340 command_runner.go:130] >     {
	I0819 18:37:45.534302  409340 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0819 18:37:45.534308  409340 command_runner.go:130] >       "repoTags": [
	I0819 18:37:45.534316  409340 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0819 18:37:45.534324  409340 command_runner.go:130] >       ],
	I0819 18:37:45.534332  409340 command_runner.go:130] >       "repoDigests": [
	I0819 18:37:45.534346  409340 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0819 18:37:45.534359  409340 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0819 18:37:45.534364  409340 command_runner.go:130] >       ],
	I0819 18:37:45.534368  409340 command_runner.go:130] >       "size": "87190579",
	I0819 18:37:45.534374  409340 command_runner.go:130] >       "uid": null,
	I0819 18:37:45.534382  409340 command_runner.go:130] >       "username": "",
	I0819 18:37:45.534388  409340 command_runner.go:130] >       "spec": null,
	I0819 18:37:45.534392  409340 command_runner.go:130] >       "pinned": false
	I0819 18:37:45.534398  409340 command_runner.go:130] >     },
	I0819 18:37:45.534404  409340 command_runner.go:130] >     {
	I0819 18:37:45.534417  409340 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0819 18:37:45.534427  409340 command_runner.go:130] >       "repoTags": [
	I0819 18:37:45.534438  409340 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0819 18:37:45.534453  409340 command_runner.go:130] >       ],
	I0819 18:37:45.534462  409340 command_runner.go:130] >       "repoDigests": [
	I0819 18:37:45.534471  409340 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0819 18:37:45.534481  409340 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0819 18:37:45.534484  409340 command_runner.go:130] >       ],
	I0819 18:37:45.534489  409340 command_runner.go:130] >       "size": "1363676",
	I0819 18:37:45.534496  409340 command_runner.go:130] >       "uid": null,
	I0819 18:37:45.534503  409340 command_runner.go:130] >       "username": "",
	I0819 18:37:45.534514  409340 command_runner.go:130] >       "spec": null,
	I0819 18:37:45.534523  409340 command_runner.go:130] >       "pinned": false
	I0819 18:37:45.534532  409340 command_runner.go:130] >     },
	I0819 18:37:45.534537  409340 command_runner.go:130] >     {
	I0819 18:37:45.534548  409340 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0819 18:37:45.534557  409340 command_runner.go:130] >       "repoTags": [
	I0819 18:37:45.534565  409340 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0819 18:37:45.534571  409340 command_runner.go:130] >       ],
	I0819 18:37:45.534575  409340 command_runner.go:130] >       "repoDigests": [
	I0819 18:37:45.534587  409340 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0819 18:37:45.534608  409340 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0819 18:37:45.534616  409340 command_runner.go:130] >       ],
	I0819 18:37:45.534623  409340 command_runner.go:130] >       "size": "31470524",
	I0819 18:37:45.534632  409340 command_runner.go:130] >       "uid": null,
	I0819 18:37:45.534642  409340 command_runner.go:130] >       "username": "",
	I0819 18:37:45.534648  409340 command_runner.go:130] >       "spec": null,
	I0819 18:37:45.534656  409340 command_runner.go:130] >       "pinned": false
	I0819 18:37:45.534659  409340 command_runner.go:130] >     },
	I0819 18:37:45.534663  409340 command_runner.go:130] >     {
	I0819 18:37:45.534673  409340 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0819 18:37:45.534683  409340 command_runner.go:130] >       "repoTags": [
	I0819 18:37:45.534692  409340 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0819 18:37:45.534701  409340 command_runner.go:130] >       ],
	I0819 18:37:45.534708  409340 command_runner.go:130] >       "repoDigests": [
	I0819 18:37:45.534720  409340 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0819 18:37:45.534734  409340 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0819 18:37:45.534741  409340 command_runner.go:130] >       ],
	I0819 18:37:45.534745  409340 command_runner.go:130] >       "size": "61245718",
	I0819 18:37:45.534761  409340 command_runner.go:130] >       "uid": null,
	I0819 18:37:45.534771  409340 command_runner.go:130] >       "username": "nonroot",
	I0819 18:37:45.534777  409340 command_runner.go:130] >       "spec": null,
	I0819 18:37:45.534787  409340 command_runner.go:130] >       "pinned": false
	I0819 18:37:45.534792  409340 command_runner.go:130] >     },
	I0819 18:37:45.534800  409340 command_runner.go:130] >     {
	I0819 18:37:45.534810  409340 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0819 18:37:45.534824  409340 command_runner.go:130] >       "repoTags": [
	I0819 18:37:45.534830  409340 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0819 18:37:45.534836  409340 command_runner.go:130] >       ],
	I0819 18:37:45.534843  409340 command_runner.go:130] >       "repoDigests": [
	I0819 18:37:45.534857  409340 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0819 18:37:45.534871  409340 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0819 18:37:45.534880  409340 command_runner.go:130] >       ],
	I0819 18:37:45.534888  409340 command_runner.go:130] >       "size": "149009664",
	I0819 18:37:45.534909  409340 command_runner.go:130] >       "uid": {
	I0819 18:37:45.534916  409340 command_runner.go:130] >         "value": "0"
	I0819 18:37:45.534920  409340 command_runner.go:130] >       },
	I0819 18:37:45.534927  409340 command_runner.go:130] >       "username": "",
	I0819 18:37:45.534936  409340 command_runner.go:130] >       "spec": null,
	I0819 18:37:45.534943  409340 command_runner.go:130] >       "pinned": false
	I0819 18:37:45.534951  409340 command_runner.go:130] >     },
	I0819 18:37:45.534962  409340 command_runner.go:130] >     {
	I0819 18:37:45.534975  409340 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0819 18:37:45.534981  409340 command_runner.go:130] >       "repoTags": [
	I0819 18:37:45.534991  409340 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0819 18:37:45.534997  409340 command_runner.go:130] >       ],
	I0819 18:37:45.535005  409340 command_runner.go:130] >       "repoDigests": [
	I0819 18:37:45.535014  409340 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0819 18:37:45.535028  409340 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0819 18:37:45.535036  409340 command_runner.go:130] >       ],
	I0819 18:37:45.535043  409340 command_runner.go:130] >       "size": "95233506",
	I0819 18:37:45.535051  409340 command_runner.go:130] >       "uid": {
	I0819 18:37:45.535057  409340 command_runner.go:130] >         "value": "0"
	I0819 18:37:45.535062  409340 command_runner.go:130] >       },
	I0819 18:37:45.535072  409340 command_runner.go:130] >       "username": "",
	I0819 18:37:45.535087  409340 command_runner.go:130] >       "spec": null,
	I0819 18:37:45.535096  409340 command_runner.go:130] >       "pinned": false
	I0819 18:37:45.535103  409340 command_runner.go:130] >     },
	I0819 18:37:45.535111  409340 command_runner.go:130] >     {
	I0819 18:37:45.535120  409340 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0819 18:37:45.535129  409340 command_runner.go:130] >       "repoTags": [
	I0819 18:37:45.535135  409340 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0819 18:37:45.535139  409340 command_runner.go:130] >       ],
	I0819 18:37:45.535144  409340 command_runner.go:130] >       "repoDigests": [
	I0819 18:37:45.535166  409340 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0819 18:37:45.535176  409340 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0819 18:37:45.535180  409340 command_runner.go:130] >       ],
	I0819 18:37:45.535187  409340 command_runner.go:130] >       "size": "89437512",
	I0819 18:37:45.535191  409340 command_runner.go:130] >       "uid": {
	I0819 18:37:45.535197  409340 command_runner.go:130] >         "value": "0"
	I0819 18:37:45.535201  409340 command_runner.go:130] >       },
	I0819 18:37:45.535205  409340 command_runner.go:130] >       "username": "",
	I0819 18:37:45.535210  409340 command_runner.go:130] >       "spec": null,
	I0819 18:37:45.535217  409340 command_runner.go:130] >       "pinned": false
	I0819 18:37:45.535220  409340 command_runner.go:130] >     },
	I0819 18:37:45.535224  409340 command_runner.go:130] >     {
	I0819 18:37:45.535229  409340 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0819 18:37:45.535236  409340 command_runner.go:130] >       "repoTags": [
	I0819 18:37:45.535241  409340 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0819 18:37:45.535246  409340 command_runner.go:130] >       ],
	I0819 18:37:45.535251  409340 command_runner.go:130] >       "repoDigests": [
	I0819 18:37:45.535263  409340 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0819 18:37:45.535276  409340 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0819 18:37:45.535284  409340 command_runner.go:130] >       ],
	I0819 18:37:45.535290  409340 command_runner.go:130] >       "size": "92728217",
	I0819 18:37:45.535299  409340 command_runner.go:130] >       "uid": null,
	I0819 18:37:45.535305  409340 command_runner.go:130] >       "username": "",
	I0819 18:37:45.535311  409340 command_runner.go:130] >       "spec": null,
	I0819 18:37:45.535316  409340 command_runner.go:130] >       "pinned": false
	I0819 18:37:45.535324  409340 command_runner.go:130] >     },
	I0819 18:37:45.535329  409340 command_runner.go:130] >     {
	I0819 18:37:45.535354  409340 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0819 18:37:45.535360  409340 command_runner.go:130] >       "repoTags": [
	I0819 18:37:45.535368  409340 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0819 18:37:45.535372  409340 command_runner.go:130] >       ],
	I0819 18:37:45.535378  409340 command_runner.go:130] >       "repoDigests": [
	I0819 18:37:45.535392  409340 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0819 18:37:45.535407  409340 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0819 18:37:45.535416  409340 command_runner.go:130] >       ],
	I0819 18:37:45.535423  409340 command_runner.go:130] >       "size": "68420936",
	I0819 18:37:45.535435  409340 command_runner.go:130] >       "uid": {
	I0819 18:37:45.535441  409340 command_runner.go:130] >         "value": "0"
	I0819 18:37:45.535448  409340 command_runner.go:130] >       },
	I0819 18:37:45.535452  409340 command_runner.go:130] >       "username": "",
	I0819 18:37:45.535455  409340 command_runner.go:130] >       "spec": null,
	I0819 18:37:45.535460  409340 command_runner.go:130] >       "pinned": false
	I0819 18:37:45.535465  409340 command_runner.go:130] >     },
	I0819 18:37:45.535469  409340 command_runner.go:130] >     {
	I0819 18:37:45.535475  409340 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0819 18:37:45.535480  409340 command_runner.go:130] >       "repoTags": [
	I0819 18:37:45.535485  409340 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0819 18:37:45.535488  409340 command_runner.go:130] >       ],
	I0819 18:37:45.535493  409340 command_runner.go:130] >       "repoDigests": [
	I0819 18:37:45.535499  409340 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0819 18:37:45.535506  409340 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0819 18:37:45.535512  409340 command_runner.go:130] >       ],
	I0819 18:37:45.535516  409340 command_runner.go:130] >       "size": "742080",
	I0819 18:37:45.535520  409340 command_runner.go:130] >       "uid": {
	I0819 18:37:45.535524  409340 command_runner.go:130] >         "value": "65535"
	I0819 18:37:45.535531  409340 command_runner.go:130] >       },
	I0819 18:37:45.535534  409340 command_runner.go:130] >       "username": "",
	I0819 18:37:45.535539  409340 command_runner.go:130] >       "spec": null,
	I0819 18:37:45.535545  409340 command_runner.go:130] >       "pinned": true
	I0819 18:37:45.535548  409340 command_runner.go:130] >     }
	I0819 18:37:45.535553  409340 command_runner.go:130] >   ]
	I0819 18:37:45.535556  409340 command_runner.go:130] > }
	I0819 18:37:45.535743  409340 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 18:37:45.535762  409340 cache_images.go:84] Images are preloaded, skipping loading
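	(Aside: the two `sudo crictl images --output json` calls above are how the preload check is made: the JSON is decoded and the tags present on the node are compared against the image list expected for the Kubernetes version in use, v1.31.0 in this run. Below is a minimal, self-contained sketch of that kind of check; the struct and field names mirror the JSON shape shown in the log, not minikube's actual types, and the expected-tag list is an assumption for illustration.)

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Fields mirror the crictl JSON above; only the fields used here are declared.
	type crictlImage struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
		Pinned   bool     `json:"pinned"`
	}

	type crictlImageList struct {
		Images []crictlImage `json:"images"`
	}

	func main() {
		// Same command the log records via ssh_runner.
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}

		var list crictlImageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}

		present := make(map[string]bool)
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				present[tag] = true
			}
		}

		// Illustrative subset; minikube derives the real list from the Kubernetes version.
		expected := []string{
			"registry.k8s.io/kube-apiserver:v1.31.0",
			"registry.k8s.io/etcd:3.5.15-0",
			"registry.k8s.io/coredns/coredns:v1.11.1",
		}
		for _, tag := range expected {
			fmt.Printf("%-45s preloaded=%v\n", tag, present[tag])
		}
	}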
	I0819 18:37:45.535771  409340 kubeadm.go:934] updating node { 192.168.39.168 8443 v1.31.0 crio true true} ...
	I0819 18:37:45.535891  409340 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-528433 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.168
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-528433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
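	(Aside: the kubeadm.go:946 entry above is the kubelet systemd drop-in minikube writes for this node, with ExecStart overridden to carry the node-specific flags: --hostname-override, --node-ip and the kubeconfig paths. A rough sketch of rendering such a drop-in with Go's text/template follows; the template text is reconstructed from the log, and the struct and field names are illustrative rather than minikube's own.)

	package main

	import (
		"os"
		"text/template"
	)

	// Illustrative data shape; minikube's real template and fields differ in detail.
	type kubeletUnitData struct {
		KubernetesVersion string
		NodeName          string
		NodeIP            string
	}

	const unitTemplate = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		tmpl := template.Must(template.New("kubelet").Parse(unitTemplate))
		// Values taken from the log entry above.
		_ = tmpl.Execute(os.Stdout, kubeletUnitData{
			KubernetesVersion: "v1.31.0",
			NodeName:          "multinode-528433",
			NodeIP:            "192.168.39.168",
		})
	}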
	I0819 18:37:45.535961  409340 ssh_runner.go:195] Run: crio config
	I0819 18:37:45.585162  409340 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0819 18:37:45.585196  409340 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0819 18:37:45.585207  409340 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0819 18:37:45.585212  409340 command_runner.go:130] > #
	I0819 18:37:45.585223  409340 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0819 18:37:45.585232  409340 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0819 18:37:45.585239  409340 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0819 18:37:45.585247  409340 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0819 18:37:45.585251  409340 command_runner.go:130] > # reload'.
	I0819 18:37:45.585257  409340 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0819 18:37:45.585263  409340 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0819 18:37:45.585273  409340 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0819 18:37:45.585283  409340 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0819 18:37:45.585291  409340 command_runner.go:130] > [crio]
	I0819 18:37:45.585300  409340 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0819 18:37:45.585308  409340 command_runner.go:130] > # containers images, in this directory.
	I0819 18:37:45.585446  409340 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0819 18:37:45.585469  409340 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0819 18:37:45.585477  409340 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0819 18:37:45.585487  409340 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0819 18:37:45.585494  409340 command_runner.go:130] > # imagestore = ""
	I0819 18:37:45.585508  409340 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0819 18:37:45.585518  409340 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0819 18:37:45.585529  409340 command_runner.go:130] > storage_driver = "overlay"
	I0819 18:37:45.585540  409340 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0819 18:37:45.585552  409340 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0819 18:37:45.585565  409340 command_runner.go:130] > storage_option = [
	I0819 18:37:45.585622  409340 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0819 18:37:45.585638  409340 command_runner.go:130] > ]
	I0819 18:37:45.585645  409340 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0819 18:37:45.585652  409340 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0819 18:37:45.585658  409340 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0819 18:37:45.585664  409340 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0819 18:37:45.585672  409340 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0819 18:37:45.585677  409340 command_runner.go:130] > # always happen on a node reboot
	I0819 18:37:45.585684  409340 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0819 18:37:45.585696  409340 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0819 18:37:45.585708  409340 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0819 18:37:45.585718  409340 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0819 18:37:45.585727  409340 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0819 18:37:45.585742  409340 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0819 18:37:45.585758  409340 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0819 18:37:45.585769  409340 command_runner.go:130] > # internal_wipe = true
	I0819 18:37:45.585783  409340 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0819 18:37:45.585791  409340 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0819 18:37:45.585863  409340 command_runner.go:130] > # internal_repair = false
	I0819 18:37:45.585879  409340 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0819 18:37:45.585888  409340 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0819 18:37:45.585898  409340 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0819 18:37:45.585910  409340 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0819 18:37:45.585922  409340 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0819 18:37:45.585931  409340 command_runner.go:130] > [crio.api]
	I0819 18:37:45.585942  409340 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0819 18:37:45.585953  409340 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0819 18:37:45.585965  409340 command_runner.go:130] > # IP address on which the stream server will listen.
	I0819 18:37:45.585988  409340 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0819 18:37:45.586005  409340 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0819 18:37:45.586018  409340 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0819 18:37:45.586037  409340 command_runner.go:130] > # stream_port = "0"
	I0819 18:37:45.586050  409340 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0819 18:37:45.586060  409340 command_runner.go:130] > # stream_enable_tls = false
	I0819 18:37:45.586072  409340 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0819 18:37:45.586083  409340 command_runner.go:130] > # stream_idle_timeout = ""
	I0819 18:37:45.586093  409340 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0819 18:37:45.586106  409340 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0819 18:37:45.586115  409340 command_runner.go:130] > # minutes.
	I0819 18:37:45.586124  409340 command_runner.go:130] > # stream_tls_cert = ""
	I0819 18:37:45.586138  409340 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0819 18:37:45.586152  409340 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0819 18:37:45.586162  409340 command_runner.go:130] > # stream_tls_key = ""
	I0819 18:37:45.586173  409340 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0819 18:37:45.586185  409340 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0819 18:37:45.586212  409340 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0819 18:37:45.586235  409340 command_runner.go:130] > # stream_tls_ca = ""
	I0819 18:37:45.586250  409340 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0819 18:37:45.586261  409340 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0819 18:37:45.586273  409340 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0819 18:37:45.586284  409340 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0819 18:37:45.586297  409340 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0819 18:37:45.586310  409340 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0819 18:37:45.586320  409340 command_runner.go:130] > [crio.runtime]
	I0819 18:37:45.586331  409340 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0819 18:37:45.586343  409340 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0819 18:37:45.586352  409340 command_runner.go:130] > # "nofile=1024:2048"
	I0819 18:37:45.586362  409340 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0819 18:37:45.586371  409340 command_runner.go:130] > # default_ulimits = [
	I0819 18:37:45.586378  409340 command_runner.go:130] > # ]
	I0819 18:37:45.586390  409340 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0819 18:37:45.586400  409340 command_runner.go:130] > # no_pivot = false
	I0819 18:37:45.586412  409340 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0819 18:37:45.586424  409340 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0819 18:37:45.586435  409340 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0819 18:37:45.586446  409340 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0819 18:37:45.586457  409340 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0819 18:37:45.586478  409340 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0819 18:37:45.586489  409340 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0819 18:37:45.586498  409340 command_runner.go:130] > # Cgroup setting for conmon
	I0819 18:37:45.586511  409340 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0819 18:37:45.586520  409340 command_runner.go:130] > conmon_cgroup = "pod"
	I0819 18:37:45.586533  409340 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0819 18:37:45.586544  409340 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0819 18:37:45.586555  409340 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0819 18:37:45.586567  409340 command_runner.go:130] > conmon_env = [
	I0819 18:37:45.586576  409340 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0819 18:37:45.586581  409340 command_runner.go:130] > ]
	I0819 18:37:45.586589  409340 command_runner.go:130] > # Additional environment variables to set for all the
	I0819 18:37:45.586597  409340 command_runner.go:130] > # containers. These are overridden if set in the
	I0819 18:37:45.586605  409340 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0819 18:37:45.586612  409340 command_runner.go:130] > # default_env = [
	I0819 18:37:45.586617  409340 command_runner.go:130] > # ]
	I0819 18:37:45.586626  409340 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0819 18:37:45.586637  409340 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0819 18:37:45.586642  409340 command_runner.go:130] > # selinux = false
	I0819 18:37:45.586649  409340 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0819 18:37:45.586654  409340 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0819 18:37:45.586660  409340 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0819 18:37:45.586664  409340 command_runner.go:130] > # seccomp_profile = ""
	I0819 18:37:45.586669  409340 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0819 18:37:45.586674  409340 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0819 18:37:45.586683  409340 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0819 18:37:45.586687  409340 command_runner.go:130] > # which might increase security.
	I0819 18:37:45.586695  409340 command_runner.go:130] > # This option is currently deprecated,
	I0819 18:37:45.586700  409340 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0819 18:37:45.586704  409340 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0819 18:37:45.586715  409340 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0819 18:37:45.586725  409340 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0819 18:37:45.586736  409340 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0819 18:37:45.586746  409340 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0819 18:37:45.586753  409340 command_runner.go:130] > # This option supports live configuration reload.
	I0819 18:37:45.586764  409340 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0819 18:37:45.586783  409340 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0819 18:37:45.586793  409340 command_runner.go:130] > # the cgroup blockio controller.
	I0819 18:37:45.586800  409340 command_runner.go:130] > # blockio_config_file = ""
	I0819 18:37:45.586813  409340 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0819 18:37:45.586821  409340 command_runner.go:130] > # blockio parameters.
	I0819 18:37:45.586828  409340 command_runner.go:130] > # blockio_reload = false
	I0819 18:37:45.586841  409340 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0819 18:37:45.586847  409340 command_runner.go:130] > # irqbalance daemon.
	I0819 18:37:45.586856  409340 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0819 18:37:45.586868  409340 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0819 18:37:45.586880  409340 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0819 18:37:45.586894  409340 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0819 18:37:45.586908  409340 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0819 18:37:45.586920  409340 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0819 18:37:45.586930  409340 command_runner.go:130] > # This option supports live configuration reload.
	I0819 18:37:45.586937  409340 command_runner.go:130] > # rdt_config_file = ""
	I0819 18:37:45.586947  409340 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0819 18:37:45.586960  409340 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0819 18:37:45.586999  409340 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0819 18:37:45.587009  409340 command_runner.go:130] > # separate_pull_cgroup = ""
	I0819 18:37:45.587021  409340 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0819 18:37:45.587034  409340 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0819 18:37:45.587043  409340 command_runner.go:130] > # will be added.
	I0819 18:37:45.587049  409340 command_runner.go:130] > # default_capabilities = [
	I0819 18:37:45.587058  409340 command_runner.go:130] > # 	"CHOWN",
	I0819 18:37:45.587065  409340 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0819 18:37:45.587074  409340 command_runner.go:130] > # 	"FSETID",
	I0819 18:37:45.587082  409340 command_runner.go:130] > # 	"FOWNER",
	I0819 18:37:45.587090  409340 command_runner.go:130] > # 	"SETGID",
	I0819 18:37:45.587097  409340 command_runner.go:130] > # 	"SETUID",
	I0819 18:37:45.587105  409340 command_runner.go:130] > # 	"SETPCAP",
	I0819 18:37:45.587113  409340 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0819 18:37:45.587125  409340 command_runner.go:130] > # 	"KILL",
	I0819 18:37:45.587134  409340 command_runner.go:130] > # ]
	I0819 18:37:45.587148  409340 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0819 18:37:45.587162  409340 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0819 18:37:45.587178  409340 command_runner.go:130] > # add_inheritable_capabilities = false
	I0819 18:37:45.587193  409340 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0819 18:37:45.587207  409340 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0819 18:37:45.587222  409340 command_runner.go:130] > default_sysctls = [
	I0819 18:37:45.587233  409340 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0819 18:37:45.587239  409340 command_runner.go:130] > ]
	I0819 18:37:45.587248  409340 command_runner.go:130] > # List of devices on the host that a
	I0819 18:37:45.587262  409340 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0819 18:37:45.587272  409340 command_runner.go:130] > # allowed_devices = [
	I0819 18:37:45.587281  409340 command_runner.go:130] > # 	"/dev/fuse",
	I0819 18:37:45.587286  409340 command_runner.go:130] > # ]
	I0819 18:37:45.587295  409340 command_runner.go:130] > # List of additional devices. specified as
	I0819 18:37:45.587310  409340 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0819 18:37:45.587322  409340 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0819 18:37:45.587335  409340 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0819 18:37:45.587344  409340 command_runner.go:130] > # additional_devices = [
	I0819 18:37:45.587351  409340 command_runner.go:130] > # ]
	I0819 18:37:45.587362  409340 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0819 18:37:45.587369  409340 command_runner.go:130] > # cdi_spec_dirs = [
	I0819 18:37:45.587377  409340 command_runner.go:130] > # 	"/etc/cdi",
	I0819 18:37:45.587386  409340 command_runner.go:130] > # 	"/var/run/cdi",
	I0819 18:37:45.587392  409340 command_runner.go:130] > # ]
	I0819 18:37:45.587406  409340 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0819 18:37:45.587420  409340 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0819 18:37:45.587429  409340 command_runner.go:130] > # Defaults to false.
	I0819 18:37:45.587441  409340 command_runner.go:130] > # device_ownership_from_security_context = false
	I0819 18:37:45.587456  409340 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0819 18:37:45.587467  409340 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0819 18:37:45.587474  409340 command_runner.go:130] > # hooks_dir = [
	I0819 18:37:45.587484  409340 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0819 18:37:45.587490  409340 command_runner.go:130] > # ]
	I0819 18:37:45.587505  409340 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0819 18:37:45.587515  409340 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0819 18:37:45.587523  409340 command_runner.go:130] > # its default mounts from the following two files:
	I0819 18:37:45.587530  409340 command_runner.go:130] > #
	I0819 18:37:45.587540  409340 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0819 18:37:45.587559  409340 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0819 18:37:45.587570  409340 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0819 18:37:45.587577  409340 command_runner.go:130] > #
	I0819 18:37:45.587586  409340 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0819 18:37:45.587600  409340 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0819 18:37:45.587613  409340 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0819 18:37:45.587624  409340 command_runner.go:130] > #      only add mounts it finds in this file.
	I0819 18:37:45.587631  409340 command_runner.go:130] > #
	I0819 18:37:45.587638  409340 command_runner.go:130] > # default_mounts_file = ""
	I0819 18:37:45.587649  409340 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0819 18:37:45.587659  409340 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0819 18:37:45.587666  409340 command_runner.go:130] > pids_limit = 1024
	I0819 18:37:45.587689  409340 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0819 18:37:45.587702  409340 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0819 18:37:45.587711  409340 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0819 18:37:45.587725  409340 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0819 18:37:45.587735  409340 command_runner.go:130] > # log_size_max = -1
	I0819 18:37:45.587746  409340 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0819 18:37:45.587756  409340 command_runner.go:130] > # log_to_journald = false
	I0819 18:37:45.587767  409340 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0819 18:37:45.587778  409340 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0819 18:37:45.587789  409340 command_runner.go:130] > # Path to directory for container attach sockets.
	I0819 18:37:45.587800  409340 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0819 18:37:45.587808  409340 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0819 18:37:45.587818  409340 command_runner.go:130] > # bind_mount_prefix = ""
	I0819 18:37:45.587826  409340 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0819 18:37:45.587836  409340 command_runner.go:130] > # read_only = false
	I0819 18:37:45.587847  409340 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0819 18:37:45.587860  409340 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0819 18:37:45.587870  409340 command_runner.go:130] > # live configuration reload.
	I0819 18:37:45.587877  409340 command_runner.go:130] > # log_level = "info"
	I0819 18:37:45.587888  409340 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0819 18:37:45.587898  409340 command_runner.go:130] > # This option supports live configuration reload.
	I0819 18:37:45.587908  409340 command_runner.go:130] > # log_filter = ""
	I0819 18:37:45.587917  409340 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0819 18:37:45.587923  409340 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0819 18:37:45.587939  409340 command_runner.go:130] > # separated by comma.
	I0819 18:37:45.587948  409340 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 18:37:45.587953  409340 command_runner.go:130] > # uid_mappings = ""
	I0819 18:37:45.587959  409340 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0819 18:37:45.587965  409340 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0819 18:37:45.587972  409340 command_runner.go:130] > # separated by comma.
	I0819 18:37:45.587985  409340 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 18:37:45.587993  409340 command_runner.go:130] > # gid_mappings = ""
	I0819 18:37:45.588002  409340 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0819 18:37:45.588013  409340 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0819 18:37:45.588025  409340 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0819 18:37:45.588040  409340 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 18:37:45.588051  409340 command_runner.go:130] > # minimum_mappable_uid = -1
	I0819 18:37:45.588060  409340 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0819 18:37:45.588072  409340 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0819 18:37:45.588081  409340 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0819 18:37:45.588097  409340 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 18:37:45.588107  409340 command_runner.go:130] > # minimum_mappable_gid = -1
	I0819 18:37:45.588118  409340 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0819 18:37:45.588132  409340 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0819 18:37:45.588148  409340 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0819 18:37:45.588157  409340 command_runner.go:130] > # ctr_stop_timeout = 30
	I0819 18:37:45.588166  409340 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0819 18:37:45.588178  409340 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0819 18:37:45.588188  409340 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0819 18:37:45.588199  409340 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0819 18:37:45.588209  409340 command_runner.go:130] > drop_infra_ctr = false
	I0819 18:37:45.588224  409340 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0819 18:37:45.588237  409340 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0819 18:37:45.588249  409340 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0819 18:37:45.588259  409340 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0819 18:37:45.588270  409340 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0819 18:37:45.588284  409340 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0819 18:37:45.588296  409340 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0819 18:37:45.588306  409340 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0819 18:37:45.588312  409340 command_runner.go:130] > # shared_cpuset = ""
	I0819 18:37:45.588334  409340 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0819 18:37:45.588345  409340 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0819 18:37:45.588352  409340 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0819 18:37:45.588367  409340 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0819 18:37:45.588380  409340 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0819 18:37:45.588392  409340 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0819 18:37:45.588405  409340 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0819 18:37:45.588417  409340 command_runner.go:130] > # enable_criu_support = false
	I0819 18:37:45.588426  409340 command_runner.go:130] > # Enable/disable the generation of the container,
	I0819 18:37:45.588438  409340 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0819 18:37:45.588446  409340 command_runner.go:130] > # enable_pod_events = false
	I0819 18:37:45.588459  409340 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0819 18:37:45.588469  409340 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0819 18:37:45.588478  409340 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0819 18:37:45.588488  409340 command_runner.go:130] > # default_runtime = "runc"
	I0819 18:37:45.588497  409340 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0819 18:37:45.588511  409340 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0819 18:37:45.588527  409340 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0819 18:37:45.588538  409340 command_runner.go:130] > # creation as a file is not desired either.
	I0819 18:37:45.588550  409340 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0819 18:37:45.588562  409340 command_runner.go:130] > # the hostname is being managed dynamically.
	I0819 18:37:45.588572  409340 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0819 18:37:45.588577  409340 command_runner.go:130] > # ]
	I0819 18:37:45.588590  409340 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0819 18:37:45.588604  409340 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0819 18:37:45.588617  409340 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0819 18:37:45.588628  409340 command_runner.go:130] > # Each entry in the table should follow the format:
	I0819 18:37:45.588633  409340 command_runner.go:130] > #
	I0819 18:37:45.588643  409340 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0819 18:37:45.588651  409340 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0819 18:37:45.588750  409340 command_runner.go:130] > # runtime_type = "oci"
	I0819 18:37:45.588765  409340 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0819 18:37:45.588772  409340 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0819 18:37:45.588782  409340 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0819 18:37:45.588790  409340 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0819 18:37:45.588800  409340 command_runner.go:130] > # monitor_env = []
	I0819 18:37:45.588814  409340 command_runner.go:130] > # privileged_without_host_devices = false
	I0819 18:37:45.588825  409340 command_runner.go:130] > # allowed_annotations = []
	I0819 18:37:45.588835  409340 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0819 18:37:45.588844  409340 command_runner.go:130] > # Where:
	I0819 18:37:45.588853  409340 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0819 18:37:45.588865  409340 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0819 18:37:45.588875  409340 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0819 18:37:45.588889  409340 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0819 18:37:45.588898  409340 command_runner.go:130] > #   in $PATH.
	I0819 18:37:45.588907  409340 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0819 18:37:45.588918  409340 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0819 18:37:45.588929  409340 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0819 18:37:45.588939  409340 command_runner.go:130] > #   state.
	I0819 18:37:45.588949  409340 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0819 18:37:45.588961  409340 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0819 18:37:45.588971  409340 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0819 18:37:45.588983  409340 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0819 18:37:45.588995  409340 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0819 18:37:45.589009  409340 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0819 18:37:45.589020  409340 command_runner.go:130] > #   The currently recognized values are:
	I0819 18:37:45.589030  409340 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0819 18:37:45.589045  409340 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0819 18:37:45.589056  409340 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0819 18:37:45.589063  409340 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0819 18:37:45.589072  409340 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0819 18:37:45.589078  409340 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0819 18:37:45.589086  409340 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0819 18:37:45.589093  409340 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0819 18:37:45.589100  409340 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0819 18:37:45.589109  409340 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0819 18:37:45.589119  409340 command_runner.go:130] > #   deprecated option "conmon".
	I0819 18:37:45.589132  409340 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0819 18:37:45.589143  409340 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0819 18:37:45.589156  409340 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0819 18:37:45.589166  409340 command_runner.go:130] > #   should be moved to the container's cgroup
	I0819 18:37:45.589178  409340 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I0819 18:37:45.589197  409340 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0819 18:37:45.589211  409340 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0819 18:37:45.589225  409340 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0819 18:37:45.589233  409340 command_runner.go:130] > #
	I0819 18:37:45.589241  409340 command_runner.go:130] > # Using the seccomp notifier feature:
	I0819 18:37:45.589248  409340 command_runner.go:130] > #
	I0819 18:37:45.589258  409340 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0819 18:37:45.589272  409340 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0819 18:37:45.589280  409340 command_runner.go:130] > #
	I0819 18:37:45.589290  409340 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0819 18:37:45.589303  409340 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0819 18:37:45.589309  409340 command_runner.go:130] > #
	I0819 18:37:45.589315  409340 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0819 18:37:45.589321  409340 command_runner.go:130] > # feature.
	I0819 18:37:45.589325  409340 command_runner.go:130] > #
	I0819 18:37:45.589331  409340 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0819 18:37:45.589339  409340 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0819 18:37:45.589347  409340 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0819 18:37:45.589358  409340 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0819 18:37:45.589370  409340 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0819 18:37:45.589379  409340 command_runner.go:130] > #
	I0819 18:37:45.589389  409340 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0819 18:37:45.589402  409340 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0819 18:37:45.589411  409340 command_runner.go:130] > #
	I0819 18:37:45.589420  409340 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0819 18:37:45.589431  409340 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0819 18:37:45.589436  409340 command_runner.go:130] > #
	I0819 18:37:45.589447  409340 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0819 18:37:45.589458  409340 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0819 18:37:45.589467  409340 command_runner.go:130] > # limitation.
	I0819 18:37:45.589475  409340 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0819 18:37:45.589485  409340 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0819 18:37:45.589494  409340 command_runner.go:130] > runtime_type = "oci"
	I0819 18:37:45.589503  409340 command_runner.go:130] > runtime_root = "/run/runc"
	I0819 18:37:45.589525  409340 command_runner.go:130] > runtime_config_path = ""
	I0819 18:37:45.589536  409340 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0819 18:37:45.589552  409340 command_runner.go:130] > monitor_cgroup = "pod"
	I0819 18:37:45.589564  409340 command_runner.go:130] > monitor_exec_cgroup = ""
	I0819 18:37:45.589570  409340 command_runner.go:130] > monitor_env = [
	I0819 18:37:45.589579  409340 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0819 18:37:45.589587  409340 command_runner.go:130] > ]
	I0819 18:37:45.589595  409340 command_runner.go:130] > privileged_without_host_devices = false
	I0819 18:37:45.589609  409340 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0819 18:37:45.589620  409340 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0819 18:37:45.589633  409340 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0819 18:37:45.589647  409340 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix, and a set of resources it supports mutating.
	I0819 18:37:45.589656  409340 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0819 18:37:45.589668  409340 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0819 18:37:45.589683  409340 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0819 18:37:45.589699  409340 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0819 18:37:45.589711  409340 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0819 18:37:45.589725  409340 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0819 18:37:45.589731  409340 command_runner.go:130] > # Example:
	I0819 18:37:45.589738  409340 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0819 18:37:45.589745  409340 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0819 18:37:45.589752  409340 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0819 18:37:45.589761  409340 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0819 18:37:45.589767  409340 command_runner.go:130] > # cpuset = 0
	I0819 18:37:45.589774  409340 command_runner.go:130] > # cpushares = "0-1"
	I0819 18:37:45.589779  409340 command_runner.go:130] > # Where:
	I0819 18:37:45.589787  409340 command_runner.go:130] > # The workload name is workload-type.
	I0819 18:37:45.589798  409340 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0819 18:37:45.589807  409340 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0819 18:37:45.589816  409340 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0819 18:37:45.589828  409340 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0819 18:37:45.589838  409340 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
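Reading the example and the notes above together, a fully spelled-out workload entry and the pod-side annotations it expects might look like this sketch; the container name "app" and the concrete values are illustrative assumptions, with cpushares written as a number and cpuset as a CPU range string, which is how the two resources are normally expressed:

	[crio.runtime.workloads.workload-type]
	activation_annotation = "io.crio/workload"
	annotation_prefix = "io.crio.workload-type"
	[crio.runtime.workloads.workload-type.resources]
	cpushares = 512
	cpuset = "0-1"
	# Pod annotations:
	#   io.crio/workload: ""                               (activation, key only; value ignored)
	#   io.crio.workload-type/app: {"cpushares": "256"}    (per-container override)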
	I0819 18:37:45.589849  409340 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0819 18:37:45.589859  409340 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0819 18:37:45.589867  409340 command_runner.go:130] > # Default value is set to true
	I0819 18:37:45.589872  409340 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0819 18:37:45.589880  409340 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0819 18:37:45.589885  409340 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0819 18:37:45.589896  409340 command_runner.go:130] > # Default value is set to 'false'
	I0819 18:37:45.589902  409340 command_runner.go:130] > # disable_hostport_mapping = false
	I0819 18:37:45.589908  409340 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0819 18:37:45.589913  409340 command_runner.go:130] > #
	I0819 18:37:45.589919  409340 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0819 18:37:45.589930  409340 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0819 18:37:45.589938  409340 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0819 18:37:45.589944  409340 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0819 18:37:45.589952  409340 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
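As the comments note, registry configuration normally lives in containers-registries.conf(5) rather than in crio.conf; a minimal /etc/containers/registries.conf entry marking a local registry as insecure might look like the sketch below, where the registry address is an illustrative assumption:

	[[registry]]
	location = "192.168.39.1:5000"
	insecure = true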
	I0819 18:37:45.589956  409340 command_runner.go:130] > [crio.image]
	I0819 18:37:45.589962  409340 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0819 18:37:45.589966  409340 command_runner.go:130] > # default_transport = "docker://"
	I0819 18:37:45.589974  409340 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0819 18:37:45.589983  409340 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0819 18:37:45.589987  409340 command_runner.go:130] > # global_auth_file = ""
	I0819 18:37:45.589993  409340 command_runner.go:130] > # The image used to instantiate infra containers.
	I0819 18:37:45.590000  409340 command_runner.go:130] > # This option supports live configuration reload.
	I0819 18:37:45.590007  409340 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0819 18:37:45.590015  409340 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0819 18:37:45.590021  409340 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0819 18:37:45.590028  409340 command_runner.go:130] > # This option supports live configuration reload.
	I0819 18:37:45.590032  409340 command_runner.go:130] > # pause_image_auth_file = ""
	I0819 18:37:45.590039  409340 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0819 18:37:45.590045  409340 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0819 18:37:45.590053  409340 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0819 18:37:45.590059  409340 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0819 18:37:45.590065  409340 command_runner.go:130] > # pause_command = "/pause"
	I0819 18:37:45.590071  409340 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0819 18:37:45.590078  409340 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0819 18:37:45.590084  409340 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0819 18:37:45.590090  409340 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0819 18:37:45.590096  409340 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0819 18:37:45.590101  409340 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0819 18:37:45.590107  409340 command_runner.go:130] > # pinned_images = [
	I0819 18:37:45.590110  409340 command_runner.go:130] > # ]
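Putting the matching rules above into a concrete (hypothetical) value, pinned_images could combine all three pattern styles like this; the pause image is the one pinned by the configuration above, the other names are illustrative assumptions:

	pinned_images = [
		"registry.k8s.io/pause:3.10",   # exact match
		"quay.io/example/agent*",       # glob: trailing wildcard
		"*critical*",                   # keyword: wildcards on both ends
	]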
	I0819 18:37:45.590116  409340 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0819 18:37:45.590131  409340 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0819 18:37:45.590139  409340 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0819 18:37:45.590145  409340 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0819 18:37:45.590152  409340 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0819 18:37:45.590156  409340 command_runner.go:130] > # signature_policy = ""
	I0819 18:37:45.590160  409340 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0819 18:37:45.590169  409340 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0819 18:37:45.590175  409340 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0819 18:37:45.590183  409340 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I0819 18:37:45.590189  409340 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0819 18:37:45.590194  409340 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0819 18:37:45.590200  409340 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0819 18:37:45.590208  409340 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0819 18:37:45.590213  409340 command_runner.go:130] > # changing them here.
	I0819 18:37:45.590221  409340 command_runner.go:130] > # insecure_registries = [
	I0819 18:37:45.590225  409340 command_runner.go:130] > # ]
	I0819 18:37:45.590231  409340 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0819 18:37:45.590238  409340 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0819 18:37:45.590242  409340 command_runner.go:130] > # image_volumes = "mkdir"
	I0819 18:37:45.590248  409340 command_runner.go:130] > # Temporary directory to use for storing big files
	I0819 18:37:45.590252  409340 command_runner.go:130] > # big_files_temporary_dir = ""
	I0819 18:37:45.590260  409340 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0819 18:37:45.590264  409340 command_runner.go:130] > # CNI plugins.
	I0819 18:37:45.590270  409340 command_runner.go:130] > [crio.network]
	I0819 18:37:45.590275  409340 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0819 18:37:45.590281  409340 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0819 18:37:45.590287  409340 command_runner.go:130] > # cni_default_network = ""
	I0819 18:37:45.590292  409340 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0819 18:37:45.590298  409340 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0819 18:37:45.590304  409340 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0819 18:37:45.590310  409340 command_runner.go:130] > # plugin_dirs = [
	I0819 18:37:45.590313  409340 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0819 18:37:45.590316  409340 command_runner.go:130] > # ]
	I0819 18:37:45.590322  409340 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0819 18:37:45.590327  409340 command_runner.go:130] > [crio.metrics]
	I0819 18:37:45.590332  409340 command_runner.go:130] > # Globally enable or disable metrics support.
	I0819 18:37:45.590343  409340 command_runner.go:130] > enable_metrics = true
	I0819 18:37:45.590350  409340 command_runner.go:130] > # Specify enabled metrics collectors.
	I0819 18:37:45.590355  409340 command_runner.go:130] > # Per default all metrics are enabled.
	I0819 18:37:45.590361  409340 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0819 18:37:45.590367  409340 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0819 18:37:45.590377  409340 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0819 18:37:45.590381  409340 command_runner.go:130] > # metrics_collectors = [
	I0819 18:37:45.590385  409340 command_runner.go:130] > # 	"operations",
	I0819 18:37:45.590389  409340 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0819 18:37:45.590396  409340 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0819 18:37:45.590400  409340 command_runner.go:130] > # 	"operations_errors",
	I0819 18:37:45.590406  409340 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0819 18:37:45.590410  409340 command_runner.go:130] > # 	"image_pulls_by_name",
	I0819 18:37:45.590417  409340 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0819 18:37:45.590423  409340 command_runner.go:130] > # 	"image_pulls_failures",
	I0819 18:37:45.590427  409340 command_runner.go:130] > # 	"image_pulls_successes",
	I0819 18:37:45.590432  409340 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0819 18:37:45.590438  409340 command_runner.go:130] > # 	"image_layer_reuse",
	I0819 18:37:45.590442  409340 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0819 18:37:45.590447  409340 command_runner.go:130] > # 	"containers_oom_total",
	I0819 18:37:45.590451  409340 command_runner.go:130] > # 	"containers_oom",
	I0819 18:37:45.590455  409340 command_runner.go:130] > # 	"processes_defunct",
	I0819 18:37:45.590459  409340 command_runner.go:130] > # 	"operations_total",
	I0819 18:37:45.590463  409340 command_runner.go:130] > # 	"operations_latency_seconds",
	I0819 18:37:45.590469  409340 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0819 18:37:45.590474  409340 command_runner.go:130] > # 	"operations_errors_total",
	I0819 18:37:45.590482  409340 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0819 18:37:45.590487  409340 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0819 18:37:45.590493  409340 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0819 18:37:45.590497  409340 command_runner.go:130] > # 	"image_pulls_success_total",
	I0819 18:37:45.590503  409340 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0819 18:37:45.590508  409340 command_runner.go:130] > # 	"containers_oom_count_total",
	I0819 18:37:45.590515  409340 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0819 18:37:45.590520  409340 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0819 18:37:45.590525  409340 command_runner.go:130] > # ]
	I0819 18:37:45.590530  409340 command_runner.go:130] > # The port on which the metrics server will listen.
	I0819 18:37:45.590540  409340 command_runner.go:130] > # metrics_port = 9090
	I0819 18:37:45.590548  409340 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0819 18:37:45.590552  409340 command_runner.go:130] > # metrics_socket = ""
	I0819 18:37:45.590556  409340 command_runner.go:130] > # The certificate for the secure metrics server.
	I0819 18:37:45.590564  409340 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0819 18:37:45.590570  409340 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0819 18:37:45.590575  409340 command_runner.go:130] > # certificate on any modification event.
	I0819 18:37:45.590578  409340 command_runner.go:130] > # metrics_cert = ""
	I0819 18:37:45.590583  409340 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0819 18:37:45.590590  409340 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0819 18:37:45.590594  409340 command_runner.go:130] > # metrics_key = ""
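Given the collector names listed above, a configuration that keeps metrics enabled on the default port but restricts collection to image pulls and OOM events might look like this sketch (the particular selection is an illustrative assumption):

	[crio.metrics]
	enable_metrics = true
	metrics_port = 9090
	metrics_collectors = [
		"image_pulls_success_total",
		"image_pulls_failure_total",
		"containers_oom_count_total",
	]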
	I0819 18:37:45.590601  409340 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0819 18:37:45.590605  409340 command_runner.go:130] > [crio.tracing]
	I0819 18:37:45.590613  409340 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0819 18:37:45.590617  409340 command_runner.go:130] > # enable_tracing = false
	I0819 18:37:45.590625  409340 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0819 18:37:45.590629  409340 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0819 18:37:45.590638  409340 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0819 18:37:45.590642  409340 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
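Turning the commented defaults above into an active tracing setup would amount to uncommenting and adjusting them roughly as follows; the collector address is an illustrative assumption, and the sampling value follows the comment's "always sample" hint:

	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "127.0.0.1:4317"
	tracing_sampling_rate_per_million = 1000000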
	I0819 18:37:45.590648  409340 command_runner.go:130] > # CRI-O NRI configuration.
	I0819 18:37:45.590652  409340 command_runner.go:130] > [crio.nri]
	I0819 18:37:45.590656  409340 command_runner.go:130] > # Globally enable or disable NRI.
	I0819 18:37:45.590662  409340 command_runner.go:130] > # enable_nri = false
	I0819 18:37:45.590665  409340 command_runner.go:130] > # NRI socket to listen on.
	I0819 18:37:45.590670  409340 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0819 18:37:45.590676  409340 command_runner.go:130] > # NRI plugin directory to use.
	I0819 18:37:45.590680  409340 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0819 18:37:45.590685  409340 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0819 18:37:45.590690  409340 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0819 18:37:45.590695  409340 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0819 18:37:45.590701  409340 command_runner.go:130] > # nri_disable_connections = false
	I0819 18:37:45.590706  409340 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0819 18:37:45.590712  409340 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0819 18:37:45.590716  409340 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0819 18:37:45.590722  409340 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
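An NRI-enabled variant of the section above would simply flip the toggle and keep the documented defaults, roughly as in this sketch:

	[crio.nri]
	enable_nri = true
	nri_listen = "/var/run/nri/nri.sock"
	nri_plugin_dir = "/opt/nri/plugins"
	nri_plugin_registration_timeout = "5s"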
	I0819 18:37:45.590728  409340 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0819 18:37:45.590739  409340 command_runner.go:130] > [crio.stats]
	I0819 18:37:45.590747  409340 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0819 18:37:45.590753  409340 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0819 18:37:45.590757  409340 command_runner.go:130] > # stats_collection_period = 0
	I0819 18:37:45.591121  409340 command_runner.go:130] ! time="2024-08-19 18:37:45.544084769Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0819 18:37:45.591149  409340 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0819 18:37:45.591408  409340 cni.go:84] Creating CNI manager for ""
	I0819 18:37:45.591428  409340 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0819 18:37:45.591438  409340 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 18:37:45.591462  409340 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.168 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-528433 NodeName:multinode-528433 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.168"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.168 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 18:37:45.591632  409340 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.168
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-528433"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.168
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.168"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 18:37:45.591725  409340 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 18:37:45.602325  409340 command_runner.go:130] > kubeadm
	I0819 18:37:45.602341  409340 command_runner.go:130] > kubectl
	I0819 18:37:45.602346  409340 command_runner.go:130] > kubelet
	I0819 18:37:45.602363  409340 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 18:37:45.602417  409340 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 18:37:45.612327  409340 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0819 18:37:45.629294  409340 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 18:37:45.645186  409340 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0819 18:37:45.662722  409340 ssh_runner.go:195] Run: grep 192.168.39.168	control-plane.minikube.internal$ /etc/hosts
	I0819 18:37:45.666612  409340 command_runner.go:130] > 192.168.39.168	control-plane.minikube.internal
	I0819 18:37:45.666791  409340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:37:45.827550  409340 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 18:37:45.842818  409340 certs.go:68] Setting up /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/multinode-528433 for IP: 192.168.39.168
	I0819 18:37:45.842848  409340 certs.go:194] generating shared ca certs ...
	I0819 18:37:45.842865  409340 certs.go:226] acquiring lock for ca certs: {Name:mk639e03f593e0bccac045f6e9f5ba3b96cc81e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:37:45.843028  409340 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key
	I0819 18:37:45.843071  409340 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key
	I0819 18:37:45.843080  409340 certs.go:256] generating profile certs ...
	I0819 18:37:45.843155  409340 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/multinode-528433/client.key
	I0819 18:37:45.843217  409340 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/multinode-528433/apiserver.key.fe16ede1
	I0819 18:37:45.843274  409340 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/multinode-528433/proxy-client.key
	I0819 18:37:45.843286  409340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 18:37:45.843300  409340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 18:37:45.843312  409340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 18:37:45.843325  409340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 18:37:45.843335  409340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/multinode-528433/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 18:37:45.843366  409340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/multinode-528433/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 18:37:45.843380  409340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/multinode-528433/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 18:37:45.843390  409340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/multinode-528433/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 18:37:45.843438  409340 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem (1338 bytes)
	W0819 18:37:45.843471  409340 certs.go:480] ignoring /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009_empty.pem, impossibly tiny 0 bytes
	I0819 18:37:45.843481  409340 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 18:37:45.843504  409340 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem (1082 bytes)
	I0819 18:37:45.843525  409340 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem (1123 bytes)
	I0819 18:37:45.843547  409340 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem (1675 bytes)
	I0819 18:37:45.843582  409340 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 18:37:45.843610  409340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:37:45.843623  409340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem -> /usr/share/ca-certificates/380009.pem
	I0819 18:37:45.843633  409340 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> /usr/share/ca-certificates/3800092.pem
	I0819 18:37:45.844326  409340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 18:37:45.872593  409340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 18:37:45.898888  409340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 18:37:45.926991  409340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 18:37:45.952841  409340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/multinode-528433/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 18:37:45.979410  409340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/multinode-528433/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 18:37:46.006798  409340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/multinode-528433/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 18:37:46.031459  409340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/multinode-528433/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 18:37:46.058439  409340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 18:37:46.084626  409340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem --> /usr/share/ca-certificates/380009.pem (1338 bytes)
	I0819 18:37:46.110595  409340 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /usr/share/ca-certificates/3800092.pem (1708 bytes)
	I0819 18:37:46.137678  409340 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 18:37:46.154758  409340 ssh_runner.go:195] Run: openssl version
	I0819 18:37:46.160491  409340 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0819 18:37:46.160621  409340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/380009.pem && ln -fs /usr/share/ca-certificates/380009.pem /etc/ssl/certs/380009.pem"
	I0819 18:37:46.172031  409340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/380009.pem
	I0819 18:37:46.176509  409340 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 19 17:56 /usr/share/ca-certificates/380009.pem
	I0819 18:37:46.176542  409340 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:56 /usr/share/ca-certificates/380009.pem
	I0819 18:37:46.176581  409340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/380009.pem
	I0819 18:37:46.182440  409340 command_runner.go:130] > 51391683
	I0819 18:37:46.182520  409340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/380009.pem /etc/ssl/certs/51391683.0"
	I0819 18:37:46.192213  409340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3800092.pem && ln -fs /usr/share/ca-certificates/3800092.pem /etc/ssl/certs/3800092.pem"
	I0819 18:37:46.203095  409340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3800092.pem
	I0819 18:37:46.207411  409340 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 19 17:56 /usr/share/ca-certificates/3800092.pem
	I0819 18:37:46.207444  409340 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:56 /usr/share/ca-certificates/3800092.pem
	I0819 18:37:46.207477  409340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3800092.pem
	I0819 18:37:46.213101  409340 command_runner.go:130] > 3ec20f2e
	I0819 18:37:46.213168  409340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3800092.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 18:37:46.223423  409340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 18:37:46.235417  409340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:37:46.240155  409340 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 19 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:37:46.240199  409340 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:37:46.240264  409340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:37:46.245976  409340 command_runner.go:130] > b5213941
	I0819 18:37:46.246065  409340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 18:37:46.256008  409340 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 18:37:46.260648  409340 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 18:37:46.260673  409340 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0819 18:37:46.260678  409340 command_runner.go:130] > Device: 253,1	Inode: 4197398     Links: 1
	I0819 18:37:46.260685  409340 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0819 18:37:46.260691  409340 command_runner.go:130] > Access: 2024-08-19 18:30:56.064533706 +0000
	I0819 18:37:46.260696  409340 command_runner.go:130] > Modify: 2024-08-19 18:30:56.065534449 +0000
	I0819 18:37:46.260700  409340 command_runner.go:130] > Change: 2024-08-19 18:30:56.065534449 +0000
	I0819 18:37:46.260704  409340 command_runner.go:130] >  Birth: 2024-08-19 18:30:56.064533706 +0000
	I0819 18:37:46.260793  409340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 18:37:46.266499  409340 command_runner.go:130] > Certificate will not expire
	I0819 18:37:46.266575  409340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 18:37:46.272408  409340 command_runner.go:130] > Certificate will not expire
	I0819 18:37:46.272481  409340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 18:37:46.278497  409340 command_runner.go:130] > Certificate will not expire
	I0819 18:37:46.278568  409340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 18:37:46.284114  409340 command_runner.go:130] > Certificate will not expire
	I0819 18:37:46.284403  409340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 18:37:46.289927  409340 command_runner.go:130] > Certificate will not expire
	I0819 18:37:46.290077  409340 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0819 18:37:46.295754  409340 command_runner.go:130] > Certificate will not expire
	I0819 18:37:46.295820  409340 kubeadm.go:392] StartCluster: {Name:multinode-528433 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:multinode-528433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.107 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.113 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:37:46.295935  409340 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 18:37:46.296004  409340 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 18:37:46.335528  409340 command_runner.go:130] > ed1b7f887e7493a29296d3302ca41b9feb5ce631efe79f5c7346da1ce5f3f5aa
	I0819 18:37:46.335555  409340 command_runner.go:130] > 9e235ceb0a44969b8100fa98407fe1fbe8a39f89a722ec5fba50a7894d1c315b
	I0819 18:37:46.335561  409340 command_runner.go:130] > 057d837bfdf9b2c340d252182c5f52f95286a592b06ccc5f204badee2872440e
	I0819 18:37:46.335580  409340 command_runner.go:130] > a5d6d7978005d7389f31edd035fd6fa05cd87e4a3901ca69aa1b2f9f73576240
	I0819 18:37:46.335592  409340 command_runner.go:130] > e18ee04a7496876e00d3bf4eea0c2cc1bea22033e6265d9eb65c8556c18dbecc
	I0819 18:37:46.335598  409340 command_runner.go:130] > 7c29a242039f25f811c96462be446c25a6154d911bbeffded18a5c5d7b8f8ea4
	I0819 18:37:46.335604  409340 command_runner.go:130] > 8f8613599f748b0dac210582c28b8864da1a0b7e328a1299edffaeebf943a44a
	I0819 18:37:46.335611  409340 command_runner.go:130] > c65c30f3ad8d84072ebc470eb8e6aa5f850402138c0c5b057a93852df19a0f24
	I0819 18:37:46.335633  409340 cri.go:89] found id: "ed1b7f887e7493a29296d3302ca41b9feb5ce631efe79f5c7346da1ce5f3f5aa"
	I0819 18:37:46.335642  409340 cri.go:89] found id: "9e235ceb0a44969b8100fa98407fe1fbe8a39f89a722ec5fba50a7894d1c315b"
	I0819 18:37:46.335645  409340 cri.go:89] found id: "057d837bfdf9b2c340d252182c5f52f95286a592b06ccc5f204badee2872440e"
	I0819 18:37:46.335649  409340 cri.go:89] found id: "a5d6d7978005d7389f31edd035fd6fa05cd87e4a3901ca69aa1b2f9f73576240"
	I0819 18:37:46.335653  409340 cri.go:89] found id: "e18ee04a7496876e00d3bf4eea0c2cc1bea22033e6265d9eb65c8556c18dbecc"
	I0819 18:37:46.335657  409340 cri.go:89] found id: "7c29a242039f25f811c96462be446c25a6154d911bbeffded18a5c5d7b8f8ea4"
	I0819 18:37:46.335660  409340 cri.go:89] found id: "8f8613599f748b0dac210582c28b8864da1a0b7e328a1299edffaeebf943a44a"
	I0819 18:37:46.335662  409340 cri.go:89] found id: "c65c30f3ad8d84072ebc470eb8e6aa5f850402138c0c5b057a93852df19a0f24"
	I0819 18:37:46.335665  409340 cri.go:89] found id: ""
	I0819 18:37:46.335726  409340 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 19 18:41:57 multinode-528433 crio[2738]: time="2024-08-19 18:41:57.082587024Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092917082558505,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aff7575c-06e0-4056-86e2-80aedc5ce21a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:41:57 multinode-528433 crio[2738]: time="2024-08-19 18:41:57.083098620Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b6d00af6-1510-40f9-b5eb-3bc648e23ae4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:41:57 multinode-528433 crio[2738]: time="2024-08-19 18:41:57.083187674Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b6d00af6-1510-40f9-b5eb-3bc648e23ae4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:41:57 multinode-528433 crio[2738]: time="2024-08-19 18:41:57.083823711Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:72a17195d6bf67942986a46e56fba67e75056f3f131edb583ec1fee36c6ae2d9,PodSandboxId:a66751ad9ff4b7dd8a62b46a4a4583c86d2fc242a5faa7a882286627ee3aa531,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724092707039774210,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7rfnn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e711971-6865-4191-b5fa-b045b4653330,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f0cc386f169fbef2abbfeb9de505aff3998aa7d54b7f3eee2b29d3c03dec1da,PodSandboxId:1ae2b060ade861dc63c5234d1883d3f0cbef337e6aee2f6d619ac644361ab3ca,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724092673558617442,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n2rkp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6bb98ad-bda2-447c-a80a-b344e03d1c91,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10caf3a7930bac1eeced86d6165d138f108504c200840b0130e4e5bc5ef69b80,PodSandboxId:a9614e5e446e321f3d7e05c2bed412acb3d46511060d4c40a1f27d7984f1c095,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724092673464668288,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fz4lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 492965e3-fe40-49d9-8d90-3d25bdc67d6a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dcb798727b65fc265db910cab3cfa2e0ac5496715c0adb579c5a659e0c767b8,PodSandboxId:8112158e3370e32ff89041b8d1ce455d489bfea82d7c2be21a684c5fbecbd714,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724092673447341747,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 066868c5-cc0d-43bb-bdaf-f8ef664a5829,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a21bb460a42c175e6fcfe880334b3416a31c862ad41f4046dde00c3a50bf99ac,PodSandboxId:fe97abf92a961c599d0942d24914be81d7f8fda0743f294108940b802968dcd0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724092673376821797,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p26jv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28ac0348-1a53-4a4c-b0f5-0771f9ab8179,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:580fb4c199750cc4e95fcf711e440dc76ff14f3b53d8a6997f621ca5b7bb4518,PodSandboxId:ec850d38417e844c38d6c2cf40506877ec7dfbd96dbb3406587fee1007e86201,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724092668515792156,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fee372227a6243e6c504b433e9dc3d8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:422d4bc5ba6865ba6386db8aac55e0668aac92c409da53ac44c1d7750424fec7,PodSandboxId:058fc9c59d6119baf38422cc75b4f90ab7826ad54f3af1afc0675a3af83ff043,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724092668494818663,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43ec279f896b4ee770677d0bae22c4b1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a81508e447df1c7bfda53e67d1b4030870a749ba659d35497fe4adecbcf41a9e,PodSandboxId:07a33f7aa1fc1db1a29caf20a03ba3e05f8eac70e4c6061af227896435a5b583,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724092668438039716,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96878698fee7f503b18654c4aea536a8,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0f607c724637f67917051010258f5d5d9d65d9a1966825b84ccb41087c55584,PodSandboxId:d278625234eca5cd6eb49cbed77ae24c11a6b2dc250d04df0d1e742f9248f6c5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724092668381932769,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4955a086665c86d028d1d703c01db303,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f224d23c4d8f44f6775dc540bb3177686565fe9dd4224d12bc016af418710837,PodSandboxId:5a66f35c811553073f2f3811564ee8a98cc8a9d42eac401ac3f3e5c2dec93f90,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724092339879158107,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7rfnn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e711971-6865-4191-b5fa-b045b4653330,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed1b7f887e7493a29296d3302ca41b9feb5ce631efe79f5c7346da1ce5f3f5aa,PodSandboxId:ceb057fc9c46954d8f4a27e13f091e6ba329f2eb2c19345e5311fc805a372cc8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724092286333371591,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fz4lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 492965e3-fe40-49d9-8d90-3d25bdc67d6a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e235ceb0a44969b8100fa98407fe1fbe8a39f89a722ec5fba50a7894d1c315b,PodSandboxId:8de468fe51d3c4f2cff19a27ebefd5ee016ffd1fb280b0cfa04fb5d8edf263f1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724092286273438089,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 066868c5-cc0d-43bb-bdaf-f8ef664a5829,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:057d837bfdf9b2c340d252182c5f52f95286a592b06ccc5f204badee2872440e,PodSandboxId:d5e3379f56814476907e612492d03f333c231c34477a32ac79018389c6afdcd7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724092274155853097,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n2rkp,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: c6bb98ad-bda2-447c-a80a-b344e03d1c91,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5d6d7978005d7389f31edd035fd6fa05cd87e4a3901ca69aa1b2f9f73576240,PodSandboxId:877c89858b3f669bbca9ae01e4e630458f04d5ff4a9d4bced862d0d1c1b0ba59,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724092271823692641,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p26jv,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 28ac0348-1a53-4a4c-b0f5-0771f9ab8179,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e18ee04a7496876e00d3bf4eea0c2cc1bea22033e6265d9eb65c8556c18dbecc,PodSandboxId:cb470e280bb52b1ef495582750d7a53f8e226406c8abe2072527d1b05a734c36,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724092259746908878,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-528433,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 4955a086665c86d028d1d703c01db303,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c29a242039f25f811c96462be446c25a6154d911bbeffded18a5c5d7b8f8ea4,PodSandboxId:d4079a671c6959278f96d3504ca0c4360fac69130c3ec5050df94d901fc2dd87,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724092259742342579,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 43ec279f896b4ee770677d0bae22c4b1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f8613599f748b0dac210582c28b8864da1a0b7e328a1299edffaeebf943a44a,PodSandboxId:25a8248374561b640ce1e6ecf7e2b9af1a1b78e773fa2083d765ad0735d9757b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724092259716943720,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fee372227a6243e6c504b433e9dc3d8,},
Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c65c30f3ad8d84072ebc470eb8e6aa5f850402138c0c5b057a93852df19a0f24,PodSandboxId:203cae8e180f3260e3a8def82d9ad87ff2903a20f9dd92fd570436a9a7cb9291,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724092259655024951,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96878698fee7f503b18654c4aea536a8,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b6d00af6-1510-40f9-b5eb-3bc648e23ae4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:41:57 multinode-528433 crio[2738]: time="2024-08-19 18:41:57.125223509Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f3828d42-a495-4efd-b717-e4bffa893274 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:41:57 multinode-528433 crio[2738]: time="2024-08-19 18:41:57.125356539Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f3828d42-a495-4efd-b717-e4bffa893274 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:41:57 multinode-528433 crio[2738]: time="2024-08-19 18:41:57.126456292Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=226f841c-6953-47fb-b847-5f780edece84 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:41:57 multinode-528433 crio[2738]: time="2024-08-19 18:41:57.126929913Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092917126907280,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=226f841c-6953-47fb-b847-5f780edece84 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:41:57 multinode-528433 crio[2738]: time="2024-08-19 18:41:57.128592864Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9b9928ec-aba6-48d4-9f8a-e800d2b66b6e name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:41:57 multinode-528433 crio[2738]: time="2024-08-19 18:41:57.129020349Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9b9928ec-aba6-48d4-9f8a-e800d2b66b6e name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:41:57 multinode-528433 crio[2738]: time="2024-08-19 18:41:57.129761213Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:72a17195d6bf67942986a46e56fba67e75056f3f131edb583ec1fee36c6ae2d9,PodSandboxId:a66751ad9ff4b7dd8a62b46a4a4583c86d2fc242a5faa7a882286627ee3aa531,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724092707039774210,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7rfnn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e711971-6865-4191-b5fa-b045b4653330,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f0cc386f169fbef2abbfeb9de505aff3998aa7d54b7f3eee2b29d3c03dec1da,PodSandboxId:1ae2b060ade861dc63c5234d1883d3f0cbef337e6aee2f6d619ac644361ab3ca,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724092673558617442,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n2rkp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6bb98ad-bda2-447c-a80a-b344e03d1c91,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10caf3a7930bac1eeced86d6165d138f108504c200840b0130e4e5bc5ef69b80,PodSandboxId:a9614e5e446e321f3d7e05c2bed412acb3d46511060d4c40a1f27d7984f1c095,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724092673464668288,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fz4lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 492965e3-fe40-49d9-8d90-3d25bdc67d6a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dcb798727b65fc265db910cab3cfa2e0ac5496715c0adb579c5a659e0c767b8,PodSandboxId:8112158e3370e32ff89041b8d1ce455d489bfea82d7c2be21a684c5fbecbd714,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724092673447341747,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 066868c5-cc0d-43bb-bdaf-f8ef664a5829,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a21bb460a42c175e6fcfe880334b3416a31c862ad41f4046dde00c3a50bf99ac,PodSandboxId:fe97abf92a961c599d0942d24914be81d7f8fda0743f294108940b802968dcd0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724092673376821797,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p26jv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28ac0348-1a53-4a4c-b0f5-0771f9ab8179,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:580fb4c199750cc4e95fcf711e440dc76ff14f3b53d8a6997f621ca5b7bb4518,PodSandboxId:ec850d38417e844c38d6c2cf40506877ec7dfbd96dbb3406587fee1007e86201,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724092668515792156,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fee372227a6243e6c504b433e9dc3d8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:422d4bc5ba6865ba6386db8aac55e0668aac92c409da53ac44c1d7750424fec7,PodSandboxId:058fc9c59d6119baf38422cc75b4f90ab7826ad54f3af1afc0675a3af83ff043,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724092668494818663,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43ec279f896b4ee770677d0bae22c4b1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a81508e447df1c7bfda53e67d1b4030870a749ba659d35497fe4adecbcf41a9e,PodSandboxId:07a33f7aa1fc1db1a29caf20a03ba3e05f8eac70e4c6061af227896435a5b583,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724092668438039716,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96878698fee7f503b18654c4aea536a8,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0f607c724637f67917051010258f5d5d9d65d9a1966825b84ccb41087c55584,PodSandboxId:d278625234eca5cd6eb49cbed77ae24c11a6b2dc250d04df0d1e742f9248f6c5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724092668381932769,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4955a086665c86d028d1d703c01db303,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f224d23c4d8f44f6775dc540bb3177686565fe9dd4224d12bc016af418710837,PodSandboxId:5a66f35c811553073f2f3811564ee8a98cc8a9d42eac401ac3f3e5c2dec93f90,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724092339879158107,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7rfnn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e711971-6865-4191-b5fa-b045b4653330,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed1b7f887e7493a29296d3302ca41b9feb5ce631efe79f5c7346da1ce5f3f5aa,PodSandboxId:ceb057fc9c46954d8f4a27e13f091e6ba329f2eb2c19345e5311fc805a372cc8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724092286333371591,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fz4lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 492965e3-fe40-49d9-8d90-3d25bdc67d6a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e235ceb0a44969b8100fa98407fe1fbe8a39f89a722ec5fba50a7894d1c315b,PodSandboxId:8de468fe51d3c4f2cff19a27ebefd5ee016ffd1fb280b0cfa04fb5d8edf263f1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724092286273438089,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 066868c5-cc0d-43bb-bdaf-f8ef664a5829,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:057d837bfdf9b2c340d252182c5f52f95286a592b06ccc5f204badee2872440e,PodSandboxId:d5e3379f56814476907e612492d03f333c231c34477a32ac79018389c6afdcd7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724092274155853097,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n2rkp,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: c6bb98ad-bda2-447c-a80a-b344e03d1c91,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5d6d7978005d7389f31edd035fd6fa05cd87e4a3901ca69aa1b2f9f73576240,PodSandboxId:877c89858b3f669bbca9ae01e4e630458f04d5ff4a9d4bced862d0d1c1b0ba59,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724092271823692641,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p26jv,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 28ac0348-1a53-4a4c-b0f5-0771f9ab8179,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e18ee04a7496876e00d3bf4eea0c2cc1bea22033e6265d9eb65c8556c18dbecc,PodSandboxId:cb470e280bb52b1ef495582750d7a53f8e226406c8abe2072527d1b05a734c36,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724092259746908878,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-528433,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 4955a086665c86d028d1d703c01db303,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c29a242039f25f811c96462be446c25a6154d911bbeffded18a5c5d7b8f8ea4,PodSandboxId:d4079a671c6959278f96d3504ca0c4360fac69130c3ec5050df94d901fc2dd87,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724092259742342579,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 43ec279f896b4ee770677d0bae22c4b1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f8613599f748b0dac210582c28b8864da1a0b7e328a1299edffaeebf943a44a,PodSandboxId:25a8248374561b640ce1e6ecf7e2b9af1a1b78e773fa2083d765ad0735d9757b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724092259716943720,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fee372227a6243e6c504b433e9dc3d8,},
Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c65c30f3ad8d84072ebc470eb8e6aa5f850402138c0c5b057a93852df19a0f24,PodSandboxId:203cae8e180f3260e3a8def82d9ad87ff2903a20f9dd92fd570436a9a7cb9291,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724092259655024951,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96878698fee7f503b18654c4aea536a8,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9b9928ec-aba6-48d4-9f8a-e800d2b66b6e name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:41:57 multinode-528433 crio[2738]: time="2024-08-19 18:41:57.177424375Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d45f370b-5aeb-410a-ba05-aad8690d1f3a name=/runtime.v1.RuntimeService/Version
	Aug 19 18:41:57 multinode-528433 crio[2738]: time="2024-08-19 18:41:57.177499041Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d45f370b-5aeb-410a-ba05-aad8690d1f3a name=/runtime.v1.RuntimeService/Version
	Aug 19 18:41:57 multinode-528433 crio[2738]: time="2024-08-19 18:41:57.178583925Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ef1e5a17-9d34-4796-aced-dedee3c4f103 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:41:57 multinode-528433 crio[2738]: time="2024-08-19 18:41:57.179160225Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092917178985911,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ef1e5a17-9d34-4796-aced-dedee3c4f103 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:41:57 multinode-528433 crio[2738]: time="2024-08-19 18:41:57.179839829Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ac6b9286-d12b-40bf-ad88-cbc6fb672b98 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:41:57 multinode-528433 crio[2738]: time="2024-08-19 18:41:57.179911464Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ac6b9286-d12b-40bf-ad88-cbc6fb672b98 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:41:57 multinode-528433 crio[2738]: time="2024-08-19 18:41:57.180390969Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:72a17195d6bf67942986a46e56fba67e75056f3f131edb583ec1fee36c6ae2d9,PodSandboxId:a66751ad9ff4b7dd8a62b46a4a4583c86d2fc242a5faa7a882286627ee3aa531,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724092707039774210,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7rfnn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e711971-6865-4191-b5fa-b045b4653330,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f0cc386f169fbef2abbfeb9de505aff3998aa7d54b7f3eee2b29d3c03dec1da,PodSandboxId:1ae2b060ade861dc63c5234d1883d3f0cbef337e6aee2f6d619ac644361ab3ca,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724092673558617442,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n2rkp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6bb98ad-bda2-447c-a80a-b344e03d1c91,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10caf3a7930bac1eeced86d6165d138f108504c200840b0130e4e5bc5ef69b80,PodSandboxId:a9614e5e446e321f3d7e05c2bed412acb3d46511060d4c40a1f27d7984f1c095,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724092673464668288,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fz4lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 492965e3-fe40-49d9-8d90-3d25bdc67d6a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dcb798727b65fc265db910cab3cfa2e0ac5496715c0adb579c5a659e0c767b8,PodSandboxId:8112158e3370e32ff89041b8d1ce455d489bfea82d7c2be21a684c5fbecbd714,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724092673447341747,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 066868c5-cc0d-43bb-bdaf-f8ef664a5829,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a21bb460a42c175e6fcfe880334b3416a31c862ad41f4046dde00c3a50bf99ac,PodSandboxId:fe97abf92a961c599d0942d24914be81d7f8fda0743f294108940b802968dcd0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724092673376821797,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p26jv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28ac0348-1a53-4a4c-b0f5-0771f9ab8179,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:580fb4c199750cc4e95fcf711e440dc76ff14f3b53d8a6997f621ca5b7bb4518,PodSandboxId:ec850d38417e844c38d6c2cf40506877ec7dfbd96dbb3406587fee1007e86201,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724092668515792156,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fee372227a6243e6c504b433e9dc3d8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:422d4bc5ba6865ba6386db8aac55e0668aac92c409da53ac44c1d7750424fec7,PodSandboxId:058fc9c59d6119baf38422cc75b4f90ab7826ad54f3af1afc0675a3af83ff043,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724092668494818663,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43ec279f896b4ee770677d0bae22c4b1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a81508e447df1c7bfda53e67d1b4030870a749ba659d35497fe4adecbcf41a9e,PodSandboxId:07a33f7aa1fc1db1a29caf20a03ba3e05f8eac70e4c6061af227896435a5b583,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724092668438039716,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96878698fee7f503b18654c4aea536a8,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0f607c724637f67917051010258f5d5d9d65d9a1966825b84ccb41087c55584,PodSandboxId:d278625234eca5cd6eb49cbed77ae24c11a6b2dc250d04df0d1e742f9248f6c5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724092668381932769,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4955a086665c86d028d1d703c01db303,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f224d23c4d8f44f6775dc540bb3177686565fe9dd4224d12bc016af418710837,PodSandboxId:5a66f35c811553073f2f3811564ee8a98cc8a9d42eac401ac3f3e5c2dec93f90,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724092339879158107,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7rfnn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e711971-6865-4191-b5fa-b045b4653330,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed1b7f887e7493a29296d3302ca41b9feb5ce631efe79f5c7346da1ce5f3f5aa,PodSandboxId:ceb057fc9c46954d8f4a27e13f091e6ba329f2eb2c19345e5311fc805a372cc8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724092286333371591,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fz4lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 492965e3-fe40-49d9-8d90-3d25bdc67d6a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e235ceb0a44969b8100fa98407fe1fbe8a39f89a722ec5fba50a7894d1c315b,PodSandboxId:8de468fe51d3c4f2cff19a27ebefd5ee016ffd1fb280b0cfa04fb5d8edf263f1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724092286273438089,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 066868c5-cc0d-43bb-bdaf-f8ef664a5829,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:057d837bfdf9b2c340d252182c5f52f95286a592b06ccc5f204badee2872440e,PodSandboxId:d5e3379f56814476907e612492d03f333c231c34477a32ac79018389c6afdcd7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724092274155853097,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n2rkp,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: c6bb98ad-bda2-447c-a80a-b344e03d1c91,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5d6d7978005d7389f31edd035fd6fa05cd87e4a3901ca69aa1b2f9f73576240,PodSandboxId:877c89858b3f669bbca9ae01e4e630458f04d5ff4a9d4bced862d0d1c1b0ba59,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724092271823692641,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p26jv,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 28ac0348-1a53-4a4c-b0f5-0771f9ab8179,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e18ee04a7496876e00d3bf4eea0c2cc1bea22033e6265d9eb65c8556c18dbecc,PodSandboxId:cb470e280bb52b1ef495582750d7a53f8e226406c8abe2072527d1b05a734c36,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724092259746908878,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-528433,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 4955a086665c86d028d1d703c01db303,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c29a242039f25f811c96462be446c25a6154d911bbeffded18a5c5d7b8f8ea4,PodSandboxId:d4079a671c6959278f96d3504ca0c4360fac69130c3ec5050df94d901fc2dd87,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724092259742342579,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 43ec279f896b4ee770677d0bae22c4b1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f8613599f748b0dac210582c28b8864da1a0b7e328a1299edffaeebf943a44a,PodSandboxId:25a8248374561b640ce1e6ecf7e2b9af1a1b78e773fa2083d765ad0735d9757b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724092259716943720,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fee372227a6243e6c504b433e9dc3d8,},
Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c65c30f3ad8d84072ebc470eb8e6aa5f850402138c0c5b057a93852df19a0f24,PodSandboxId:203cae8e180f3260e3a8def82d9ad87ff2903a20f9dd92fd570436a9a7cb9291,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724092259655024951,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96878698fee7f503b18654c4aea536a8,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ac6b9286-d12b-40bf-ad88-cbc6fb672b98 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:41:57 multinode-528433 crio[2738]: time="2024-08-19 18:41:57.225738830Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=208715a6-6259-4d34-a19e-da5e010af0c1 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:41:57 multinode-528433 crio[2738]: time="2024-08-19 18:41:57.225833335Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=208715a6-6259-4d34-a19e-da5e010af0c1 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:41:57 multinode-528433 crio[2738]: time="2024-08-19 18:41:57.226759239Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8e1959a9-e497-4407-956e-efe37330f9d4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:41:57 multinode-528433 crio[2738]: time="2024-08-19 18:41:57.227173183Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092917227150712,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8e1959a9-e497-4407-956e-efe37330f9d4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:41:57 multinode-528433 crio[2738]: time="2024-08-19 18:41:57.227770560Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d58967e3-001f-4b84-84dc-d15bd60ee904 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:41:57 multinode-528433 crio[2738]: time="2024-08-19 18:41:57.227849909Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d58967e3-001f-4b84-84dc-d15bd60ee904 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:41:57 multinode-528433 crio[2738]: time="2024-08-19 18:41:57.228234742Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:72a17195d6bf67942986a46e56fba67e75056f3f131edb583ec1fee36c6ae2d9,PodSandboxId:a66751ad9ff4b7dd8a62b46a4a4583c86d2fc242a5faa7a882286627ee3aa531,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724092707039774210,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7rfnn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e711971-6865-4191-b5fa-b045b4653330,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f0cc386f169fbef2abbfeb9de505aff3998aa7d54b7f3eee2b29d3c03dec1da,PodSandboxId:1ae2b060ade861dc63c5234d1883d3f0cbef337e6aee2f6d619ac644361ab3ca,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724092673558617442,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n2rkp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6bb98ad-bda2-447c-a80a-b344e03d1c91,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10caf3a7930bac1eeced86d6165d138f108504c200840b0130e4e5bc5ef69b80,PodSandboxId:a9614e5e446e321f3d7e05c2bed412acb3d46511060d4c40a1f27d7984f1c095,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724092673464668288,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fz4lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 492965e3-fe40-49d9-8d90-3d25bdc67d6a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dcb798727b65fc265db910cab3cfa2e0ac5496715c0adb579c5a659e0c767b8,PodSandboxId:8112158e3370e32ff89041b8d1ce455d489bfea82d7c2be21a684c5fbecbd714,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724092673447341747,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 066868c5-cc0d-43bb-bdaf-f8ef664a5829,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a21bb460a42c175e6fcfe880334b3416a31c862ad41f4046dde00c3a50bf99ac,PodSandboxId:fe97abf92a961c599d0942d24914be81d7f8fda0743f294108940b802968dcd0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724092673376821797,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p26jv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28ac0348-1a53-4a4c-b0f5-0771f9ab8179,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:580fb4c199750cc4e95fcf711e440dc76ff14f3b53d8a6997f621ca5b7bb4518,PodSandboxId:ec850d38417e844c38d6c2cf40506877ec7dfbd96dbb3406587fee1007e86201,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724092668515792156,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fee372227a6243e6c504b433e9dc3d8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:422d4bc5ba6865ba6386db8aac55e0668aac92c409da53ac44c1d7750424fec7,PodSandboxId:058fc9c59d6119baf38422cc75b4f90ab7826ad54f3af1afc0675a3af83ff043,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724092668494818663,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43ec279f896b4ee770677d0bae22c4b1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a81508e447df1c7bfda53e67d1b4030870a749ba659d35497fe4adecbcf41a9e,PodSandboxId:07a33f7aa1fc1db1a29caf20a03ba3e05f8eac70e4c6061af227896435a5b583,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724092668438039716,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96878698fee7f503b18654c4aea536a8,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0f607c724637f67917051010258f5d5d9d65d9a1966825b84ccb41087c55584,PodSandboxId:d278625234eca5cd6eb49cbed77ae24c11a6b2dc250d04df0d1e742f9248f6c5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724092668381932769,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4955a086665c86d028d1d703c01db303,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f224d23c4d8f44f6775dc540bb3177686565fe9dd4224d12bc016af418710837,PodSandboxId:5a66f35c811553073f2f3811564ee8a98cc8a9d42eac401ac3f3e5c2dec93f90,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724092339879158107,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7rfnn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e711971-6865-4191-b5fa-b045b4653330,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed1b7f887e7493a29296d3302ca41b9feb5ce631efe79f5c7346da1ce5f3f5aa,PodSandboxId:ceb057fc9c46954d8f4a27e13f091e6ba329f2eb2c19345e5311fc805a372cc8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724092286333371591,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fz4lc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 492965e3-fe40-49d9-8d90-3d25bdc67d6a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e235ceb0a44969b8100fa98407fe1fbe8a39f89a722ec5fba50a7894d1c315b,PodSandboxId:8de468fe51d3c4f2cff19a27ebefd5ee016ffd1fb280b0cfa04fb5d8edf263f1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724092286273438089,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 066868c5-cc0d-43bb-bdaf-f8ef664a5829,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:057d837bfdf9b2c340d252182c5f52f95286a592b06ccc5f204badee2872440e,PodSandboxId:d5e3379f56814476907e612492d03f333c231c34477a32ac79018389c6afdcd7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724092274155853097,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n2rkp,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: c6bb98ad-bda2-447c-a80a-b344e03d1c91,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5d6d7978005d7389f31edd035fd6fa05cd87e4a3901ca69aa1b2f9f73576240,PodSandboxId:877c89858b3f669bbca9ae01e4e630458f04d5ff4a9d4bced862d0d1c1b0ba59,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724092271823692641,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p26jv,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 28ac0348-1a53-4a4c-b0f5-0771f9ab8179,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e18ee04a7496876e00d3bf4eea0c2cc1bea22033e6265d9eb65c8556c18dbecc,PodSandboxId:cb470e280bb52b1ef495582750d7a53f8e226406c8abe2072527d1b05a734c36,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724092259746908878,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-528433,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 4955a086665c86d028d1d703c01db303,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c29a242039f25f811c96462be446c25a6154d911bbeffded18a5c5d7b8f8ea4,PodSandboxId:d4079a671c6959278f96d3504ca0c4360fac69130c3ec5050df94d901fc2dd87,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724092259742342579,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 43ec279f896b4ee770677d0bae22c4b1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f8613599f748b0dac210582c28b8864da1a0b7e328a1299edffaeebf943a44a,PodSandboxId:25a8248374561b640ce1e6ecf7e2b9af1a1b78e773fa2083d765ad0735d9757b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724092259716943720,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fee372227a6243e6c504b433e9dc3d8,},
Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c65c30f3ad8d84072ebc470eb8e6aa5f850402138c0c5b057a93852df19a0f24,PodSandboxId:203cae8e180f3260e3a8def82d9ad87ff2903a20f9dd92fd570436a9a7cb9291,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724092259655024951,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-528433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96878698fee7f503b18654c4aea536a8,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d58967e3-001f-4b84-84dc-d15bd60ee904 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	72a17195d6bf6       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   a66751ad9ff4b       busybox-7dff88458-7rfnn
	8f0cc386f169f       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      4 minutes ago       Running             kindnet-cni               1                   1ae2b060ade86       kindnet-n2rkp
	10caf3a7930ba       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   a9614e5e446e3       coredns-6f6b679f8f-fz4lc
	5dcb798727b65       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   8112158e3370e       storage-provisioner
	a21bb460a42c1       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      4 minutes ago       Running             kube-proxy                1                   fe97abf92a961       kube-proxy-p26jv
	580fb4c199750       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      1                   ec850d38417e8       etcd-multinode-528433
	422d4bc5ba686       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      4 minutes ago       Running             kube-scheduler            1                   058fc9c59d611       kube-scheduler-multinode-528433
	a81508e447df1       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      4 minutes ago       Running             kube-apiserver            1                   07a33f7aa1fc1       kube-apiserver-multinode-528433
	a0f607c724637       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      4 minutes ago       Running             kube-controller-manager   1                   d278625234eca       kube-controller-manager-multinode-528433
	f224d23c4d8f4       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   5a66f35c81155       busybox-7dff88458-7rfnn
	ed1b7f887e749       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago      Exited              coredns                   0                   ceb057fc9c469       coredns-6f6b679f8f-fz4lc
	9e235ceb0a449       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   8de468fe51d3c       storage-provisioner
	057d837bfdf9b       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    10 minutes ago      Exited              kindnet-cni               0                   d5e3379f56814       kindnet-n2rkp
	a5d6d7978005d       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      10 minutes ago      Exited              kube-proxy                0                   877c89858b3f6       kube-proxy-p26jv
	e18ee04a74968       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      10 minutes ago      Exited              kube-controller-manager   0                   cb470e280bb52       kube-controller-manager-multinode-528433
	7c29a242039f2       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      10 minutes ago      Exited              kube-scheduler            0                   d4079a671c695       kube-scheduler-multinode-528433
	8f8613599f748       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      10 minutes ago      Exited              etcd                      0                   25a8248374561       etcd-multinode-528433
	c65c30f3ad8d8       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      10 minutes ago      Exited              kube-apiserver            0                   203cae8e180f3       kube-apiserver-multinode-528433
	
	
	==> coredns [10caf3a7930bac1eeced86d6165d138f108504c200840b0130e4e5bc5ef69b80] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:32985 - 55068 "HINFO IN 2276461329978003692.3688836495844611696. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019112479s
	
	
	==> coredns [ed1b7f887e7493a29296d3302ca41b9feb5ce631efe79f5c7346da1ce5f3f5aa] <==
	[INFO] 10.244.1.2:45136 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001778026s
	[INFO] 10.244.1.2:39595 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000093282s
	[INFO] 10.244.1.2:52118 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000067848s
	[INFO] 10.244.1.2:35693 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001214491s
	[INFO] 10.244.1.2:53820 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000063204s
	[INFO] 10.244.1.2:36563 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060887s
	[INFO] 10.244.1.2:41229 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064173s
	[INFO] 10.244.0.3:37769 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000083799s
	[INFO] 10.244.0.3:54377 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000059108s
	[INFO] 10.244.0.3:34587 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076758s
	[INFO] 10.244.0.3:47718 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000042003s
	[INFO] 10.244.1.2:51694 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000160518s
	[INFO] 10.244.1.2:54523 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117392s
	[INFO] 10.244.1.2:45410 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076187s
	[INFO] 10.244.1.2:36210 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090522s
	[INFO] 10.244.0.3:44693 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122508s
	[INFO] 10.244.0.3:53188 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000110882s
	[INFO] 10.244.0.3:35460 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00009435s
	[INFO] 10.244.0.3:48546 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00006206s
	[INFO] 10.244.1.2:37874 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157291s
	[INFO] 10.244.1.2:50300 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000102288s
	[INFO] 10.244.1.2:50241 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000095737s
	[INFO] 10.244.1.2:47032 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000069064s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-528433
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-528433
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411
	                    minikube.k8s.io/name=multinode-528433
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T18_31_05_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 18:31:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-528433
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 18:41:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 18:37:52 +0000   Mon, 19 Aug 2024 18:31:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 18:37:52 +0000   Mon, 19 Aug 2024 18:31:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 18:37:52 +0000   Mon, 19 Aug 2024 18:31:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 18:37:52 +0000   Mon, 19 Aug 2024 18:31:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.168
	  Hostname:    multinode-528433
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a5065be9838d4acd9d9f081f00a42b7b
	  System UUID:                a5065be9-838d-4acd-9d9f-081f00a42b7b
	  Boot ID:                    cd729d8e-64bb-410c-9f54-c5249111761b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7rfnn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m41s
	  kube-system                 coredns-6f6b679f8f-fz4lc                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-528433                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-n2rkp                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-528433             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-528433    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-p26jv                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-528433             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 4m3s                   kube-proxy       
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node multinode-528433 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node multinode-528433 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node multinode-528433 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node multinode-528433 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node multinode-528433 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node multinode-528433 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           10m                    node-controller  Node multinode-528433 event: Registered Node multinode-528433 in Controller
	  Normal  NodeReady                10m                    kubelet          Node multinode-528433 status is now: NodeReady
	  Normal  Starting                 4m10s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m10s (x8 over 4m10s)  kubelet          Node multinode-528433 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m10s (x8 over 4m10s)  kubelet          Node multinode-528433 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m10s (x7 over 4m10s)  kubelet          Node multinode-528433 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m2s                   node-controller  Node multinode-528433 event: Registered Node multinode-528433 in Controller
	
	
	Name:               multinode-528433-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-528433-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411
	                    minikube.k8s.io/name=multinode-528433
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T18_38_30_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 18:38:30 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-528433-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 18:39:31 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 19 Aug 2024 18:39:01 +0000   Mon, 19 Aug 2024 18:40:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 19 Aug 2024 18:39:01 +0000   Mon, 19 Aug 2024 18:40:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 19 Aug 2024 18:39:01 +0000   Mon, 19 Aug 2024 18:40:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 19 Aug 2024 18:39:01 +0000   Mon, 19 Aug 2024 18:40:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.107
	  Hostname:    multinode-528433-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b87bdc29e80543e191b01af8a0a8ce51
	  System UUID:                b87bdc29-e805-43e1-91b0-1af8a0a8ce51
	  Boot ID:                    8993ba52-c500-46b0-af42-831d10624bba
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-vmbvk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 kindnet-l9wzp              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-7wbgt           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m58s                  kube-proxy       
	  Normal  Starting                 3m22s                  kube-proxy       
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)      kubelet          Node multinode-528433-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)      kubelet          Node multinode-528433-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)      kubelet          Node multinode-528433-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                9m43s                  kubelet          Node multinode-528433-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m27s (x2 over 3m27s)  kubelet          Node multinode-528433-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m27s (x2 over 3m27s)  kubelet          Node multinode-528433-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m27s (x2 over 3m27s)  kubelet          Node multinode-528433-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m22s                  node-controller  Node multinode-528433-m02 event: Registered Node multinode-528433-m02 in Controller
	  Normal  NodeReady                3m7s                   kubelet          Node multinode-528433-m02 status is now: NodeReady
	  Normal  NodeNotReady             102s                   node-controller  Node multinode-528433-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.056804] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.185782] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.130416] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.267325] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +4.001571] systemd-fstab-generator[759]: Ignoring "noauto" option for root device
	[  +4.088576] systemd-fstab-generator[890]: Ignoring "noauto" option for root device
	[  +0.058151] kauditd_printk_skb: 158 callbacks suppressed
	[Aug19 18:31] systemd-fstab-generator[1226]: Ignoring "noauto" option for root device
	[  +0.090579] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.616981] systemd-fstab-generator[1326]: Ignoring "noauto" option for root device
	[  +1.080821] kauditd_printk_skb: 43 callbacks suppressed
	[ +15.760285] kauditd_printk_skb: 38 callbacks suppressed
	[Aug19 18:32] kauditd_printk_skb: 12 callbacks suppressed
	[Aug19 18:37] systemd-fstab-generator[2658]: Ignoring "noauto" option for root device
	[  +0.140159] systemd-fstab-generator[2670]: Ignoring "noauto" option for root device
	[  +0.168582] systemd-fstab-generator[2684]: Ignoring "noauto" option for root device
	[  +0.132434] systemd-fstab-generator[2696]: Ignoring "noauto" option for root device
	[  +0.277811] systemd-fstab-generator[2724]: Ignoring "noauto" option for root device
	[  +7.376340] systemd-fstab-generator[2820]: Ignoring "noauto" option for root device
	[  +0.086199] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.705913] systemd-fstab-generator[2944]: Ignoring "noauto" option for root device
	[  +5.741353] kauditd_printk_skb: 74 callbacks suppressed
	[Aug19 18:38] systemd-fstab-generator[3781]: Ignoring "noauto" option for root device
	[  +0.117847] kauditd_printk_skb: 34 callbacks suppressed
	[ +21.190155] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [580fb4c199750cc4e95fcf711e440dc76ff14f3b53d8a6997f621ca5b7bb4518] <==
	{"level":"info","ts":"2024-08-19T18:37:49.022877Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e34fba8f5739efe8 switched to configuration voters=(16379515494576287720)"}
	{"level":"info","ts":"2024-08-19T18:37:49.022951Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f729467791c9db0d","local-member-id":"e34fba8f5739efe8","added-peer-id":"e34fba8f5739efe8","added-peer-peer-urls":["https://192.168.39.168:2380"]}
	{"level":"info","ts":"2024-08-19T18:37:49.023064Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f729467791c9db0d","local-member-id":"e34fba8f5739efe8","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T18:37:49.023110Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T18:37:49.030626Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-19T18:37:49.032464Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"e34fba8f5739efe8","initial-advertise-peer-urls":["https://192.168.39.168:2380"],"listen-peer-urls":["https://192.168.39.168:2380"],"advertise-client-urls":["https://192.168.39.168:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.168:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-19T18:37:49.034494Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.168:2380"}
	{"level":"info","ts":"2024-08-19T18:37:49.040294Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.168:2380"}
	{"level":"info","ts":"2024-08-19T18:37:49.034283Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-19T18:37:50.747752Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e34fba8f5739efe8 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-19T18:37:50.747883Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e34fba8f5739efe8 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-19T18:37:50.747926Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e34fba8f5739efe8 received MsgPreVoteResp from e34fba8f5739efe8 at term 2"}
	{"level":"info","ts":"2024-08-19T18:37:50.747962Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e34fba8f5739efe8 became candidate at term 3"}
	{"level":"info","ts":"2024-08-19T18:37:50.747987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e34fba8f5739efe8 received MsgVoteResp from e34fba8f5739efe8 at term 3"}
	{"level":"info","ts":"2024-08-19T18:37:50.748015Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e34fba8f5739efe8 became leader at term 3"}
	{"level":"info","ts":"2024-08-19T18:37:50.748040Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e34fba8f5739efe8 elected leader e34fba8f5739efe8 at term 3"}
	{"level":"info","ts":"2024-08-19T18:37:50.753384Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"e34fba8f5739efe8","local-member-attributes":"{Name:multinode-528433 ClientURLs:[https://192.168.39.168:2379]}","request-path":"/0/members/e34fba8f5739efe8/attributes","cluster-id":"f729467791c9db0d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-19T18:37:50.753718Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T18:37:50.753756Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T18:37:50.753799Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T18:37:50.753871Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T18:37:50.755039Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T18:37:50.755176Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T18:37:50.756060Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T18:37:50.756069Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.168:2379"}
	
	
	==> etcd [8f8613599f748b0dac210582c28b8864da1a0b7e328a1299edffaeebf943a44a] <==
	{"level":"info","ts":"2024-08-19T18:31:00.534680Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T18:31:00.534711Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T18:31:00.535368Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T18:31:00.536097Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.168:2379"}
	{"level":"info","ts":"2024-08-19T18:31:00.536333Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f729467791c9db0d","local-member-id":"e34fba8f5739efe8","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T18:31:00.536466Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T18:31:00.536504Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T18:31:00.536856Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T18:31:00.537647Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T18:31:54.507523Z","caller":"traceutil/trace.go:171","msg":"trace[340798663] transaction","detail":"{read_only:false; response_revision:444; number_of_response:1; }","duration":"226.650217ms","start":"2024-08-19T18:31:54.280843Z","end":"2024-08-19T18:31:54.507494Z","steps":["trace[340798663] 'process raft request'  (duration: 225.520166ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T18:32:51.542001Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.916938ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17287227062303532188 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-528433-m03.17ed34e086fc271f\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-528433-m03.17ed34e086fc271f\" value_size:646 lease:8063855025448756070 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-08-19T18:32:51.542398Z","caller":"traceutil/trace.go:171","msg":"trace[393461625] linearizableReadLoop","detail":"{readStateIndex:614; appliedIndex:613; }","duration":"143.563732ms","start":"2024-08-19T18:32:51.398802Z","end":"2024-08-19T18:32:51.542366Z","steps":["trace[393461625] 'read index received'  (duration: 21.701µs)","trace[393461625] 'applied index is now lower than readState.Index'  (duration: 143.541099ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T18:32:51.542446Z","caller":"traceutil/trace.go:171","msg":"trace[1168984684] transaction","detail":"{read_only:false; response_revision:579; number_of_response:1; }","duration":"228.794162ms","start":"2024-08-19T18:32:51.313596Z","end":"2024-08-19T18:32:51.542391Z","steps":["trace[1168984684] 'process raft request'  (duration: 82.790504ms)","trace[1168984684] 'compare'  (duration: 144.763552ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T18:32:51.542589Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.777221ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-528433-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T18:32:51.542642Z","caller":"traceutil/trace.go:171","msg":"trace[1152645336] range","detail":"{range_begin:/registry/minions/multinode-528433-m03; range_end:; response_count:0; response_revision:579; }","duration":"143.83216ms","start":"2024-08-19T18:32:51.398798Z","end":"2024-08-19T18:32:51.542630Z","steps":["trace[1152645336] 'agreement among raft nodes before linearized reading'  (duration: 143.692307ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T18:36:06.216837Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-19T18:36:06.217005Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-528433","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.168:2380"],"advertise-client-urls":["https://192.168.39.168:2379"]}
	{"level":"warn","ts":"2024-08-19T18:36:06.217170Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.168:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T18:36:06.217231Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.168:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T18:36:06.217482Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T18:36:06.217557Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-19T18:36:06.315480Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"e34fba8f5739efe8","current-leader-member-id":"e34fba8f5739efe8"}
	{"level":"info","ts":"2024-08-19T18:36:06.318420Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.168:2380"}
	{"level":"info","ts":"2024-08-19T18:36:06.318733Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.168:2380"}
	{"level":"info","ts":"2024-08-19T18:36:06.318760Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-528433","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.168:2380"],"advertise-client-urls":["https://192.168.39.168:2379"]}
	
	
	==> kernel <==
	 18:41:57 up 11 min,  0 users,  load average: 0.33, 0.28, 0.16
	Linux multinode-528433 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [057d837bfdf9b2c340d252182c5f52f95286a592b06ccc5f204badee2872440e] <==
	I0819 18:35:25.233417       1 main.go:322] Node multinode-528433-m03 has CIDR [10.244.3.0/24] 
	I0819 18:35:35.232814       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0819 18:35:35.232853       1 main.go:299] handling current node
	I0819 18:35:35.232875       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0819 18:35:35.232882       1 main.go:322] Node multinode-528433-m02 has CIDR [10.244.1.0/24] 
	I0819 18:35:35.233052       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0819 18:35:35.233091       1 main.go:322] Node multinode-528433-m03 has CIDR [10.244.3.0/24] 
	I0819 18:35:45.227209       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0819 18:35:45.227382       1 main.go:322] Node multinode-528433-m02 has CIDR [10.244.1.0/24] 
	I0819 18:35:45.227547       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0819 18:35:45.227576       1 main.go:322] Node multinode-528433-m03 has CIDR [10.244.3.0/24] 
	I0819 18:35:45.227639       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0819 18:35:45.227657       1 main.go:299] handling current node
	I0819 18:35:55.229537       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0819 18:35:55.229840       1 main.go:299] handling current node
	I0819 18:35:55.229882       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0819 18:35:55.229910       1 main.go:322] Node multinode-528433-m02 has CIDR [10.244.1.0/24] 
	I0819 18:35:55.230167       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0819 18:35:55.230217       1 main.go:322] Node multinode-528433-m03 has CIDR [10.244.3.0/24] 
	I0819 18:36:05.227048       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0819 18:36:05.227405       1 main.go:299] handling current node
	I0819 18:36:05.227499       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0819 18:36:05.227523       1 main.go:322] Node multinode-528433-m02 has CIDR [10.244.1.0/24] 
	I0819 18:36:05.227821       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0819 18:36:05.228197       1 main.go:322] Node multinode-528433-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [8f0cc386f169fbef2abbfeb9de505aff3998aa7d54b7f3eee2b29d3c03dec1da] <==
	I0819 18:40:54.520542       1 main.go:322] Node multinode-528433-m02 has CIDR [10.244.1.0/24] 
	I0819 18:41:04.521544       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0819 18:41:04.521600       1 main.go:322] Node multinode-528433-m02 has CIDR [10.244.1.0/24] 
	I0819 18:41:04.521732       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0819 18:41:04.521761       1 main.go:299] handling current node
	I0819 18:41:14.529197       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0819 18:41:14.529342       1 main.go:299] handling current node
	I0819 18:41:14.529378       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0819 18:41:14.529399       1 main.go:322] Node multinode-528433-m02 has CIDR [10.244.1.0/24] 
	I0819 18:41:24.521853       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0819 18:41:24.521911       1 main.go:299] handling current node
	I0819 18:41:24.521925       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0819 18:41:24.521932       1 main.go:322] Node multinode-528433-m02 has CIDR [10.244.1.0/24] 
	I0819 18:41:34.525796       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0819 18:41:34.525878       1 main.go:299] handling current node
	I0819 18:41:34.525893       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0819 18:41:34.525899       1 main.go:322] Node multinode-528433-m02 has CIDR [10.244.1.0/24] 
	I0819 18:41:44.520155       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0819 18:41:44.520360       1 main.go:299] handling current node
	I0819 18:41:44.520429       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0819 18:41:44.520457       1 main.go:322] Node multinode-528433-m02 has CIDR [10.244.1.0/24] 
	I0819 18:41:54.520563       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0819 18:41:54.520672       1 main.go:299] handling current node
	I0819 18:41:54.520700       1 main.go:295] Handling node with IPs: map[192.168.39.107:{}]
	I0819 18:41:54.520717       1 main.go:322] Node multinode-528433-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [a81508e447df1c7bfda53e67d1b4030870a749ba659d35497fe4adecbcf41a9e] <==
	I0819 18:37:52.092315       1 policy_source.go:224] refreshing policies
	I0819 18:37:52.094611       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0819 18:37:52.094663       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0819 18:37:52.102597       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0819 18:37:52.115479       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0819 18:37:52.116645       1 shared_informer.go:320] Caches are synced for configmaps
	I0819 18:37:52.116796       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0819 18:37:52.117049       1 aggregator.go:171] initial CRD sync complete...
	I0819 18:37:52.117081       1 autoregister_controller.go:144] Starting autoregister controller
	I0819 18:37:52.117087       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0819 18:37:52.117098       1 cache.go:39] Caches are synced for autoregister controller
	I0819 18:37:52.124432       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0819 18:37:52.151542       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0819 18:37:52.155877       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0819 18:37:52.170545       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0819 18:37:52.200010       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0819 18:37:52.200052       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0819 18:37:53.003454       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0819 18:37:54.368065       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0819 18:37:54.495945       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0819 18:37:54.513785       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0819 18:37:54.610211       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0819 18:37:54.618781       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0819 18:37:55.627472       1 controller.go:615] quota admission added evaluator for: endpoints
	I0819 18:37:55.730939       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [c65c30f3ad8d84072ebc470eb8e6aa5f850402138c0c5b057a93852df19a0f24] <==
	I0819 18:31:09.837872       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0819 18:32:20.866555       1 conn.go:339] Error on socket receive: read tcp 192.168.39.168:8443->192.168.39.1:50128: use of closed network connection
	E0819 18:32:21.049628       1 conn.go:339] Error on socket receive: read tcp 192.168.39.168:8443->192.168.39.1:50148: use of closed network connection
	E0819 18:32:21.231814       1 conn.go:339] Error on socket receive: read tcp 192.168.39.168:8443->192.168.39.1:50162: use of closed network connection
	E0819 18:32:21.417330       1 conn.go:339] Error on socket receive: read tcp 192.168.39.168:8443->192.168.39.1:50178: use of closed network connection
	E0819 18:32:21.592602       1 conn.go:339] Error on socket receive: read tcp 192.168.39.168:8443->192.168.39.1:50184: use of closed network connection
	E0819 18:32:21.763020       1 conn.go:339] Error on socket receive: read tcp 192.168.39.168:8443->192.168.39.1:50200: use of closed network connection
	E0819 18:32:22.043931       1 conn.go:339] Error on socket receive: read tcp 192.168.39.168:8443->192.168.39.1:50220: use of closed network connection
	E0819 18:32:22.220122       1 conn.go:339] Error on socket receive: read tcp 192.168.39.168:8443->192.168.39.1:50228: use of closed network connection
	E0819 18:32:22.382643       1 conn.go:339] Error on socket receive: read tcp 192.168.39.168:8443->192.168.39.1:50238: use of closed network connection
	E0819 18:32:22.548609       1 conn.go:339] Error on socket receive: read tcp 192.168.39.168:8443->192.168.39.1:50252: use of closed network connection
	I0819 18:36:06.219982       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0819 18:36:06.228900       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:36:06.233592       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:36:06.233714       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:36:06.233799       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:36:06.234451       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:36:06.235952       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:36:06.236075       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:36:06.236173       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:36:06.236533       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:36:06.236647       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:36:06.236733       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:36:06.236906       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:36:06.240654       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [a0f607c724637f67917051010258f5d5d9d65d9a1966825b84ccb41087c55584] <==
	E0819 18:39:10.627563       1 range_allocator.go:433] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-528433-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-528433-m03"
	E0819 18:39:10.627707       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'multinode-528433-m03': failed to patch node CIDR: Node \"multinode-528433-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0819 18:39:10.627841       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	I0819 18:39:10.633018       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	I0819 18:39:10.677179       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	I0819 18:39:10.683899       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	I0819 18:39:11.013058       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	I0819 18:39:20.927689       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	I0819 18:39:30.486892       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-528433-m02"
	I0819 18:39:30.486927       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	I0819 18:39:30.500660       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	I0819 18:39:30.618101       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	I0819 18:39:35.338788       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	I0819 18:39:35.352483       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	I0819 18:39:35.913485       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-528433-m02"
	I0819 18:39:35.913578       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	I0819 18:40:15.636749       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m02"
	I0819 18:40:15.660297       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m02"
	I0819 18:40:15.667147       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="14.815317ms"
	I0819 18:40:15.668499       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="38.443µs"
	I0819 18:40:20.748787       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m02"
	I0819 18:40:35.515104       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-m4pn8"
	I0819 18:40:35.538121       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-m4pn8"
	I0819 18:40:35.538210       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-xc2kd"
	I0819 18:40:35.557932       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-xc2kd"
	
	
	==> kube-controller-manager [e18ee04a7496876e00d3bf4eea0c2cc1bea22033e6265d9eb65c8556c18dbecc] <==
	I0819 18:33:39.804148       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-528433-m02"
	I0819 18:33:39.804370       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	I0819 18:33:40.892579       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-528433-m02"
	I0819 18:33:40.895353       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-528433-m03\" does not exist"
	I0819 18:33:40.905809       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-528433-m03" podCIDRs=["10.244.3.0/24"]
	I0819 18:33:40.905854       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	I0819 18:33:40.905876       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	I0819 18:33:40.913450       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	I0819 18:33:41.317937       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	I0819 18:33:41.679166       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	I0819 18:33:44.101043       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	I0819 18:33:51.155484       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	I0819 18:34:00.693444       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-528433-m03"
	I0819 18:34:00.693858       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	I0819 18:34:00.706602       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	I0819 18:34:04.106808       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	I0819 18:34:39.122142       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m02"
	I0819 18:34:39.122154       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-528433-m03"
	I0819 18:34:39.137007       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m02"
	I0819 18:34:39.177568       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="8.210783ms"
	I0819 18:34:39.178035       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="271.767µs"
	I0819 18:34:44.177894       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	I0819 18:34:44.195886       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	I0819 18:34:44.204180       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m02"
	I0819 18:34:54.286076       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-528433-m03"
	
	
	==> kube-proxy [a21bb460a42c175e6fcfe880334b3416a31c862ad41f4046dde00c3a50bf99ac] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 18:37:53.795720       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 18:37:53.809336       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.168"]
	E0819 18:37:53.809420       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 18:37:53.867853       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 18:37:53.867908       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 18:37:53.867938       1 server_linux.go:169] "Using iptables Proxier"
	I0819 18:37:53.870508       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 18:37:53.871520       1 server.go:483] "Version info" version="v1.31.0"
	I0819 18:37:53.871594       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 18:37:53.874937       1 config.go:197] "Starting service config controller"
	I0819 18:37:53.874987       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 18:37:53.875022       1 config.go:104] "Starting endpoint slice config controller"
	I0819 18:37:53.875026       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 18:37:53.875634       1 config.go:326] "Starting node config controller"
	I0819 18:37:53.875661       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 18:37:53.975968       1 shared_informer.go:320] Caches are synced for node config
	I0819 18:37:53.976062       1 shared_informer.go:320] Caches are synced for service config
	I0819 18:37:53.976103       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [a5d6d7978005d7389f31edd035fd6fa05cd87e4a3901ca69aa1b2f9f73576240] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 18:31:11.981056       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 18:31:11.999700       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.168"]
	E0819 18:31:11.999828       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 18:31:12.045508       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 18:31:12.045603       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 18:31:12.045649       1 server_linux.go:169] "Using iptables Proxier"
	I0819 18:31:12.048560       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 18:31:12.048935       1 server.go:483] "Version info" version="v1.31.0"
	I0819 18:31:12.048978       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 18:31:12.050571       1 config.go:197] "Starting service config controller"
	I0819 18:31:12.050632       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 18:31:12.050667       1 config.go:104] "Starting endpoint slice config controller"
	I0819 18:31:12.050683       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 18:31:12.051169       1 config.go:326] "Starting node config controller"
	I0819 18:31:12.051207       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 18:31:12.151303       1 shared_informer.go:320] Caches are synced for node config
	I0819 18:31:12.151352       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 18:31:12.151326       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [422d4bc5ba6865ba6386db8aac55e0668aac92c409da53ac44c1d7750424fec7] <==
	W0819 18:37:52.141428       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 18:37:52.141810       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:37:52.141489       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 18:37:52.141855       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:37:52.141536       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0819 18:37:52.141870       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:37:52.141594       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 18:37:52.141904       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:37:52.141644       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 18:37:52.141920       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:37:52.141992       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 18:37:52.142027       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:37:52.142085       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 18:37:52.142114       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 18:37:52.142161       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 18:37:52.142190       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 18:37:52.142338       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 18:37:52.142371       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 18:37:52.142435       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0819 18:37:52.142464       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:37:52.142508       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0819 18:37:52.142536       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 18:37:52.142646       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 18:37:52.142736       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0819 18:37:53.596865       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [7c29a242039f25f811c96462be446c25a6154d911bbeffded18a5c5d7b8f8ea4] <==
	E0819 18:31:02.172401       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 18:31:02.170462       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 18:31:02.172518       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 18:31:03.042988       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 18:31:03.043042       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:31:03.066752       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 18:31:03.066814       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0819 18:31:03.116325       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0819 18:31:03.116359       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:31:03.129828       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 18:31:03.129959       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 18:31:03.134948       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 18:31:03.135002       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 18:31:03.237227       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0819 18:31:03.237393       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:31:03.372054       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 18:31:03.372092       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:31:03.438285       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 18:31:03.438406       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:31:03.649531       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 18:31:03.649635       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0819 18:31:06.536851       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 18:36:06.211826       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0819 18:36:06.212066       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0819 18:36:06.212990       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 19 18:40:47 multinode-528433 kubelet[2951]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 18:40:47 multinode-528433 kubelet[2951]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 18:40:47 multinode-528433 kubelet[2951]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 18:40:47 multinode-528433 kubelet[2951]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 18:40:47 multinode-528433 kubelet[2951]: E0819 18:40:47.880103    2951 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092847879809399,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:40:47 multinode-528433 kubelet[2951]: E0819 18:40:47.880154    2951 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092847879809399,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:40:57 multinode-528433 kubelet[2951]: E0819 18:40:57.883774    2951 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092857883460046,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:40:57 multinode-528433 kubelet[2951]: E0819 18:40:57.884030    2951 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092857883460046,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:41:07 multinode-528433 kubelet[2951]: E0819 18:41:07.889343    2951 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092867888823887,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:41:07 multinode-528433 kubelet[2951]: E0819 18:41:07.889418    2951 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092867888823887,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:41:17 multinode-528433 kubelet[2951]: E0819 18:41:17.892936    2951 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092877891805591,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:41:17 multinode-528433 kubelet[2951]: E0819 18:41:17.893000    2951 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092877891805591,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:41:27 multinode-528433 kubelet[2951]: E0819 18:41:27.894875    2951 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092887894631471,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:41:27 multinode-528433 kubelet[2951]: E0819 18:41:27.895146    2951 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092887894631471,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:41:37 multinode-528433 kubelet[2951]: E0819 18:41:37.899214    2951 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092897898617286,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:41:37 multinode-528433 kubelet[2951]: E0819 18:41:37.899238    2951 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092897898617286,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:41:47 multinode-528433 kubelet[2951]: E0819 18:41:47.794939    2951 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 18:41:47 multinode-528433 kubelet[2951]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 18:41:47 multinode-528433 kubelet[2951]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 18:41:47 multinode-528433 kubelet[2951]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 18:41:47 multinode-528433 kubelet[2951]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 18:41:47 multinode-528433 kubelet[2951]: E0819 18:41:47.902196    2951 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092907901068189,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:41:47 multinode-528433 kubelet[2951]: E0819 18:41:47.902335    2951 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092907901068189,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:41:57 multinode-528433 kubelet[2951]: E0819 18:41:57.905280    2951 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092917904794691,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:41:57 multinode-528433 kubelet[2951]: E0819 18:41:57.905335    2951 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092917904794691,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 18:41:56.817075  411294 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19468-372744/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-528433 -n multinode-528433
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-528433 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.38s)
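Note on the stderr captured above: minikube's log collector aborted with "bufio.Scanner: token too long" while reading lastStart.txt (logs.go:258). The Go sketch below is only an illustration of that failure mode, not minikube's own code, and the file path is hypothetical: bufio.Scanner caps a single token at 64 KiB by default, so a sufficiently long log line stops the scan unless the buffer limit is raised.

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Hypothetical stand-in for .minikube/logs/lastStart.txt.
		f, err := os.Open("/tmp/lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Default MaxScanTokenSize is 64 KiB; raising the cap avoids
		// "bufio.Scanner: token too long" on very long log lines.
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}
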

                                                
                                    
x
+
TestPreload (271.11s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-763873 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0819 18:46:53.184908  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:47:10.115108  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-763873 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m7.935584674s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-763873 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-763873 image pull gcr.io/k8s-minikube/busybox: (2.871809691s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-763873
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-763873: exit status 82 (2m0.481758098s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-763873"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-763873 failed: exit status 82
panic.go:626: *** TestPreload FAILED at 2024-08-19 18:49:55.506351378 +0000 UTC m=+3942.706278185
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-763873 -n test-preload-763873
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-763873 -n test-preload-763873: exit status 3 (18.646194716s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 18:50:14.148082  414176 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.142:22: connect: no route to host
	E0819 18:50:14.148103  414176 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.142:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-763873" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-763873" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-763873
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-763873: (1.174827809s)
--- FAIL: TestPreload (271.11s)
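Note on the failure above: "minikube stop" exceeded its stop timeout (exit status 82, GUEST_STOP_TIMEOUT) and the follow-up status check could not reach the VM over SSH. The Go sketch below is a rough illustration, not the harness's actual helper, of invoking the same stop command under an explicit local deadline so a hung stop surfaces quickly; the 3-minute budget is an assumed value, while the binary path and profile name are taken from the run above.

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// The deadline is an illustrative value, not minikube's own timeout.
		ctx, cancel := context.WithTimeout(context.Background(), 3*time.Minute)
		defer cancel()

		cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64",
			"stop", "-p", "test-preload-763873", "--alsologtostderr")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if ctx.Err() == context.DeadlineExceeded {
			fmt.Println("stop exceeded the local deadline; VM is likely still running")
		} else if err != nil {
			fmt.Printf("stop failed: %v\n", err) // e.g. exit status 82 seen above
		}
	}
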

                                                
                                    
x
+
TestKubernetesUpgrade (436.45s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-127646 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-127646 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m10.55481462s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-127646] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19468
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19468-372744/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-372744/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-127646" primary control-plane node in "kubernetes-upgrade-127646" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 18:53:39.213674  418752 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:53:39.213818  418752 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:53:39.213828  418752 out.go:358] Setting ErrFile to fd 2...
	I0819 18:53:39.213834  418752 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:53:39.214055  418752 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-372744/.minikube/bin
	I0819 18:53:39.214672  418752 out.go:352] Setting JSON to false
	I0819 18:53:39.215742  418752 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":9362,"bootTime":1724084257,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 18:53:39.215817  418752 start.go:139] virtualization: kvm guest
	I0819 18:53:39.218250  418752 out.go:177] * [kubernetes-upgrade-127646] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 18:53:39.219864  418752 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 18:53:39.219901  418752 notify.go:220] Checking for updates...
	I0819 18:53:39.222653  418752 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 18:53:39.224209  418752 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 18:53:39.225880  418752 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 18:53:39.227443  418752 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 18:53:39.228930  418752 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 18:53:39.230939  418752 config.go:182] Loaded profile config "NoKubernetes-282030": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:53:39.231090  418752 config.go:182] Loaded profile config "cert-expiration-005082": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:53:39.231208  418752 config.go:182] Loaded profile config "force-systemd-flag-448594": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:53:39.231336  418752 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 18:53:39.274686  418752 out.go:177] * Using the kvm2 driver based on user configuration
	I0819 18:53:39.275940  418752 start.go:297] selected driver: kvm2
	I0819 18:53:39.275960  418752 start.go:901] validating driver "kvm2" against <nil>
	I0819 18:53:39.275979  418752 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 18:53:39.276730  418752 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:53:39.276830  418752 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19468-372744/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 18:53:39.292794  418752 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 18:53:39.292862  418752 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 18:53:39.293157  418752 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 18:53:39.293195  418752 cni.go:84] Creating CNI manager for ""
	I0819 18:53:39.293207  418752 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 18:53:39.293220  418752 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 18:53:39.293284  418752 start.go:340] cluster config:
	{Name:kubernetes-upgrade-127646 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-127646 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:53:39.293427  418752 iso.go:125] acquiring lock: {Name:mk4c0ac1c3202b1a296739df622960e7a0bd8566 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:53:39.295425  418752 out.go:177] * Starting "kubernetes-upgrade-127646" primary control-plane node in "kubernetes-upgrade-127646" cluster
	I0819 18:53:39.296878  418752 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 18:53:39.296913  418752 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0819 18:53:39.296923  418752 cache.go:56] Caching tarball of preloaded images
	I0819 18:53:39.297041  418752 preload.go:172] Found /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 18:53:39.297052  418752 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0819 18:53:39.297139  418752 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kubernetes-upgrade-127646/config.json ...
	I0819 18:53:39.297166  418752 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kubernetes-upgrade-127646/config.json: {Name:mkdaf5d9de139ce05b87408f9f111362692ed424 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:53:39.297326  418752 start.go:360] acquireMachinesLock for kubernetes-upgrade-127646: {Name:mk24ba67a747357e9ce40f1e460d2bb0bc59cc75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 18:54:19.912798  418752 start.go:364] duration metric: took 40.61541131s to acquireMachinesLock for "kubernetes-upgrade-127646"
	I0819 18:54:19.912878  418752 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-127646 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-127646 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 18:54:19.913028  418752 start.go:125] createHost starting for "" (driver="kvm2")
	I0819 18:54:19.915249  418752 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 18:54:19.915549  418752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:54:19.915609  418752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:54:19.932605  418752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41527
	I0819 18:54:19.933040  418752 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:54:19.933634  418752 main.go:141] libmachine: Using API Version  1
	I0819 18:54:19.933660  418752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:54:19.933950  418752 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:54:19.934135  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetMachineName
	I0819 18:54:19.934322  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .DriverName
	I0819 18:54:19.934501  418752 start.go:159] libmachine.API.Create for "kubernetes-upgrade-127646" (driver="kvm2")
	I0819 18:54:19.934529  418752 client.go:168] LocalClient.Create starting
	I0819 18:54:19.934570  418752 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem
	I0819 18:54:19.934609  418752 main.go:141] libmachine: Decoding PEM data...
	I0819 18:54:19.934634  418752 main.go:141] libmachine: Parsing certificate...
	I0819 18:54:19.934712  418752 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem
	I0819 18:54:19.934742  418752 main.go:141] libmachine: Decoding PEM data...
	I0819 18:54:19.934758  418752 main.go:141] libmachine: Parsing certificate...
	I0819 18:54:19.934782  418752 main.go:141] libmachine: Running pre-create checks...
	I0819 18:54:19.934802  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .PreCreateCheck
	I0819 18:54:19.935207  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetConfigRaw
	I0819 18:54:19.935619  418752 main.go:141] libmachine: Creating machine...
	I0819 18:54:19.935632  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .Create
	I0819 18:54:19.935791  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Creating KVM machine...
	I0819 18:54:19.937260  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | found existing default KVM network
	I0819 18:54:19.938759  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | I0819 18:54:19.938604  419254 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:da:b7:91} reservation:<nil>}
	I0819 18:54:19.939752  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | I0819 18:54:19.939646  419254 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:d2:bc:d4} reservation:<nil>}
	I0819 18:54:19.940751  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | I0819 18:54:19.940668  419254 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:19:c5:5b} reservation:<nil>}
	I0819 18:54:19.942164  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | I0819 18:54:19.942053  419254 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00033d2f0}
	I0819 18:54:19.942193  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | created network xml: 
	I0819 18:54:19.942204  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | <network>
	I0819 18:54:19.942225  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG |   <name>mk-kubernetes-upgrade-127646</name>
	I0819 18:54:19.942236  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG |   <dns enable='no'/>
	I0819 18:54:19.942247  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG |   
	I0819 18:54:19.942256  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0819 18:54:19.942283  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG |     <dhcp>
	I0819 18:54:19.942292  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0819 18:54:19.942297  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG |     </dhcp>
	I0819 18:54:19.942305  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG |   </ip>
	I0819 18:54:19.942309  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG |   
	I0819 18:54:19.942317  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | </network>
	I0819 18:54:19.942330  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | 
	I0819 18:54:19.948426  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | trying to create private KVM network mk-kubernetes-upgrade-127646 192.168.72.0/24...
	I0819 18:54:20.021128  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | private KVM network mk-kubernetes-upgrade-127646 192.168.72.0/24 created
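
The network XML printed above is an isolated libvirt network with its own DHCP range; the same definition could be registered by hand. Below is a minimal Go sketch that shells out to virsh (net-define / net-start / net-autostart) to create an equivalent network. The helper name defineNetwork is hypothetical and the XML is the string from the log; this only approximates what the kvm2 driver does, it is not its actual code path.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // networkXML is the definition printed in the log above.
    const networkXML = `<network>
      <name>mk-kubernetes-upgrade-127646</name>
      <dns enable='no'/>
      <ip address='192.168.72.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.72.2' end='192.168.72.253'/>
        </dhcp>
      </ip>
    </network>`

    // defineNetwork writes the XML to a temp file and registers it with libvirt
    // via the virsh CLI, mirroring the "trying to create private KVM network"
    // step in the log.
    func defineNetwork(name, xml string) error {
        f, err := os.CreateTemp("", name+"-*.xml")
        if err != nil {
            return err
        }
        defer os.Remove(f.Name())
        if _, err := f.WriteString(xml); err != nil {
            return err
        }
        if err := f.Close(); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"net-define", f.Name()}, // register the network definition
            {"net-start", name},      // bring up the bridge (virbr4 in this run)
            {"net-autostart", name},  // start it again on host boot
        } {
            if out, err := exec.Command("virsh", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("virsh %v failed: %v: %s", args, err, out)
            }
        }
        return nil
    }

    func main() {
        if err := defineNetwork("mk-kubernetes-upgrade-127646", networkXML); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
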
	I0819 18:54:20.021174  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Setting up store path in /home/jenkins/minikube-integration/19468-372744/.minikube/machines/kubernetes-upgrade-127646 ...
	I0819 18:54:20.021202  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Building disk image from file:///home/jenkins/minikube-integration/19468-372744/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 18:54:20.021221  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | I0819 18:54:20.021130  419254 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 18:54:20.021534  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Downloading /home/jenkins/minikube-integration/19468-372744/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19468-372744/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 18:54:20.299317  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | I0819 18:54:20.299169  419254 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/kubernetes-upgrade-127646/id_rsa...
	I0819 18:54:20.550813  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | I0819 18:54:20.550674  419254 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/kubernetes-upgrade-127646/kubernetes-upgrade-127646.rawdisk...
	I0819 18:54:20.550843  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | Writing magic tar header
	I0819 18:54:20.550858  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | Writing SSH key tar header
	I0819 18:54:20.550984  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | I0819 18:54:20.550920  419254 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19468-372744/.minikube/machines/kubernetes-upgrade-127646 ...
	I0819 18:54:20.551079  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/kubernetes-upgrade-127646
	I0819 18:54:20.551121  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744/.minikube/machines/kubernetes-upgrade-127646 (perms=drwx------)
	I0819 18:54:20.551138  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744/.minikube/machines (perms=drwxr-xr-x)
	I0819 18:54:20.551149  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744/.minikube/machines
	I0819 18:54:20.551163  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 18:54:20.551174  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744
	I0819 18:54:20.551184  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744/.minikube (perms=drwxr-xr-x)
	I0819 18:54:20.551195  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 18:54:20.551211  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | Checking permissions on dir: /home/jenkins
	I0819 18:54:20.551226  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744 (perms=drwxrwxr-x)
	I0819 18:54:20.551238  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | Checking permissions on dir: /home
	I0819 18:54:20.551253  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | Skipping /home - not owner
	I0819 18:54:20.551299  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 18:54:20.551326  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 18:54:20.551338  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Creating domain...
	I0819 18:54:20.552382  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) define libvirt domain using xml: 
	I0819 18:54:20.552407  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) <domain type='kvm'>
	I0819 18:54:20.552418  418752 main.go:141] libmachine: (kubernetes-upgrade-127646)   <name>kubernetes-upgrade-127646</name>
	I0819 18:54:20.552427  418752 main.go:141] libmachine: (kubernetes-upgrade-127646)   <memory unit='MiB'>2200</memory>
	I0819 18:54:20.552437  418752 main.go:141] libmachine: (kubernetes-upgrade-127646)   <vcpu>2</vcpu>
	I0819 18:54:20.552444  418752 main.go:141] libmachine: (kubernetes-upgrade-127646)   <features>
	I0819 18:54:20.552458  418752 main.go:141] libmachine: (kubernetes-upgrade-127646)     <acpi/>
	I0819 18:54:20.552465  418752 main.go:141] libmachine: (kubernetes-upgrade-127646)     <apic/>
	I0819 18:54:20.552477  418752 main.go:141] libmachine: (kubernetes-upgrade-127646)     <pae/>
	I0819 18:54:20.552485  418752 main.go:141] libmachine: (kubernetes-upgrade-127646)     
	I0819 18:54:20.552495  418752 main.go:141] libmachine: (kubernetes-upgrade-127646)   </features>
	I0819 18:54:20.552509  418752 main.go:141] libmachine: (kubernetes-upgrade-127646)   <cpu mode='host-passthrough'>
	I0819 18:54:20.552521  418752 main.go:141] libmachine: (kubernetes-upgrade-127646)   
	I0819 18:54:20.552529  418752 main.go:141] libmachine: (kubernetes-upgrade-127646)   </cpu>
	I0819 18:54:20.552541  418752 main.go:141] libmachine: (kubernetes-upgrade-127646)   <os>
	I0819 18:54:20.552550  418752 main.go:141] libmachine: (kubernetes-upgrade-127646)     <type>hvm</type>
	I0819 18:54:20.552564  418752 main.go:141] libmachine: (kubernetes-upgrade-127646)     <boot dev='cdrom'/>
	I0819 18:54:20.552581  418752 main.go:141] libmachine: (kubernetes-upgrade-127646)     <boot dev='hd'/>
	I0819 18:54:20.552605  418752 main.go:141] libmachine: (kubernetes-upgrade-127646)     <bootmenu enable='no'/>
	I0819 18:54:20.552627  418752 main.go:141] libmachine: (kubernetes-upgrade-127646)   </os>
	I0819 18:54:20.552637  418752 main.go:141] libmachine: (kubernetes-upgrade-127646)   <devices>
	I0819 18:54:20.552646  418752 main.go:141] libmachine: (kubernetes-upgrade-127646)     <disk type='file' device='cdrom'>
	I0819 18:54:20.552665  418752 main.go:141] libmachine: (kubernetes-upgrade-127646)       <source file='/home/jenkins/minikube-integration/19468-372744/.minikube/machines/kubernetes-upgrade-127646/boot2docker.iso'/>
	I0819 18:54:20.552676  418752 main.go:141] libmachine: (kubernetes-upgrade-127646)       <target dev='hdc' bus='scsi'/>
	I0819 18:54:20.552689  418752 main.go:141] libmachine: (kubernetes-upgrade-127646)       <readonly/>
	I0819 18:54:20.552704  418752 main.go:141] libmachine: (kubernetes-upgrade-127646)     </disk>
	I0819 18:54:20.552715  418752 main.go:141] libmachine: (kubernetes-upgrade-127646)     <disk type='file' device='disk'>
	I0819 18:54:20.552726  418752 main.go:141] libmachine: (kubernetes-upgrade-127646)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 18:54:20.552744  418752 main.go:141] libmachine: (kubernetes-upgrade-127646)       <source file='/home/jenkins/minikube-integration/19468-372744/.minikube/machines/kubernetes-upgrade-127646/kubernetes-upgrade-127646.rawdisk'/>
	I0819 18:54:20.552756  418752 main.go:141] libmachine: (kubernetes-upgrade-127646)       <target dev='hda' bus='virtio'/>
	I0819 18:54:20.552765  418752 main.go:141] libmachine: (kubernetes-upgrade-127646)     </disk>
	I0819 18:54:20.552776  418752 main.go:141] libmachine: (kubernetes-upgrade-127646)     <interface type='network'>
	I0819 18:54:20.552801  418752 main.go:141] libmachine: (kubernetes-upgrade-127646)       <source network='mk-kubernetes-upgrade-127646'/>
	I0819 18:54:20.552814  418752 main.go:141] libmachine: (kubernetes-upgrade-127646)       <model type='virtio'/>
	I0819 18:54:20.552823  418752 main.go:141] libmachine: (kubernetes-upgrade-127646)     </interface>
	I0819 18:54:20.552835  418752 main.go:141] libmachine: (kubernetes-upgrade-127646)     <interface type='network'>
	I0819 18:54:20.552848  418752 main.go:141] libmachine: (kubernetes-upgrade-127646)       <source network='default'/>
	I0819 18:54:20.552859  418752 main.go:141] libmachine: (kubernetes-upgrade-127646)       <model type='virtio'/>
	I0819 18:54:20.552871  418752 main.go:141] libmachine: (kubernetes-upgrade-127646)     </interface>
	I0819 18:54:20.552885  418752 main.go:141] libmachine: (kubernetes-upgrade-127646)     <serial type='pty'>
	I0819 18:54:20.552894  418752 main.go:141] libmachine: (kubernetes-upgrade-127646)       <target port='0'/>
	I0819 18:54:20.552901  418752 main.go:141] libmachine: (kubernetes-upgrade-127646)     </serial>
	I0819 18:54:20.552920  418752 main.go:141] libmachine: (kubernetes-upgrade-127646)     <console type='pty'>
	I0819 18:54:20.552932  418752 main.go:141] libmachine: (kubernetes-upgrade-127646)       <target type='serial' port='0'/>
	I0819 18:54:20.552944  418752 main.go:141] libmachine: (kubernetes-upgrade-127646)     </console>
	I0819 18:54:20.552958  418752 main.go:141] libmachine: (kubernetes-upgrade-127646)     <rng model='virtio'>
	I0819 18:54:20.552970  418752 main.go:141] libmachine: (kubernetes-upgrade-127646)       <backend model='random'>/dev/random</backend>
	I0819 18:54:20.552979  418752 main.go:141] libmachine: (kubernetes-upgrade-127646)     </rng>
	I0819 18:54:20.552987  418752 main.go:141] libmachine: (kubernetes-upgrade-127646)     
	I0819 18:54:20.552993  418752 main.go:141] libmachine: (kubernetes-upgrade-127646)     
	I0819 18:54:20.553006  418752 main.go:141] libmachine: (kubernetes-upgrade-127646)   </devices>
	I0819 18:54:20.553016  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) </domain>
	I0819 18:54:20.553046  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) 
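
The domain XML above declares two virtio NICs, one on the private mk-kubernetes-upgrade-127646 network and one on libvirt's default network, which is why the following lines report two generated MAC addresses. The MACs are assigned by libvirt when the domain is defined, so the driver reads them back from the stored domain XML ("Getting domain xml..."). A small sketch of extracting them with encoding/xml is shown below; the struct and function names are illustrative, not the driver's actual types.

    package main

    import (
        "encoding/xml"
        "fmt"
        "os/exec"
    )

    // iface mirrors the <interface> elements libvirt adds to the stored domain
    // XML, including the generated <mac address=.../>.
    type iface struct {
        MAC struct {
            Address string `xml:"address,attr"`
        } `xml:"mac"`
        Source struct {
            Network string `xml:"network,attr"`
        } `xml:"source"`
    }

    type domain struct {
        Interfaces []iface `xml:"devices>interface"`
    }

    func main() {
        out, err := exec.Command("virsh", "dumpxml", "kubernetes-upgrade-127646").Output()
        if err != nil {
            panic(err)
        }
        var d domain
        if err := xml.Unmarshal(out, &d); err != nil {
            panic(err)
        }
        for _, i := range d.Interfaces {
            fmt.Printf("MAC %s on network %s\n", i.MAC.Address, i.Source.Network)
        }
    }
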
	I0819 18:54:20.559944  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined MAC address 52:54:00:c2:31:e2 in network default
	I0819 18:54:20.560602  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Ensuring networks are active...
	I0819 18:54:20.560642  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 18:54:20.561525  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Ensuring network default is active
	I0819 18:54:20.561858  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Ensuring network mk-kubernetes-upgrade-127646 is active
	I0819 18:54:20.562446  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Getting domain xml...
	I0819 18:54:20.563257  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Creating domain...
	I0819 18:54:21.965197  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Waiting to get IP...
	I0819 18:54:21.966474  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 18:54:21.967011  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | unable to find current IP address of domain kubernetes-upgrade-127646 in network mk-kubernetes-upgrade-127646
	I0819 18:54:21.967110  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | I0819 18:54:21.967016  419254 retry.go:31] will retry after 280.780509ms: waiting for machine to come up
	I0819 18:54:22.250255  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 18:54:22.250978  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | unable to find current IP address of domain kubernetes-upgrade-127646 in network mk-kubernetes-upgrade-127646
	I0819 18:54:22.251012  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | I0819 18:54:22.250953  419254 retry.go:31] will retry after 301.898685ms: waiting for machine to come up
	I0819 18:54:22.554700  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 18:54:22.555473  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | unable to find current IP address of domain kubernetes-upgrade-127646 in network mk-kubernetes-upgrade-127646
	I0819 18:54:22.555519  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | I0819 18:54:22.555432  419254 retry.go:31] will retry after 394.405433ms: waiting for machine to come up
	I0819 18:54:22.951139  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 18:54:22.951655  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | unable to find current IP address of domain kubernetes-upgrade-127646 in network mk-kubernetes-upgrade-127646
	I0819 18:54:22.951709  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | I0819 18:54:22.951635  419254 retry.go:31] will retry after 388.062725ms: waiting for machine to come up
	I0819 18:54:23.341186  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 18:54:23.341789  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | unable to find current IP address of domain kubernetes-upgrade-127646 in network mk-kubernetes-upgrade-127646
	I0819 18:54:23.341815  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | I0819 18:54:23.341749  419254 retry.go:31] will retry after 518.887866ms: waiting for machine to come up
	I0819 18:54:23.862426  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 18:54:23.862895  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | unable to find current IP address of domain kubernetes-upgrade-127646 in network mk-kubernetes-upgrade-127646
	I0819 18:54:23.862931  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | I0819 18:54:23.862851  419254 retry.go:31] will retry after 749.079536ms: waiting for machine to come up
	I0819 18:54:24.613544  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 18:54:24.614206  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | unable to find current IP address of domain kubernetes-upgrade-127646 in network mk-kubernetes-upgrade-127646
	I0819 18:54:24.614238  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | I0819 18:54:24.614153  419254 retry.go:31] will retry after 804.243561ms: waiting for machine to come up
	I0819 18:54:25.419466  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 18:54:25.419964  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | unable to find current IP address of domain kubernetes-upgrade-127646 in network mk-kubernetes-upgrade-127646
	I0819 18:54:25.420011  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | I0819 18:54:25.419904  419254 retry.go:31] will retry after 1.079006213s: waiting for machine to come up
	I0819 18:54:26.500008  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 18:54:26.500530  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | unable to find current IP address of domain kubernetes-upgrade-127646 in network mk-kubernetes-upgrade-127646
	I0819 18:54:26.500557  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | I0819 18:54:26.500479  419254 retry.go:31] will retry after 1.410949985s: waiting for machine to come up
	I0819 18:54:27.912693  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 18:54:27.913203  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | unable to find current IP address of domain kubernetes-upgrade-127646 in network mk-kubernetes-upgrade-127646
	I0819 18:54:27.913240  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | I0819 18:54:27.913153  419254 retry.go:31] will retry after 1.678285687s: waiting for machine to come up
	I0819 18:54:29.594027  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 18:54:29.594537  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | unable to find current IP address of domain kubernetes-upgrade-127646 in network mk-kubernetes-upgrade-127646
	I0819 18:54:29.594562  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | I0819 18:54:29.594484  419254 retry.go:31] will retry after 2.306075219s: waiting for machine to come up
	I0819 18:54:31.901725  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 18:54:31.902258  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | unable to find current IP address of domain kubernetes-upgrade-127646 in network mk-kubernetes-upgrade-127646
	I0819 18:54:31.902288  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | I0819 18:54:31.902207  419254 retry.go:31] will retry after 3.545618001s: waiting for machine to come up
	I0819 18:54:35.449811  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 18:54:35.450211  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | unable to find current IP address of domain kubernetes-upgrade-127646 in network mk-kubernetes-upgrade-127646
	I0819 18:54:35.450237  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | I0819 18:54:35.450172  419254 retry.go:31] will retry after 3.280358968s: waiting for machine to come up
	I0819 18:54:38.733646  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 18:54:38.734131  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | unable to find current IP address of domain kubernetes-upgrade-127646 in network mk-kubernetes-upgrade-127646
	I0819 18:54:38.734163  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | I0819 18:54:38.734067  419254 retry.go:31] will retry after 3.993623624s: waiting for machine to come up
	I0819 18:54:42.731256  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 18:54:42.731896  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Found IP for machine: 192.168.72.104
	I0819 18:54:42.731915  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Reserving static IP address...
	I0819 18:54:42.731947  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has current primary IP address 192.168.72.104 and MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 18:54:42.732330  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-127646", mac: "52:54:00:9a:26:74", ip: "192.168.72.104"} in network mk-kubernetes-upgrade-127646
	I0819 18:54:42.807954  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Reserved static IP address: 192.168.72.104
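
The "waiting for machine to come up" retries above are the driver polling libvirt's DHCP leases for the new MAC, sleeping for a growing interval between attempts (280ms up to roughly 4s in this run). A rough equivalent using virsh net-dhcp-leases is sketched below; leaseIP and the backoff schedule are illustrative rather than the actual retry.go implementation.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // leaseIP scans `virsh net-dhcp-leases <network>` for a line containing the
    // domain's MAC address and returns the IP column when a lease appears.
    func leaseIP(network, mac string) (string, bool) {
        out, err := exec.Command("virsh", "net-dhcp-leases", network).Output()
        if err != nil {
            return "", false
        }
        for _, line := range strings.Split(string(out), "\n") {
            if strings.Contains(line, mac) {
                fields := strings.Fields(line)
                if len(fields) >= 5 {
                    return fields[4], true // e.g. 192.168.72.104/24
                }
            }
        }
        return "", false
    }

    func main() {
        delay := 250 * time.Millisecond // grows each attempt, loosely matching the log
        for i := 0; i < 20; i++ {
            if ip, ok := leaseIP("mk-kubernetes-upgrade-127646", "52:54:00:9a:26:74"); ok {
                fmt.Println("found IP", ip)
                return
            }
            time.Sleep(delay)
            delay += delay / 2
        }
        fmt.Println("timed out waiting for a DHCP lease")
    }
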
	I0819 18:54:42.807999  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Waiting for SSH to be available...
	I0819 18:54:42.808018  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | Getting to WaitForSSH function...
	I0819 18:54:42.810645  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 18:54:42.811006  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:26:74", ip: ""} in network mk-kubernetes-upgrade-127646: {Iface:virbr4 ExpiryTime:2024-08-19 19:54:35 +0000 UTC Type:0 Mac:52:54:00:9a:26:74 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9a:26:74}
	I0819 18:54:42.811035  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined IP address 192.168.72.104 and MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 18:54:42.811192  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | Using SSH client type: external
	I0819 18:54:42.811222  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | Using SSH private key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/kubernetes-upgrade-127646/id_rsa (-rw-------)
	I0819 18:54:42.811258  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19468-372744/.minikube/machines/kubernetes-upgrade-127646/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 18:54:42.811268  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | About to run SSH command:
	I0819 18:54:42.811277  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | exit 0
	I0819 18:54:42.939428  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | SSH cmd err, output: <nil>: 
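
The WaitForSSH probe above simply runs `exit 0` through an external ssh client with the flag set printed in the log (no known-hosts persistence, key-only auth, short connect timeout). A minimal sketch of issuing that probe from Go follows; runRemote is a hypothetical helper and the key path is the one from this run, so this illustrates the flag set rather than minikube's sshutil code.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // runRemote runs one command on the guest with the same non-interactive ssh
    // options shown in the log for WaitForSSH.
    func runRemote(ip, keyPath, cmd string) ([]byte, error) {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "PasswordAuthentication=no",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            "docker@" + ip,
            cmd,
        }
        return exec.Command("ssh", args...).CombinedOutput()
    }

    func main() {
        out, err := runRemote("192.168.72.104",
            "/home/jenkins/minikube-integration/19468-372744/.minikube/machines/kubernetes-upgrade-127646/id_rsa",
            "exit 0")
        fmt.Printf("err=%v output=%q\n", err, out)
    }
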
	I0819 18:54:42.939743  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) KVM machine creation complete!
	I0819 18:54:42.940152  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetConfigRaw
	I0819 18:54:42.940672  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .DriverName
	I0819 18:54:42.940845  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .DriverName
	I0819 18:54:42.941004  418752 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 18:54:42.941020  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetState
	I0819 18:54:42.942407  418752 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 18:54:42.942421  418752 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 18:54:42.942427  418752 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 18:54:42.942433  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHHostname
	I0819 18:54:42.944947  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 18:54:42.945352  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:26:74", ip: ""} in network mk-kubernetes-upgrade-127646: {Iface:virbr4 ExpiryTime:2024-08-19 19:54:35 +0000 UTC Type:0 Mac:52:54:00:9a:26:74 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:kubernetes-upgrade-127646 Clientid:01:52:54:00:9a:26:74}
	I0819 18:54:42.945374  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined IP address 192.168.72.104 and MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 18:54:42.945516  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHPort
	I0819 18:54:42.945673  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHKeyPath
	I0819 18:54:42.945835  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHKeyPath
	I0819 18:54:42.946024  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHUsername
	I0819 18:54:42.946175  418752 main.go:141] libmachine: Using SSH client type: native
	I0819 18:54:42.946377  418752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0819 18:54:42.946391  418752 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 18:54:43.051641  418752 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 18:54:43.051669  418752 main.go:141] libmachine: Detecting the provisioner...
	I0819 18:54:43.051705  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHHostname
	I0819 18:54:43.054671  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 18:54:43.055106  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:26:74", ip: ""} in network mk-kubernetes-upgrade-127646: {Iface:virbr4 ExpiryTime:2024-08-19 19:54:35 +0000 UTC Type:0 Mac:52:54:00:9a:26:74 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:kubernetes-upgrade-127646 Clientid:01:52:54:00:9a:26:74}
	I0819 18:54:43.055133  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined IP address 192.168.72.104 and MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 18:54:43.055317  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHPort
	I0819 18:54:43.055560  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHKeyPath
	I0819 18:54:43.055775  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHKeyPath
	I0819 18:54:43.055970  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHUsername
	I0819 18:54:43.056157  418752 main.go:141] libmachine: Using SSH client type: native
	I0819 18:54:43.056358  418752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0819 18:54:43.056369  418752 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 18:54:43.168696  418752 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 18:54:43.168791  418752 main.go:141] libmachine: found compatible host: buildroot
	I0819 18:54:43.168802  418752 main.go:141] libmachine: Provisioning with buildroot...
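
Provisioner detection above works by reading /etc/os-release over SSH and keying off ID=buildroot. A small local sketch of that parsing step is below; osReleaseID is a hypothetical helper, and in the real flow the file contents come back over the SSH session rather than being read locally.

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // osReleaseID extracts the ID field from an os-release style file.
    func osReleaseID(path string) (string, error) {
        f, err := os.Open(path)
        if err != nil {
            return "", err
        }
        defer f.Close()
        s := bufio.NewScanner(f)
        for s.Scan() {
            line := s.Text()
            if strings.HasPrefix(line, "ID=") {
                return strings.Trim(strings.TrimPrefix(line, "ID="), `"`), nil
            }
        }
        return "", s.Err()
    }

    func main() {
        id, err := osReleaseID("/etc/os-release")
        fmt.Println(id, err) // "buildroot" on the guest in this run
    }
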
	I0819 18:54:43.168812  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetMachineName
	I0819 18:54:43.169074  418752 buildroot.go:166] provisioning hostname "kubernetes-upgrade-127646"
	I0819 18:54:43.169111  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetMachineName
	I0819 18:54:43.169314  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHHostname
	I0819 18:54:43.171968  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 18:54:43.172413  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:26:74", ip: ""} in network mk-kubernetes-upgrade-127646: {Iface:virbr4 ExpiryTime:2024-08-19 19:54:35 +0000 UTC Type:0 Mac:52:54:00:9a:26:74 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:kubernetes-upgrade-127646 Clientid:01:52:54:00:9a:26:74}
	I0819 18:54:43.172446  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined IP address 192.168.72.104 and MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 18:54:43.172670  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHPort
	I0819 18:54:43.172884  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHKeyPath
	I0819 18:54:43.173044  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHKeyPath
	I0819 18:54:43.173196  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHUsername
	I0819 18:54:43.173382  418752 main.go:141] libmachine: Using SSH client type: native
	I0819 18:54:43.173585  418752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0819 18:54:43.173600  418752 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-127646 && echo "kubernetes-upgrade-127646" | sudo tee /etc/hostname
	I0819 18:54:43.298164  418752 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-127646
	
	I0819 18:54:43.298204  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHHostname
	I0819 18:54:43.301075  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 18:54:43.301458  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:26:74", ip: ""} in network mk-kubernetes-upgrade-127646: {Iface:virbr4 ExpiryTime:2024-08-19 19:54:35 +0000 UTC Type:0 Mac:52:54:00:9a:26:74 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:kubernetes-upgrade-127646 Clientid:01:52:54:00:9a:26:74}
	I0819 18:54:43.301489  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined IP address 192.168.72.104 and MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 18:54:43.301642  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHPort
	I0819 18:54:43.301892  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHKeyPath
	I0819 18:54:43.302065  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHKeyPath
	I0819 18:54:43.302175  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHUsername
	I0819 18:54:43.302404  418752 main.go:141] libmachine: Using SSH client type: native
	I0819 18:54:43.302626  418752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0819 18:54:43.302651  418752 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-127646' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-127646/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-127646' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 18:54:43.416722  418752 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 18:54:43.416762  418752 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19468-372744/.minikube CaCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19468-372744/.minikube}
	I0819 18:54:43.416785  418752 buildroot.go:174] setting up certificates
	I0819 18:54:43.416796  418752 provision.go:84] configureAuth start
	I0819 18:54:43.416805  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetMachineName
	I0819 18:54:43.417100  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetIP
	I0819 18:54:43.420080  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 18:54:43.420499  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:26:74", ip: ""} in network mk-kubernetes-upgrade-127646: {Iface:virbr4 ExpiryTime:2024-08-19 19:54:35 +0000 UTC Type:0 Mac:52:54:00:9a:26:74 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:kubernetes-upgrade-127646 Clientid:01:52:54:00:9a:26:74}
	I0819 18:54:43.420522  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined IP address 192.168.72.104 and MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 18:54:43.420677  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHHostname
	I0819 18:54:43.422850  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 18:54:43.423183  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:26:74", ip: ""} in network mk-kubernetes-upgrade-127646: {Iface:virbr4 ExpiryTime:2024-08-19 19:54:35 +0000 UTC Type:0 Mac:52:54:00:9a:26:74 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:kubernetes-upgrade-127646 Clientid:01:52:54:00:9a:26:74}
	I0819 18:54:43.423231  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined IP address 192.168.72.104 and MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 18:54:43.423362  418752 provision.go:143] copyHostCerts
	I0819 18:54:43.423426  418752 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem, removing ...
	I0819 18:54:43.423449  418752 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 18:54:43.423519  418752 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem (1123 bytes)
	I0819 18:54:43.423714  418752 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem, removing ...
	I0819 18:54:43.423728  418752 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 18:54:43.423764  418752 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem (1675 bytes)
	I0819 18:54:43.423840  418752 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem, removing ...
	I0819 18:54:43.423851  418752 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 18:54:43.423880  418752 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem (1082 bytes)
	I0819 18:54:43.423940  418752 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-127646 san=[127.0.0.1 192.168.72.104 kubernetes-upgrade-127646 localhost minikube]
	I0819 18:54:43.558965  418752 provision.go:177] copyRemoteCerts
	I0819 18:54:43.559026  418752 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 18:54:43.559057  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHHostname
	I0819 18:54:43.561869  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 18:54:43.562159  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:26:74", ip: ""} in network mk-kubernetes-upgrade-127646: {Iface:virbr4 ExpiryTime:2024-08-19 19:54:35 +0000 UTC Type:0 Mac:52:54:00:9a:26:74 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:kubernetes-upgrade-127646 Clientid:01:52:54:00:9a:26:74}
	I0819 18:54:43.562184  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined IP address 192.168.72.104 and MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 18:54:43.562395  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHPort
	I0819 18:54:43.562592  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHKeyPath
	I0819 18:54:43.562745  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHUsername
	I0819 18:54:43.562887  418752 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/kubernetes-upgrade-127646/id_rsa Username:docker}
	I0819 18:54:43.646185  418752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 18:54:43.671421  418752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0819 18:54:43.695023  418752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 18:54:43.718872  418752 provision.go:87] duration metric: took 302.061521ms to configureAuth
	I0819 18:54:43.718904  418752 buildroot.go:189] setting minikube options for container-runtime
	I0819 18:54:43.719066  418752 config.go:182] Loaded profile config "kubernetes-upgrade-127646": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0819 18:54:43.719154  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHHostname
	I0819 18:54:43.721868  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 18:54:43.722199  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:26:74", ip: ""} in network mk-kubernetes-upgrade-127646: {Iface:virbr4 ExpiryTime:2024-08-19 19:54:35 +0000 UTC Type:0 Mac:52:54:00:9a:26:74 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:kubernetes-upgrade-127646 Clientid:01:52:54:00:9a:26:74}
	I0819 18:54:43.722223  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined IP address 192.168.72.104 and MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 18:54:43.722414  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHPort
	I0819 18:54:43.722643  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHKeyPath
	I0819 18:54:43.722777  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHKeyPath
	I0819 18:54:43.722888  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHUsername
	I0819 18:54:43.723026  418752 main.go:141] libmachine: Using SSH client type: native
	I0819 18:54:43.723196  418752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0819 18:54:43.723210  418752 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 18:54:43.991585  418752 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
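
The step above writes /etc/sysconfig/crio.minikube with an --insecure-registry entry for the service CIDR (10.96.0.0/12) and restarts crio; presumably the ISO's crio unit sources that file, which is an assumption based on the restart rather than something shown in the log. A sketch of composing the same remote command string is below; crioOptionsCmd is a hypothetical helper.

    package main

    import "fmt"

    // crioOptionsCmd builds the same remote command the log shows: write the
    // minikube-specific CRI-O options and restart the service so they apply.
    func crioOptionsCmd(serviceCIDR string) string {
        return "sudo mkdir -p /etc/sysconfig && printf %s \"\nCRIO_MINIKUBE_OPTIONS='--insecure-registry " +
            serviceCIDR + " '\n\" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio"
    }

    func main() {
        fmt.Println(crioOptionsCmd("10.96.0.0/12"))
    }
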
	
	I0819 18:54:43.991626  418752 main.go:141] libmachine: Checking connection to Docker...
	I0819 18:54:43.991640  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetURL
	I0819 18:54:43.992895  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | Using libvirt version 6000000
	I0819 18:54:43.995306  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 18:54:43.995620  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:26:74", ip: ""} in network mk-kubernetes-upgrade-127646: {Iface:virbr4 ExpiryTime:2024-08-19 19:54:35 +0000 UTC Type:0 Mac:52:54:00:9a:26:74 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:kubernetes-upgrade-127646 Clientid:01:52:54:00:9a:26:74}
	I0819 18:54:43.995659  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined IP address 192.168.72.104 and MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 18:54:43.995899  418752 main.go:141] libmachine: Docker is up and running!
	I0819 18:54:43.995925  418752 main.go:141] libmachine: Reticulating splines...
	I0819 18:54:43.995934  418752 client.go:171] duration metric: took 24.061393244s to LocalClient.Create
	I0819 18:54:43.995962  418752 start.go:167] duration metric: took 24.061461001s to libmachine.API.Create "kubernetes-upgrade-127646"
	I0819 18:54:43.995977  418752 start.go:293] postStartSetup for "kubernetes-upgrade-127646" (driver="kvm2")
	I0819 18:54:43.995989  418752 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 18:54:43.996014  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .DriverName
	I0819 18:54:43.996276  418752 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 18:54:43.996305  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHHostname
	I0819 18:54:43.998656  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 18:54:43.999039  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:26:74", ip: ""} in network mk-kubernetes-upgrade-127646: {Iface:virbr4 ExpiryTime:2024-08-19 19:54:35 +0000 UTC Type:0 Mac:52:54:00:9a:26:74 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:kubernetes-upgrade-127646 Clientid:01:52:54:00:9a:26:74}
	I0819 18:54:43.999073  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined IP address 192.168.72.104 and MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 18:54:43.999196  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHPort
	I0819 18:54:43.999379  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHKeyPath
	I0819 18:54:43.999547  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHUsername
	I0819 18:54:43.999695  418752 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/kubernetes-upgrade-127646/id_rsa Username:docker}
	I0819 18:54:44.087868  418752 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 18:54:44.092447  418752 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 18:54:44.092477  418752 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/addons for local assets ...
	I0819 18:54:44.092549  418752 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/files for local assets ...
	I0819 18:54:44.092658  418752 filesync.go:149] local asset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> 3800092.pem in /etc/ssl/certs
	I0819 18:54:44.092788  418752 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 18:54:44.103804  418752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 18:54:44.129457  418752 start.go:296] duration metric: took 133.463897ms for postStartSetup
	I0819 18:54:44.129521  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetConfigRaw
	I0819 18:54:44.130259  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetIP
	I0819 18:54:44.133184  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 18:54:44.133540  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:26:74", ip: ""} in network mk-kubernetes-upgrade-127646: {Iface:virbr4 ExpiryTime:2024-08-19 19:54:35 +0000 UTC Type:0 Mac:52:54:00:9a:26:74 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:kubernetes-upgrade-127646 Clientid:01:52:54:00:9a:26:74}
	I0819 18:54:44.133574  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined IP address 192.168.72.104 and MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 18:54:44.133800  418752 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kubernetes-upgrade-127646/config.json ...
	I0819 18:54:44.134005  418752 start.go:128] duration metric: took 24.220964928s to createHost
	I0819 18:54:44.134031  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHHostname
	I0819 18:54:44.136533  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 18:54:44.136946  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:26:74", ip: ""} in network mk-kubernetes-upgrade-127646: {Iface:virbr4 ExpiryTime:2024-08-19 19:54:35 +0000 UTC Type:0 Mac:52:54:00:9a:26:74 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:kubernetes-upgrade-127646 Clientid:01:52:54:00:9a:26:74}
	I0819 18:54:44.136988  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined IP address 192.168.72.104 and MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 18:54:44.137149  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHPort
	I0819 18:54:44.137383  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHKeyPath
	I0819 18:54:44.137549  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHKeyPath
	I0819 18:54:44.137729  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHUsername
	I0819 18:54:44.137868  418752 main.go:141] libmachine: Using SSH client type: native
	I0819 18:54:44.138601  418752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0819 18:54:44.138629  418752 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 18:54:44.248473  418752 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724093684.218766863
	
	I0819 18:54:44.248500  418752 fix.go:216] guest clock: 1724093684.218766863
	I0819 18:54:44.248511  418752 fix.go:229] Guest: 2024-08-19 18:54:44.218766863 +0000 UTC Remote: 2024-08-19 18:54:44.13401687 +0000 UTC m=+64.962908371 (delta=84.749993ms)
	I0819 18:54:44.248568  418752 fix.go:200] guest clock delta is within tolerance: 84.749993ms
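
The clock check above compares the guest's `date +%s.%N` output against the host's wall clock; here the guest reads 1724093684.218766863 while the host-side timestamp is 18:54:44.134016870, a delta of about 84.75ms, which is inside tolerance so no clock sync is forced. A small sketch of that comparison follows; guestTime and the one-second tolerance are illustrative, not the constants fix.go actually uses.

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // guestTime converts the "seconds.nanoseconds" output of `date +%s.%N`
    // into a time.Time.
    func guestTime(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := guestTime("1724093684.218766863") // value from the log
        if err != nil {
            panic(err)
        }
        remote := time.Date(2024, 8, 19, 18, 54, 44, 134016870, time.UTC) // host side, from the log
        delta := guest.Sub(remote)
        const tolerance = time.Second // illustrative threshold only
        fmt.Printf("delta=%v within=%v\n", delta, delta < tolerance && delta > -tolerance)
    }
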
	I0819 18:54:44.248578  418752 start.go:83] releasing machines lock for "kubernetes-upgrade-127646", held for 24.335746498s
	I0819 18:54:44.248607  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .DriverName
	I0819 18:54:44.248959  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetIP
	I0819 18:54:44.251848  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 18:54:44.252170  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:26:74", ip: ""} in network mk-kubernetes-upgrade-127646: {Iface:virbr4 ExpiryTime:2024-08-19 19:54:35 +0000 UTC Type:0 Mac:52:54:00:9a:26:74 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:kubernetes-upgrade-127646 Clientid:01:52:54:00:9a:26:74}
	I0819 18:54:44.252200  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined IP address 192.168.72.104 and MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 18:54:44.252441  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .DriverName
	I0819 18:54:44.252962  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .DriverName
	I0819 18:54:44.253146  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .DriverName
	I0819 18:54:44.253234  418752 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 18:54:44.253290  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHHostname
	I0819 18:54:44.253429  418752 ssh_runner.go:195] Run: cat /version.json
	I0819 18:54:44.253458  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHHostname
	I0819 18:54:44.256393  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 18:54:44.256610  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 18:54:44.256779  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:26:74", ip: ""} in network mk-kubernetes-upgrade-127646: {Iface:virbr4 ExpiryTime:2024-08-19 19:54:35 +0000 UTC Type:0 Mac:52:54:00:9a:26:74 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:kubernetes-upgrade-127646 Clientid:01:52:54:00:9a:26:74}
	I0819 18:54:44.256808  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined IP address 192.168.72.104 and MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 18:54:44.256944  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:26:74", ip: ""} in network mk-kubernetes-upgrade-127646: {Iface:virbr4 ExpiryTime:2024-08-19 19:54:35 +0000 UTC Type:0 Mac:52:54:00:9a:26:74 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:kubernetes-upgrade-127646 Clientid:01:52:54:00:9a:26:74}
	I0819 18:54:44.256966  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHPort
	I0819 18:54:44.256975  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined IP address 192.168.72.104 and MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 18:54:44.257181  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHPort
	I0819 18:54:44.257258  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHKeyPath
	I0819 18:54:44.257446  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHKeyPath
	I0819 18:54:44.257453  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHUsername
	I0819 18:54:44.257649  418752 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/kubernetes-upgrade-127646/id_rsa Username:docker}
	I0819 18:54:44.257692  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHUsername
	I0819 18:54:44.257851  418752 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/kubernetes-upgrade-127646/id_rsa Username:docker}
	I0819 18:54:44.336841  418752 ssh_runner.go:195] Run: systemctl --version
	I0819 18:54:44.361929  418752 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 18:54:44.524323  418752 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 18:54:44.530726  418752 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 18:54:44.530809  418752 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 18:54:44.551197  418752 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 18:54:44.551221  418752 start.go:495] detecting cgroup driver to use...
	I0819 18:54:44.551294  418752 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 18:54:44.569242  418752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 18:54:44.584486  418752 docker.go:217] disabling cri-docker service (if available) ...
	I0819 18:54:44.584551  418752 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 18:54:44.598337  418752 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 18:54:44.612441  418752 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 18:54:44.736493  418752 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 18:54:44.902379  418752 docker.go:233] disabling docker service ...
	I0819 18:54:44.902447  418752 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 18:54:44.917893  418752 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 18:54:44.930338  418752 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 18:54:45.078397  418752 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 18:54:45.194261  418752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 18:54:45.208107  418752 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 18:54:45.226032  418752 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0819 18:54:45.226096  418752 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:54:45.236008  418752 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 18:54:45.236077  418752 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:54:45.246201  418752 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:54:45.256390  418752 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
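Taken together, the three sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with the following keys; this is only a summary sketch, with values copied from the commands in this log:

	pause_image = "registry.k8s.io/pause:3.2"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"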
	I0819 18:54:45.266427  418752 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 18:54:45.276438  418752 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 18:54:45.285456  418752 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 18:54:45.285518  418752 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 18:54:45.299146  418752 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
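A quick way to confirm the effect of the modprobe and ip_forward commands above would be the following checks (hypothetical; not run by the test):

	$ lsmod | grep br_netfilter                    # module loaded by the modprobe above
	$ sysctl net.bridge.bridge-nf-call-iptables    # should now resolve instead of failing with status 255
	$ cat /proc/sys/net/ipv4/ip_forward            # expected: 1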
	I0819 18:54:45.309105  418752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:54:45.430688  418752 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 18:54:45.569076  418752 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 18:54:45.569155  418752 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 18:54:45.573954  418752 start.go:563] Will wait 60s for crictl version
	I0819 18:54:45.574031  418752 ssh_runner.go:195] Run: which crictl
	I0819 18:54:45.577776  418752 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 18:54:45.619931  418752 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 18:54:45.620023  418752 ssh_runner.go:195] Run: crio --version
	I0819 18:54:45.655778  418752 ssh_runner.go:195] Run: crio --version
	I0819 18:54:45.689516  418752 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0819 18:54:45.690753  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetIP
	I0819 18:54:45.693435  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 18:54:45.693877  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:26:74", ip: ""} in network mk-kubernetes-upgrade-127646: {Iface:virbr4 ExpiryTime:2024-08-19 19:54:35 +0000 UTC Type:0 Mac:52:54:00:9a:26:74 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:kubernetes-upgrade-127646 Clientid:01:52:54:00:9a:26:74}
	I0819 18:54:45.693909  418752 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined IP address 192.168.72.104 and MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 18:54:45.694185  418752 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0819 18:54:45.698433  418752 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
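Based on the one-liner above, the guest's /etc/hosts gains the following entry (value taken from this log):

	192.168.72.1	host.minikube.internal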
	I0819 18:54:45.711302  418752 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-127646 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.20.0 ClusterName:kubernetes-upgrade-127646 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.104 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 18:54:45.711427  418752 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 18:54:45.711475  418752 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 18:54:45.744112  418752 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 18:54:45.744192  418752 ssh_runner.go:195] Run: which lz4
	I0819 18:54:45.748132  418752 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 18:54:45.752485  418752 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 18:54:45.752522  418752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0819 18:54:47.338877  418752 crio.go:462] duration metric: took 1.590787215s to copy over tarball
	I0819 18:54:47.338970  418752 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 18:54:49.835358  418752 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.496346541s)
	I0819 18:54:49.835412  418752 crio.go:469] duration metric: took 2.496489136s to extract the tarball
	I0819 18:54:49.835424  418752 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 18:54:49.879404  418752 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 18:54:49.929693  418752 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 18:54:49.929720  418752 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 18:54:49.929767  418752 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 18:54:49.929810  418752 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0819 18:54:49.929832  418752 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 18:54:49.929875  418752 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0819 18:54:49.929804  418752 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 18:54:49.929840  418752 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 18:54:49.929844  418752 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0819 18:54:49.929924  418752 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 18:54:49.931222  418752 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 18:54:49.931238  418752 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0819 18:54:49.931241  418752 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0819 18:54:49.931223  418752 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 18:54:49.931297  418752 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0819 18:54:49.931318  418752 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 18:54:49.931336  418752 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 18:54:49.931284  418752 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 18:54:50.116254  418752 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0819 18:54:50.145061  418752 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0819 18:54:50.164653  418752 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0819 18:54:50.164705  418752 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0819 18:54:50.164754  418752 ssh_runner.go:195] Run: which crictl
	I0819 18:54:50.202652  418752 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 18:54:50.202659  418752 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0819 18:54:50.202741  418752 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 18:54:50.202780  418752 ssh_runner.go:195] Run: which crictl
	I0819 18:54:50.240349  418752 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 18:54:50.240368  418752 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 18:54:50.265030  418752 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0819 18:54:50.267272  418752 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0819 18:54:50.267976  418752 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0819 18:54:50.273151  418752 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 18:54:50.281872  418752 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0819 18:54:50.364431  418752 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 18:54:50.364499  418752 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 18:54:50.493827  418752 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0819 18:54:50.493883  418752 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 18:54:50.493879  418752 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0819 18:54:50.493922  418752 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 18:54:50.493937  418752 ssh_runner.go:195] Run: which crictl
	I0819 18:54:50.493960  418752 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0819 18:54:50.493972  418752 ssh_runner.go:195] Run: which crictl
	I0819 18:54:50.494005  418752 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0819 18:54:50.494032  418752 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0819 18:54:50.494045  418752 ssh_runner.go:195] Run: which crictl
	I0819 18:54:50.494060  418752 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 18:54:50.494098  418752 ssh_runner.go:195] Run: which crictl
	I0819 18:54:50.494122  418752 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 18:54:50.494061  418752 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0819 18:54:50.494180  418752 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0819 18:54:50.494208  418752 ssh_runner.go:195] Run: which crictl
	I0819 18:54:50.511812  418752 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 18:54:50.511957  418752 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0819 18:54:50.520782  418752 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 18:54:50.581343  418752 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 18:54:50.581370  418752 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0819 18:54:50.581464  418752 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 18:54:50.581470  418752 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 18:54:50.585833  418752 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 18:54:50.585833  418752 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 18:54:50.656242  418752 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 18:54:50.692062  418752 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 18:54:50.692144  418752 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 18:54:50.692161  418752 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 18:54:50.711601  418752 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 18:54:50.762736  418752 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 18:54:50.822145  418752 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 18:54:50.843475  418752 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 18:54:50.843607  418752 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0819 18:54:50.843690  418752 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0819 18:54:50.843862  418752 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 18:54:50.860059  418752 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0819 18:54:51.035096  418752 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0819 18:54:51.035128  418752 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0819 18:54:51.035194  418752 cache_images.go:92] duration metric: took 1.105460734s to LoadCachedImages
	W0819 18:54:51.035321  418752 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0819 18:54:51.035379  418752 kubeadm.go:934] updating node { 192.168.72.104 8443 v1.20.0 crio true true} ...
	I0819 18:54:51.035509  418752 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-127646 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-127646 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
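The kubelet flags and config above are installed as a systemd drop-in (see the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below); on the guest they could be inspected with, for example:

	$ systemctl cat kubelet
	$ cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf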
	I0819 18:54:51.035591  418752 ssh_runner.go:195] Run: crio config
	I0819 18:54:51.105039  418752 cni.go:84] Creating CNI manager for ""
	I0819 18:54:51.105070  418752 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 18:54:51.105085  418752 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 18:54:51.105105  418752 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.104 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-127646 NodeName:kubernetes-upgrade-127646 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0819 18:54:51.105241  418752 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.104
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-127646"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.104
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.104"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 18:54:51.105307  418752 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0819 18:54:51.116525  418752 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 18:54:51.116596  418752 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 18:54:51.127849  418752 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0819 18:54:51.150464  418752 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 18:54:51.172730  418752 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
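The rendered kubeadm config is staged as /var/tmp/minikube/kubeadm.yaml.new here and copied to /var/tmp/minikube/kubeadm.yaml further down in this log. A hypothetical way to sanity-check it on the guest, outside of what this test actually does, would be a dry run:

	$ sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run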
	I0819 18:54:51.196024  418752 ssh_runner.go:195] Run: grep 192.168.72.104	control-plane.minikube.internal$ /etc/hosts
	I0819 18:54:51.201073  418752 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.104	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 18:54:51.214689  418752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:54:51.347873  418752 ssh_runner.go:195] Run: sudo systemctl start kubelet
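Once the kubelet is started, its local health endpoint is what kubeadm's kubelet-check probes later in this log; a manual equivalent would be (hypothetical):

	$ curl -sSL http://localhost:10248/healthz    # expected "ok" when healthy; in this run it fails with connection refused, as shown below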
	I0819 18:54:51.367268  418752 certs.go:68] Setting up /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kubernetes-upgrade-127646 for IP: 192.168.72.104
	I0819 18:54:51.367297  418752 certs.go:194] generating shared ca certs ...
	I0819 18:54:51.367321  418752 certs.go:226] acquiring lock for ca certs: {Name:mk639e03f593e0bccac045f6e9f5ba3b96cc81e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:54:51.367516  418752 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key
	I0819 18:54:51.367568  418752 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key
	I0819 18:54:51.367580  418752 certs.go:256] generating profile certs ...
	I0819 18:54:51.367654  418752 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kubernetes-upgrade-127646/client.key
	I0819 18:54:51.367710  418752 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kubernetes-upgrade-127646/client.crt with IP's: []
	I0819 18:54:51.584001  418752 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kubernetes-upgrade-127646/client.crt ...
	I0819 18:54:51.584042  418752 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kubernetes-upgrade-127646/client.crt: {Name:mk310398ab95390aa82dcbeb5848984e4eafaac0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:54:51.623251  418752 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kubernetes-upgrade-127646/client.key ...
	I0819 18:54:51.623292  418752 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kubernetes-upgrade-127646/client.key: {Name:mk725a65dd00380680a8b5631ed906724e58be5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:54:51.639461  418752 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kubernetes-upgrade-127646/apiserver.key.7a265075
	I0819 18:54:51.639508  418752 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kubernetes-upgrade-127646/apiserver.crt.7a265075 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.104]
	I0819 18:54:51.781377  418752 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kubernetes-upgrade-127646/apiserver.crt.7a265075 ...
	I0819 18:54:51.781409  418752 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kubernetes-upgrade-127646/apiserver.crt.7a265075: {Name:mk9614dc9a4bbc3ea41b26c3691379bb3a4ce4ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:54:51.803720  418752 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kubernetes-upgrade-127646/apiserver.key.7a265075 ...
	I0819 18:54:51.803771  418752 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kubernetes-upgrade-127646/apiserver.key.7a265075: {Name:mk8960e4dd572835d27fe9512d73b5cfa066a388 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:54:51.803944  418752 certs.go:381] copying /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kubernetes-upgrade-127646/apiserver.crt.7a265075 -> /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kubernetes-upgrade-127646/apiserver.crt
	I0819 18:54:51.804103  418752 certs.go:385] copying /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kubernetes-upgrade-127646/apiserver.key.7a265075 -> /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kubernetes-upgrade-127646/apiserver.key
	I0819 18:54:51.804204  418752 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kubernetes-upgrade-127646/proxy-client.key
	I0819 18:54:51.804229  418752 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kubernetes-upgrade-127646/proxy-client.crt with IP's: []
	I0819 18:54:52.001621  418752 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kubernetes-upgrade-127646/proxy-client.crt ...
	I0819 18:54:52.001655  418752 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kubernetes-upgrade-127646/proxy-client.crt: {Name:mk8e686f1bd0f70214fa2eeee968b172fefe5aef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:54:52.049688  418752 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kubernetes-upgrade-127646/proxy-client.key ...
	I0819 18:54:52.049731  418752 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kubernetes-upgrade-127646/proxy-client.key: {Name:mkd66982ce85fc3f8ace6d59a7fa4d73c3a87a04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:54:52.050023  418752 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem (1338 bytes)
	W0819 18:54:52.050071  418752 certs.go:480] ignoring /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009_empty.pem, impossibly tiny 0 bytes
	I0819 18:54:52.050087  418752 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 18:54:52.050119  418752 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem (1082 bytes)
	I0819 18:54:52.050150  418752 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem (1123 bytes)
	I0819 18:54:52.050179  418752 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem (1675 bytes)
	I0819 18:54:52.050239  418752 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 18:54:52.050868  418752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 18:54:52.078826  418752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 18:54:52.104606  418752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 18:54:52.131083  418752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 18:54:52.157546  418752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kubernetes-upgrade-127646/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0819 18:54:52.186642  418752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kubernetes-upgrade-127646/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 18:54:52.267962  418752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kubernetes-upgrade-127646/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 18:54:52.307397  418752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kubernetes-upgrade-127646/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 18:54:52.332622  418752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem --> /usr/share/ca-certificates/380009.pem (1338 bytes)
	I0819 18:54:52.357642  418752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /usr/share/ca-certificates/3800092.pem (1708 bytes)
	I0819 18:54:52.382894  418752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 18:54:52.411135  418752 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 18:54:52.429637  418752 ssh_runner.go:195] Run: openssl version
	I0819 18:54:52.435939  418752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/380009.pem && ln -fs /usr/share/ca-certificates/380009.pem /etc/ssl/certs/380009.pem"
	I0819 18:54:52.447889  418752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/380009.pem
	I0819 18:54:52.452735  418752 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:56 /usr/share/ca-certificates/380009.pem
	I0819 18:54:52.452796  418752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/380009.pem
	I0819 18:54:52.459178  418752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/380009.pem /etc/ssl/certs/51391683.0"
	I0819 18:54:52.471108  418752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3800092.pem && ln -fs /usr/share/ca-certificates/3800092.pem /etc/ssl/certs/3800092.pem"
	I0819 18:54:52.482780  418752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3800092.pem
	I0819 18:54:52.487669  418752 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:56 /usr/share/ca-certificates/3800092.pem
	I0819 18:54:52.487747  418752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3800092.pem
	I0819 18:54:52.493960  418752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3800092.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 18:54:52.505351  418752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 18:54:52.517127  418752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:54:52.522361  418752 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:54:52.522428  418752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:54:52.528185  418752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
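The hash-named symlinks created above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's hashed-CA lookup convention: the filename is the subject hash printed by openssl x509 -hash for each certificate. The last step could be verified with, for example (expected output inferred from the symlink names in this log):

	$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	b5213941
	$ readlink /etc/ssl/certs/b5213941.0
	/etc/ssl/certs/minikubeCA.pem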
	I0819 18:54:52.542146  418752 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 18:54:52.546748  418752 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 18:54:52.546816  418752 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-127646 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.20.0 ClusterName:kubernetes-upgrade-127646 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.104 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:54:52.546900  418752 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 18:54:52.546992  418752 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 18:54:52.585701  418752 cri.go:89] found id: ""
	I0819 18:54:52.585782  418752 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 18:54:52.597112  418752 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 18:54:52.609375  418752 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 18:54:52.619696  418752 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 18:54:52.619723  418752 kubeadm.go:157] found existing configuration files:
	
	I0819 18:54:52.619781  418752 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 18:54:52.629536  418752 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 18:54:52.629611  418752 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 18:54:52.640128  418752 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 18:54:52.650424  418752 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 18:54:52.650512  418752 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 18:54:52.661526  418752 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 18:54:52.672100  418752 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 18:54:52.672173  418752 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 18:54:52.683018  418752 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 18:54:52.693057  418752 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 18:54:52.693120  418752 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 18:54:52.703601  418752 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 18:54:52.973876  418752 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 18:56:50.771744  418752 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 18:56:50.771902  418752 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0819 18:56:50.773548  418752 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 18:56:50.773648  418752 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 18:56:50.773751  418752 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 18:56:50.773860  418752 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 18:56:50.773980  418752 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 18:56:50.774075  418752 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 18:56:50.775786  418752 out.go:235]   - Generating certificates and keys ...
	I0819 18:56:50.775869  418752 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 18:56:50.775952  418752 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 18:56:50.776052  418752 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 18:56:50.776148  418752 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 18:56:50.776244  418752 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 18:56:50.776321  418752 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 18:56:50.776409  418752 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 18:56:50.776582  418752 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-127646 localhost] and IPs [192.168.72.104 127.0.0.1 ::1]
	I0819 18:56:50.776661  418752 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 18:56:50.776824  418752 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-127646 localhost] and IPs [192.168.72.104 127.0.0.1 ::1]
	I0819 18:56:50.776900  418752 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 18:56:50.776992  418752 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 18:56:50.777055  418752 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 18:56:50.777137  418752 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 18:56:50.777215  418752 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 18:56:50.777283  418752 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 18:56:50.777382  418752 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 18:56:50.777461  418752 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 18:56:50.777612  418752 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 18:56:50.777728  418752 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 18:56:50.777793  418752 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 18:56:50.777888  418752 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 18:56:50.779448  418752 out.go:235]   - Booting up control plane ...
	I0819 18:56:50.779552  418752 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 18:56:50.779646  418752 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 18:56:50.779773  418752 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 18:56:50.779910  418752 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 18:56:50.780146  418752 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 18:56:50.780216  418752 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 18:56:50.780323  418752 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:56:50.780588  418752 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:56:50.780691  418752 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:56:50.780954  418752 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:56:50.781047  418752 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:56:50.781285  418752 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:56:50.781370  418752 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:56:50.781547  418752 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:56:50.781633  418752 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:56:50.781875  418752 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:56:50.781891  418752 kubeadm.go:310] 
	I0819 18:56:50.781924  418752 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 18:56:50.781987  418752 kubeadm.go:310] 		timed out waiting for the condition
	I0819 18:56:50.782005  418752 kubeadm.go:310] 
	I0819 18:56:50.782048  418752 kubeadm.go:310] 	This error is likely caused by:
	I0819 18:56:50.782094  418752 kubeadm.go:310] 		- The kubelet is not running
	I0819 18:56:50.782223  418752 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 18:56:50.782237  418752 kubeadm.go:310] 
	I0819 18:56:50.782388  418752 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 18:56:50.782441  418752 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 18:56:50.782484  418752 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 18:56:50.782493  418752 kubeadm.go:310] 
	I0819 18:56:50.782632  418752 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 18:56:50.782732  418752 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 18:56:50.782745  418752 kubeadm.go:310] 
	I0819 18:56:50.782877  418752 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 18:56:50.783002  418752 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 18:56:50.783112  418752 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 18:56:50.783209  418752 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 18:56:50.783289  418752 kubeadm.go:310] 
	W0819 18:56:50.783435  418752 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-127646 localhost] and IPs [192.168.72.104 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-127646 localhost] and IPs [192.168.72.104 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-127646 localhost] and IPs [192.168.72.104 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-127646 localhost] and IPs [192.168.72.104 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0819 18:56:50.783482  418752 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 18:56:52.146298  418752 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.36277615s)
	I0819 18:56:52.146429  418752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:56:52.164031  418752 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 18:56:52.178704  418752 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 18:56:52.178732  418752 kubeadm.go:157] found existing configuration files:
	
	I0819 18:56:52.178798  418752 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 18:56:52.191839  418752 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 18:56:52.191973  418752 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 18:56:52.205432  418752 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 18:56:52.218423  418752 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 18:56:52.218503  418752 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 18:56:52.231909  418752 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 18:56:52.244839  418752 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 18:56:52.244906  418752 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 18:56:52.258460  418752 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 18:56:52.270889  418752 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 18:56:52.270973  418752 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 18:56:52.282649  418752 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 18:56:52.360808  418752 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 18:56:52.360930  418752 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 18:56:52.519779  418752 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 18:56:52.519915  418752 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 18:56:52.520091  418752 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 18:56:52.750815  418752 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 18:56:52.753567  418752 out.go:235]   - Generating certificates and keys ...
	I0819 18:56:52.753681  418752 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 18:56:52.753771  418752 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 18:56:52.753869  418752 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 18:56:52.753976  418752 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 18:56:52.754119  418752 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 18:56:52.754213  418752 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 18:56:52.754306  418752 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 18:56:52.754421  418752 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 18:56:52.754534  418752 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 18:56:52.754637  418752 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 18:56:52.754692  418752 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 18:56:52.754769  418752 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 18:56:53.146686  418752 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 18:56:53.513978  418752 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 18:56:53.599276  418752 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 18:56:53.706803  418752 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 18:56:53.724962  418752 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 18:56:53.725960  418752 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 18:56:53.726026  418752 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 18:56:53.884182  418752 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 18:56:53.886107  418752 out.go:235]   - Booting up control plane ...
	I0819 18:56:53.886240  418752 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 18:56:53.894140  418752 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 18:56:53.900349  418752 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 18:56:53.900467  418752 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 18:56:53.904280  418752 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 18:57:33.907312  418752 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 18:57:33.907631  418752 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:57:33.907947  418752 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:57:38.908575  418752 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:57:38.908881  418752 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:57:48.909486  418752 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:57:48.909786  418752 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:58:08.908664  418752 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:58:08.908925  418752 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:58:48.907952  418752 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:58:48.908228  418752 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:58:48.908263  418752 kubeadm.go:310] 
	I0819 18:58:48.908330  418752 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 18:58:48.908387  418752 kubeadm.go:310] 		timed out waiting for the condition
	I0819 18:58:48.908398  418752 kubeadm.go:310] 
	I0819 18:58:48.908443  418752 kubeadm.go:310] 	This error is likely caused by:
	I0819 18:58:48.908488  418752 kubeadm.go:310] 		- The kubelet is not running
	I0819 18:58:48.908617  418752 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 18:58:48.908628  418752 kubeadm.go:310] 
	I0819 18:58:48.908792  418752 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 18:58:48.908847  418752 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 18:58:48.908889  418752 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 18:58:48.908899  418752 kubeadm.go:310] 
	I0819 18:58:48.909045  418752 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 18:58:48.909158  418752 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 18:58:48.909170  418752 kubeadm.go:310] 
	I0819 18:58:48.909319  418752 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 18:58:48.909464  418752 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 18:58:48.909564  418752 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 18:58:48.909681  418752 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 18:58:48.909705  418752 kubeadm.go:310] 
	I0819 18:58:48.910677  418752 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 18:58:48.910794  418752 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 18:58:48.910909  418752 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0819 18:58:48.911028  418752 kubeadm.go:394] duration metric: took 3m56.364214166s to StartCluster
	I0819 18:58:48.911098  418752 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:58:48.911190  418752 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:58:48.972097  418752 cri.go:89] found id: ""
	I0819 18:58:48.972149  418752 logs.go:276] 0 containers: []
	W0819 18:58:48.972164  418752 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:58:48.972173  418752 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:58:48.972245  418752 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:58:49.022058  418752 cri.go:89] found id: ""
	I0819 18:58:49.022097  418752 logs.go:276] 0 containers: []
	W0819 18:58:49.022109  418752 logs.go:278] No container was found matching "etcd"
	I0819 18:58:49.022117  418752 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:58:49.022186  418752 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:58:49.074126  418752 cri.go:89] found id: ""
	I0819 18:58:49.074160  418752 logs.go:276] 0 containers: []
	W0819 18:58:49.074171  418752 logs.go:278] No container was found matching "coredns"
	I0819 18:58:49.074179  418752 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:58:49.074246  418752 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:58:49.122647  418752 cri.go:89] found id: ""
	I0819 18:58:49.122680  418752 logs.go:276] 0 containers: []
	W0819 18:58:49.122692  418752 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:58:49.122700  418752 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:58:49.122764  418752 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:58:49.170557  418752 cri.go:89] found id: ""
	I0819 18:58:49.170592  418752 logs.go:276] 0 containers: []
	W0819 18:58:49.170605  418752 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:58:49.170614  418752 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:58:49.170677  418752 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:58:49.221796  418752 cri.go:89] found id: ""
	I0819 18:58:49.221891  418752 logs.go:276] 0 containers: []
	W0819 18:58:49.221912  418752 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:58:49.221945  418752 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:58:49.222021  418752 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:58:49.272023  418752 cri.go:89] found id: ""
	I0819 18:58:49.272056  418752 logs.go:276] 0 containers: []
	W0819 18:58:49.272068  418752 logs.go:278] No container was found matching "kindnet"
	I0819 18:58:49.272082  418752 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:58:49.272098  418752 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:58:49.386986  418752 logs.go:123] Gathering logs for container status ...
	I0819 18:58:49.387031  418752 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:58:49.432407  418752 logs.go:123] Gathering logs for kubelet ...
	I0819 18:58:49.432450  418752 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:58:49.516780  418752 logs.go:123] Gathering logs for dmesg ...
	I0819 18:58:49.516829  418752 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:58:49.535693  418752 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:58:49.535731  418752 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:58:49.707442  418752 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0819 18:58:49.707476  418752 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0819 18:58:49.707543  418752 out.go:270] * 
	* 
	W0819 18:58:49.707687  418752 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 18:58:49.707719  418752 out.go:270] * 
	* 
	W0819 18:58:49.708782  418752 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 18:58:49.712280  418752 out.go:201] 
	W0819 18:58:49.713634  418752 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 18:58:49.713697  418752 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0819 18:58:49.713726  418752 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0819 18:58:49.715438  418752 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-127646 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
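A minimal sketch of the retry implied by the suggestion in the log above, assuming the kubelet health-check failure really is the cgroup-driver mismatch it hints at; every flag except the added --extra-config is copied from the failed invocation, and whether this clears the failure on this host is not verified here:

	out/minikube-linux-amd64 start -p kubernetes-upgrade-127646 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd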
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-127646
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-127646: (1.504938434s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-127646 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-127646 status --format={{.Host}}: exit status 7 (63.882887ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-127646 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-127646 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (57.259369725s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-127646 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-127646 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-127646 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (97.785346ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-127646] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19468
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19468-372744/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-372744/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-127646
	    minikube start -p kubernetes-upgrade-127646 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1276462 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-127646 --kubernetes-version=v1.31.0
	    

                                                
                                                
** /stderr **
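The exit status 106 above is the expected refusal path: minikube will not downgrade an existing v1.31.0 cluster in place. If a downgrade were actually wanted (it is not, in this test), the recovery is the delete-and-recreate sequence quoted in the suggestion, e.g.:

	minikube delete -p kubernetes-upgrade-127646
	minikube start -p kubernetes-upgrade-127646 --kubernetes-version=v1.20.0
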
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-127646 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-127646 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m2.616783747s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-08-19 19:00:51.416326307 +0000 UTC m=+4598.616253129
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-127646 -n kubernetes-upgrade-127646
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-127646 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-127646 logs -n 25: (2.108354434s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kindnet-571803 sudo                               | kindnet-571803 | jenkins | v1.33.1 | 19 Aug 24 19:00 UTC | 19 Aug 24 19:00 UTC |
	|         | systemctl status kubelet --all                       |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p kindnet-571803 sudo                               | kindnet-571803 | jenkins | v1.33.1 | 19 Aug 24 19:00 UTC | 19 Aug 24 19:00 UTC |
	|         | systemctl cat kubelet                                |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p kindnet-571803 sudo                               | kindnet-571803 | jenkins | v1.33.1 | 19 Aug 24 19:00 UTC | 19 Aug 24 19:00 UTC |
	|         | journalctl -xeu kubelet --all                        |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p kindnet-571803 sudo cat                           | kindnet-571803 | jenkins | v1.33.1 | 19 Aug 24 19:00 UTC | 19 Aug 24 19:00 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                |         |         |                     |                     |
	| ssh     | -p kindnet-571803 sudo cat                           | kindnet-571803 | jenkins | v1.33.1 | 19 Aug 24 19:00 UTC | 19 Aug 24 19:00 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                |         |         |                     |                     |
	| ssh     | -p kindnet-571803 sudo                               | kindnet-571803 | jenkins | v1.33.1 | 19 Aug 24 19:00 UTC |                     |
	|         | systemctl status docker --all                        |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p kindnet-571803 sudo                               | kindnet-571803 | jenkins | v1.33.1 | 19 Aug 24 19:00 UTC | 19 Aug 24 19:00 UTC |
	|         | systemctl cat docker                                 |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p kindnet-571803 sudo cat                           | kindnet-571803 | jenkins | v1.33.1 | 19 Aug 24 19:00 UTC | 19 Aug 24 19:00 UTC |
	|         | /etc/docker/daemon.json                              |                |         |         |                     |                     |
	| ssh     | -p kindnet-571803 sudo docker                        | kindnet-571803 | jenkins | v1.33.1 | 19 Aug 24 19:00 UTC |                     |
	|         | system info                                          |                |         |         |                     |                     |
	| ssh     | -p kindnet-571803 sudo                               | kindnet-571803 | jenkins | v1.33.1 | 19 Aug 24 19:00 UTC |                     |
	|         | systemctl status cri-docker                          |                |         |         |                     |                     |
	|         | --all --full --no-pager                              |                |         |         |                     |                     |
	| ssh     | -p kindnet-571803 sudo                               | kindnet-571803 | jenkins | v1.33.1 | 19 Aug 24 19:00 UTC | 19 Aug 24 19:00 UTC |
	|         | systemctl cat cri-docker                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p kindnet-571803 sudo cat                           | kindnet-571803 | jenkins | v1.33.1 | 19 Aug 24 19:00 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                |         |         |                     |                     |
	| ssh     | -p kindnet-571803 sudo cat                           | kindnet-571803 | jenkins | v1.33.1 | 19 Aug 24 19:00 UTC | 19 Aug 24 19:00 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                |         |         |                     |                     |
	| ssh     | -p kindnet-571803 sudo                               | kindnet-571803 | jenkins | v1.33.1 | 19 Aug 24 19:00 UTC | 19 Aug 24 19:00 UTC |
	|         | cri-dockerd --version                                |                |         |         |                     |                     |
	| ssh     | -p kindnet-571803 sudo                               | kindnet-571803 | jenkins | v1.33.1 | 19 Aug 24 19:00 UTC |                     |
	|         | systemctl status containerd                          |                |         |         |                     |                     |
	|         | --all --full --no-pager                              |                |         |         |                     |                     |
	| ssh     | -p kindnet-571803 sudo                               | kindnet-571803 | jenkins | v1.33.1 | 19 Aug 24 19:00 UTC | 19 Aug 24 19:00 UTC |
	|         | systemctl cat containerd                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p kindnet-571803 sudo cat                           | kindnet-571803 | jenkins | v1.33.1 | 19 Aug 24 19:00 UTC | 19 Aug 24 19:00 UTC |
	|         | /lib/systemd/system/containerd.service               |                |         |         |                     |                     |
	| ssh     | -p kindnet-571803 sudo cat                           | kindnet-571803 | jenkins | v1.33.1 | 19 Aug 24 19:00 UTC | 19 Aug 24 19:00 UTC |
	|         | /etc/containerd/config.toml                          |                |         |         |                     |                     |
	| ssh     | -p kindnet-571803 sudo                               | kindnet-571803 | jenkins | v1.33.1 | 19 Aug 24 19:00 UTC | 19 Aug 24 19:00 UTC |
	|         | containerd config dump                               |                |         |         |                     |                     |
	| ssh     | -p kindnet-571803 sudo                               | kindnet-571803 | jenkins | v1.33.1 | 19 Aug 24 19:00 UTC | 19 Aug 24 19:00 UTC |
	|         | systemctl status crio --all                          |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p kindnet-571803 sudo                               | kindnet-571803 | jenkins | v1.33.1 | 19 Aug 24 19:00 UTC | 19 Aug 24 19:00 UTC |
	|         | systemctl cat crio --no-pager                        |                |         |         |                     |                     |
	| ssh     | -p kindnet-571803 sudo find                          | kindnet-571803 | jenkins | v1.33.1 | 19 Aug 24 19:00 UTC | 19 Aug 24 19:00 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                |         |         |                     |                     |
	| ssh     | -p kindnet-571803 sudo crio                          | kindnet-571803 | jenkins | v1.33.1 | 19 Aug 24 19:00 UTC | 19 Aug 24 19:00 UTC |
	|         | config                                               |                |         |         |                     |                     |
	| delete  | -p kindnet-571803                                    | kindnet-571803 | jenkins | v1.33.1 | 19 Aug 24 19:00 UTC | 19 Aug 24 19:00 UTC |
	| start   | -p flannel-571803                                    | flannel-571803 | jenkins | v1.33.1 | 19 Aug 24 19:00 UTC |                     |
	|         | --memory=3072                                        |                |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                |         |         |                     |                     |
	|         | --cni=flannel --driver=kvm2                          |                |         |         |                     |                     |
	|         | --container-runtime=crio                             |                |         |         |                     |                     |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 19:00:20
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 19:00:20.234789  427182 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:00:20.235256  427182 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:00:20.235277  427182 out.go:358] Setting ErrFile to fd 2...
	I0819 19:00:20.235285  427182 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:00:20.236503  427182 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-372744/.minikube/bin
	I0819 19:00:20.237490  427182 out.go:352] Setting JSON to false
	I0819 19:00:20.239065  427182 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":9763,"bootTime":1724084257,"procs":295,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 19:00:20.239129  427182 start.go:139] virtualization: kvm guest
	I0819 19:00:20.241412  427182 out.go:177] * [flannel-571803] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 19:00:20.242903  427182 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 19:00:20.242992  427182 notify.go:220] Checking for updates...
	I0819 19:00:20.245699  427182 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 19:00:20.247195  427182 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 19:00:20.248377  427182 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 19:00:20.249624  427182 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 19:00:20.251047  427182 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 19:00:20.252900  427182 config.go:182] Loaded profile config "calico-571803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:00:20.253032  427182 config.go:182] Loaded profile config "custom-flannel-571803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:00:20.253166  427182 config.go:182] Loaded profile config "kubernetes-upgrade-127646": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:00:20.253281  427182 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 19:00:20.290384  427182 out.go:177] * Using the kvm2 driver based on user configuration
	I0819 19:00:20.291800  427182 start.go:297] selected driver: kvm2
	I0819 19:00:20.291822  427182 start.go:901] validating driver "kvm2" against <nil>
	I0819 19:00:20.291857  427182 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 19:00:20.292848  427182 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 19:00:20.292970  427182 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19468-372744/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 19:00:20.313730  427182 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 19:00:20.313791  427182 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 19:00:20.314014  427182 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 19:00:20.314084  427182 cni.go:84] Creating CNI manager for "flannel"
	I0819 19:00:20.314097  427182 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0819 19:00:20.314144  427182 start.go:340] cluster config:
	{Name:flannel-571803 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-571803 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:00:20.314241  427182 iso.go:125] acquiring lock: {Name:mk4c0ac1c3202b1a296739df622960e7a0bd8566 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 19:00:20.316090  427182 out.go:177] * Starting "flannel-571803" primary control-plane node in "flannel-571803" cluster
	I0819 19:00:16.544695  424989 out.go:235]   - Booting up control plane ...
	I0819 19:00:16.544823  424989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 19:00:16.544940  424989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 19:00:16.546042  424989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 19:00:16.577144  424989 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 19:00:16.590821  424989 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 19:00:16.590909  424989 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 19:00:16.751012  424989 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 19:00:16.751167  424989 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 19:00:17.752725  424989 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001630733s
	I0819 19:00:17.752837  424989 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 19:00:17.383191  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | domain custom-flannel-571803 has defined MAC address 52:54:00:a7:6b:85 in network mk-custom-flannel-571803
	I0819 19:00:17.383945  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | unable to find current IP address of domain custom-flannel-571803 in network mk-custom-flannel-571803
	I0819 19:00:17.383975  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | I0819 19:00:17.383826  426417 retry.go:31] will retry after 737.908322ms: waiting for machine to come up
	I0819 19:00:18.124002  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | domain custom-flannel-571803 has defined MAC address 52:54:00:a7:6b:85 in network mk-custom-flannel-571803
	I0819 19:00:18.124478  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | unable to find current IP address of domain custom-flannel-571803 in network mk-custom-flannel-571803
	I0819 19:00:18.124503  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | I0819 19:00:18.124443  426417 retry.go:31] will retry after 614.154642ms: waiting for machine to come up
	I0819 19:00:18.740736  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | domain custom-flannel-571803 has defined MAC address 52:54:00:a7:6b:85 in network mk-custom-flannel-571803
	I0819 19:00:18.740931  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | unable to find current IP address of domain custom-flannel-571803 in network mk-custom-flannel-571803
	I0819 19:00:18.740947  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | I0819 19:00:18.740848  426417 retry.go:31] will retry after 1.005289566s: waiting for machine to come up
	I0819 19:00:19.970595  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | domain custom-flannel-571803 has defined MAC address 52:54:00:a7:6b:85 in network mk-custom-flannel-571803
	I0819 19:00:19.971182  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | unable to find current IP address of domain custom-flannel-571803 in network mk-custom-flannel-571803
	I0819 19:00:19.971214  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | I0819 19:00:19.971121  426417 retry.go:31] will retry after 1.14410976s: waiting for machine to come up
	I0819 19:00:21.117007  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | domain custom-flannel-571803 has defined MAC address 52:54:00:a7:6b:85 in network mk-custom-flannel-571803
	I0819 19:00:21.117585  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | unable to find current IP address of domain custom-flannel-571803 in network mk-custom-flannel-571803
	I0819 19:00:21.117615  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | I0819 19:00:21.117529  426417 retry.go:31] will retry after 1.388670612s: waiting for machine to come up
	I0819 19:00:23.255402  424989 kubeadm.go:310] [api-check] The API server is healthy after 5.501299243s
	I0819 19:00:23.273831  424989 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 19:00:23.295886  424989 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 19:00:23.329301  424989 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 19:00:23.329585  424989 kubeadm.go:310] [mark-control-plane] Marking the node calico-571803 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 19:00:23.347594  424989 kubeadm.go:310] [bootstrap-token] Using token: ntm4sn.fa1qvspz78sbmfdi
	I0819 19:00:23.349252  424989 out.go:235]   - Configuring RBAC rules ...
	I0819 19:00:23.349403  424989 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 19:00:23.356331  424989 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 19:00:23.369104  424989 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 19:00:23.373810  424989 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 19:00:23.381647  424989 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 19:00:23.385672  424989 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 19:00:23.661708  424989 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 19:00:24.092858  424989 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 19:00:24.661691  424989 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 19:00:24.661747  424989 kubeadm.go:310] 
	I0819 19:00:24.661824  424989 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 19:00:24.661854  424989 kubeadm.go:310] 
	I0819 19:00:24.661991  424989 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 19:00:24.662005  424989 kubeadm.go:310] 
	I0819 19:00:24.662048  424989 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 19:00:24.662136  424989 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 19:00:24.662209  424989 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 19:00:24.662226  424989 kubeadm.go:310] 
	I0819 19:00:24.662307  424989 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 19:00:24.662316  424989 kubeadm.go:310] 
	I0819 19:00:24.662381  424989 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 19:00:24.662407  424989 kubeadm.go:310] 
	I0819 19:00:24.662476  424989 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 19:00:24.662586  424989 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 19:00:24.662690  424989 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 19:00:24.662703  424989 kubeadm.go:310] 
	I0819 19:00:24.662842  424989 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 19:00:24.662946  424989 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 19:00:24.662956  424989 kubeadm.go:310] 
	I0819 19:00:24.663065  424989 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ntm4sn.fa1qvspz78sbmfdi \
	I0819 19:00:24.663209  424989 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3fcbd90565c5acbc36a47b2db682cb22dce9b172c9bf3af21e506ebb67608039 \
	I0819 19:00:24.663237  424989 kubeadm.go:310] 	--control-plane 
	I0819 19:00:24.663248  424989 kubeadm.go:310] 
	I0819 19:00:24.663366  424989 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 19:00:24.663384  424989 kubeadm.go:310] 
	I0819 19:00:24.663482  424989 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ntm4sn.fa1qvspz78sbmfdi \
	I0819 19:00:24.663633  424989 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3fcbd90565c5acbc36a47b2db682cb22dce9b172c9bf3af21e506ebb67608039 
	I0819 19:00:24.664370  424989 kubeadm.go:310] W0819 19:00:13.366084     854 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 19:00:24.664781  424989 kubeadm.go:310] W0819 19:00:13.367087     854 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 19:00:24.664929  424989 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 19:00:24.664998  424989 cni.go:84] Creating CNI manager for "calico"
	I0819 19:00:24.666837  424989 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0819 19:00:20.317544  427182 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 19:00:20.317593  427182 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 19:00:20.317601  427182 cache.go:56] Caching tarball of preloaded images
	I0819 19:00:20.317710  427182 preload.go:172] Found /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 19:00:20.317725  427182 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 19:00:20.317849  427182 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/flannel-571803/config.json ...
	I0819 19:00:20.317875  427182 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/flannel-571803/config.json: {Name:mkc8e459a042f3bcf440e736ccad5166b936e718 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:00:20.318047  427182 start.go:360] acquireMachinesLock for flannel-571803: {Name:mk24ba67a747357e9ce40f1e460d2bb0bc59cc75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 19:00:24.668549  424989 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0819 19:00:24.668568  424989 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (253923 bytes)
	I0819 19:00:24.693148  424989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0819 19:00:26.068386  424989 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.375194264s)
	I0819 19:00:26.068438  424989 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 19:00:26.068557  424989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:00:26.068557  424989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-571803 minikube.k8s.io/updated_at=2024_08_19T19_00_26_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411 minikube.k8s.io/name=calico-571803 minikube.k8s.io/primary=true
	I0819 19:00:26.087798  424989 ops.go:34] apiserver oom_adj: -16
	I0819 19:00:26.234278  424989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:00:22.508370  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | domain custom-flannel-571803 has defined MAC address 52:54:00:a7:6b:85 in network mk-custom-flannel-571803
	I0819 19:00:22.509047  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | unable to find current IP address of domain custom-flannel-571803 in network mk-custom-flannel-571803
	I0819 19:00:22.509077  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | I0819 19:00:22.508999  426417 retry.go:31] will retry after 1.90176567s: waiting for machine to come up
	I0819 19:00:24.412562  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | domain custom-flannel-571803 has defined MAC address 52:54:00:a7:6b:85 in network mk-custom-flannel-571803
	I0819 19:00:24.413087  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | unable to find current IP address of domain custom-flannel-571803 in network mk-custom-flannel-571803
	I0819 19:00:24.413111  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | I0819 19:00:24.413036  426417 retry.go:31] will retry after 2.23558322s: waiting for machine to come up
	I0819 19:00:26.651501  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | domain custom-flannel-571803 has defined MAC address 52:54:00:a7:6b:85 in network mk-custom-flannel-571803
	I0819 19:00:26.652101  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | unable to find current IP address of domain custom-flannel-571803 in network mk-custom-flannel-571803
	I0819 19:00:26.652130  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | I0819 19:00:26.652044  426417 retry.go:31] will retry after 2.619500836s: waiting for machine to come up
	I0819 19:00:26.735266  424989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:00:27.235050  424989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:00:27.734341  424989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:00:28.234624  424989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:00:28.735116  424989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:00:29.234302  424989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:00:29.377920  424989 kubeadm.go:1113] duration metric: took 3.309436966s to wait for elevateKubeSystemPrivileges
	I0819 19:00:29.377962  424989 kubeadm.go:394] duration metric: took 16.240196171s to StartCluster
	I0819 19:00:29.377996  424989 settings.go:142] acquiring lock: {Name:mk396fcf49a1d0e69583cf37ff3c819e37118163 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:00:29.378121  424989 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 19:00:29.379233  424989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/kubeconfig: {Name:mk8e7b4e1bb7da665111d2acd83eb48882c66853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:00:29.379487  424989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 19:00:29.379517  424989 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.242 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 19:00:29.379643  424989 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 19:00:29.379758  424989 addons.go:69] Setting storage-provisioner=true in profile "calico-571803"
	I0819 19:00:29.379800  424989 addons.go:234] Setting addon storage-provisioner=true in "calico-571803"
	I0819 19:00:29.379825  424989 config.go:182] Loaded profile config "calico-571803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:00:29.379812  424989 addons.go:69] Setting default-storageclass=true in profile "calico-571803"
	I0819 19:00:29.379895  424989 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-571803"
	I0819 19:00:29.379834  424989 host.go:66] Checking if "calico-571803" exists ...
	I0819 19:00:29.380370  424989 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:00:29.380403  424989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:00:29.380426  424989 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:00:29.380460  424989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:00:29.381135  424989 out.go:177] * Verifying Kubernetes components...
	I0819 19:00:29.382581  424989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:00:29.401467  424989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39345
	I0819 19:00:29.401520  424989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39407
	I0819 19:00:29.401963  424989 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:00:29.402054  424989 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:00:29.402504  424989 main.go:141] libmachine: Using API Version  1
	I0819 19:00:29.402526  424989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:00:29.402639  424989 main.go:141] libmachine: Using API Version  1
	I0819 19:00:29.402664  424989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:00:29.402899  424989 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:00:29.403050  424989 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:00:29.403523  424989 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:00:29.403555  424989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:00:29.404228  424989 main.go:141] libmachine: (calico-571803) Calling .GetState
	I0819 19:00:29.410329  424989 addons.go:234] Setting addon default-storageclass=true in "calico-571803"
	I0819 19:00:29.410369  424989 host.go:66] Checking if "calico-571803" exists ...
	I0819 19:00:29.410638  424989 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:00:29.410666  424989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:00:29.421396  424989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35797
	I0819 19:00:29.421920  424989 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:00:29.422442  424989 main.go:141] libmachine: Using API Version  1
	I0819 19:00:29.422460  424989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:00:29.422855  424989 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:00:29.423091  424989 main.go:141] libmachine: (calico-571803) Calling .GetState
	I0819 19:00:29.424987  424989 main.go:141] libmachine: (calico-571803) Calling .DriverName
	I0819 19:00:29.426659  424989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34817
	I0819 19:00:29.426967  424989 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:00:29.427044  424989 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:00:29.427601  424989 main.go:141] libmachine: Using API Version  1
	I0819 19:00:29.427631  424989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:00:29.427947  424989 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:00:29.428240  424989 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:00:29.428256  424989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 19:00:29.428272  424989 main.go:141] libmachine: (calico-571803) Calling .GetSSHHostname
	I0819 19:00:29.428579  424989 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:00:29.428619  424989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:00:29.431400  424989 main.go:141] libmachine: (calico-571803) DBG | domain calico-571803 has defined MAC address 52:54:00:55:a7:50 in network mk-calico-571803
	I0819 19:00:29.431891  424989 main.go:141] libmachine: (calico-571803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:a7:50", ip: ""} in network mk-calico-571803: {Iface:virbr3 ExpiryTime:2024-08-19 19:59:57 +0000 UTC Type:0 Mac:52:54:00:55:a7:50 Iaid: IPaddr:192.168.61.242 Prefix:24 Hostname:calico-571803 Clientid:01:52:54:00:55:a7:50}
	I0819 19:00:29.431925  424989 main.go:141] libmachine: (calico-571803) DBG | domain calico-571803 has defined IP address 192.168.61.242 and MAC address 52:54:00:55:a7:50 in network mk-calico-571803
	I0819 19:00:29.432092  424989 main.go:141] libmachine: (calico-571803) Calling .GetSSHPort
	I0819 19:00:29.432318  424989 main.go:141] libmachine: (calico-571803) Calling .GetSSHKeyPath
	I0819 19:00:29.432501  424989 main.go:141] libmachine: (calico-571803) Calling .GetSSHUsername
	I0819 19:00:29.432675  424989 sshutil.go:53] new ssh client: &{IP:192.168.61.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/calico-571803/id_rsa Username:docker}
	I0819 19:00:29.445849  424989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46851
	I0819 19:00:29.446378  424989 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:00:29.446965  424989 main.go:141] libmachine: Using API Version  1
	I0819 19:00:29.446989  424989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:00:29.447317  424989 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:00:29.447517  424989 main.go:141] libmachine: (calico-571803) Calling .GetState
	I0819 19:00:29.449248  424989 main.go:141] libmachine: (calico-571803) Calling .DriverName
	I0819 19:00:29.449556  424989 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 19:00:29.449579  424989 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 19:00:29.449602  424989 main.go:141] libmachine: (calico-571803) Calling .GetSSHHostname
	I0819 19:00:29.452684  424989 main.go:141] libmachine: (calico-571803) DBG | domain calico-571803 has defined MAC address 52:54:00:55:a7:50 in network mk-calico-571803
	I0819 19:00:29.453162  424989 main.go:141] libmachine: (calico-571803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:a7:50", ip: ""} in network mk-calico-571803: {Iface:virbr3 ExpiryTime:2024-08-19 19:59:57 +0000 UTC Type:0 Mac:52:54:00:55:a7:50 Iaid: IPaddr:192.168.61.242 Prefix:24 Hostname:calico-571803 Clientid:01:52:54:00:55:a7:50}
	I0819 19:00:29.453193  424989 main.go:141] libmachine: (calico-571803) DBG | domain calico-571803 has defined IP address 192.168.61.242 and MAC address 52:54:00:55:a7:50 in network mk-calico-571803
	I0819 19:00:29.453336  424989 main.go:141] libmachine: (calico-571803) Calling .GetSSHPort
	I0819 19:00:29.453549  424989 main.go:141] libmachine: (calico-571803) Calling .GetSSHKeyPath
	I0819 19:00:29.453739  424989 main.go:141] libmachine: (calico-571803) Calling .GetSSHUsername
	I0819 19:00:29.453916  424989 sshutil.go:53] new ssh client: &{IP:192.168.61.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/calico-571803/id_rsa Username:docker}
	I0819 19:00:29.624305  424989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:00:29.624380  424989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0819 19:00:29.782978  424989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 19:00:29.786015  424989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:00:30.189396  424989 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0819 19:00:30.190107  424989 main.go:141] libmachine: Making call to close driver server
	I0819 19:00:30.190212  424989 main.go:141] libmachine: (calico-571803) Calling .Close
	I0819 19:00:30.190674  424989 main.go:141] libmachine: (calico-571803) DBG | Closing plugin on server side
	I0819 19:00:30.190730  424989 node_ready.go:35] waiting up to 15m0s for node "calico-571803" to be "Ready" ...
	I0819 19:00:30.190782  424989 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:00:30.190807  424989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:00:30.190846  424989 main.go:141] libmachine: Making call to close driver server
	I0819 19:00:30.190869  424989 main.go:141] libmachine: (calico-571803) Calling .Close
	I0819 19:00:30.191130  424989 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:00:30.191148  424989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:00:30.221962  424989 main.go:141] libmachine: Making call to close driver server
	I0819 19:00:30.222003  424989 main.go:141] libmachine: (calico-571803) Calling .Close
	I0819 19:00:30.222338  424989 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:00:30.222358  424989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:00:30.222384  424989 main.go:141] libmachine: (calico-571803) DBG | Closing plugin on server side
	I0819 19:00:30.484972  424989 main.go:141] libmachine: Making call to close driver server
	I0819 19:00:30.485002  424989 main.go:141] libmachine: (calico-571803) Calling .Close
	I0819 19:00:30.485311  424989 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:00:30.485328  424989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:00:30.485336  424989 main.go:141] libmachine: Making call to close driver server
	I0819 19:00:30.485344  424989 main.go:141] libmachine: (calico-571803) Calling .Close
	I0819 19:00:30.485644  424989 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:00:30.485664  424989 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:00:30.487622  424989 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0819 19:00:30.488872  424989 addons.go:510] duration metric: took 1.109236669s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0819 19:00:30.694133  424989 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-571803" context rescaled to 1 replicas
	I0819 19:00:29.272845  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | domain custom-flannel-571803 has defined MAC address 52:54:00:a7:6b:85 in network mk-custom-flannel-571803
	I0819 19:00:29.273429  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | unable to find current IP address of domain custom-flannel-571803 in network mk-custom-flannel-571803
	I0819 19:00:29.273459  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | I0819 19:00:29.273374  426417 retry.go:31] will retry after 3.526018147s: waiting for machine to come up
	I0819 19:00:32.193958  424989 node_ready.go:53] node "calico-571803" has status "Ready":"False"
	I0819 19:00:34.194664  424989 node_ready.go:53] node "calico-571803" has status "Ready":"False"
	I0819 19:00:36.194806  424989 node_ready.go:53] node "calico-571803" has status "Ready":"False"
	I0819 19:00:32.800823  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | domain custom-flannel-571803 has defined MAC address 52:54:00:a7:6b:85 in network mk-custom-flannel-571803
	I0819 19:00:32.801312  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | unable to find current IP address of domain custom-flannel-571803 in network mk-custom-flannel-571803
	I0819 19:00:32.801349  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | I0819 19:00:32.801257  426417 retry.go:31] will retry after 3.463292219s: waiting for machine to come up
	I0819 19:00:36.268517  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | domain custom-flannel-571803 has defined MAC address 52:54:00:a7:6b:85 in network mk-custom-flannel-571803
	I0819 19:00:36.269211  425784 main.go:141] libmachine: (custom-flannel-571803) Found IP for machine: 192.168.50.217
	I0819 19:00:36.269247  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | domain custom-flannel-571803 has current primary IP address 192.168.50.217 and MAC address 52:54:00:a7:6b:85 in network mk-custom-flannel-571803
	I0819 19:00:36.269257  425784 main.go:141] libmachine: (custom-flannel-571803) Reserving static IP address...
	I0819 19:00:36.269594  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | unable to find host DHCP lease matching {name: "custom-flannel-571803", mac: "52:54:00:a7:6b:85", ip: "192.168.50.217"} in network mk-custom-flannel-571803
	I0819 19:00:36.350642  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | Getting to WaitForSSH function...
	I0819 19:00:36.350675  425784 main.go:141] libmachine: (custom-flannel-571803) Reserved static IP address: 192.168.50.217
	I0819 19:00:36.350690  425784 main.go:141] libmachine: (custom-flannel-571803) Waiting for SSH to be available...
	I0819 19:00:36.353690  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | domain custom-flannel-571803 has defined MAC address 52:54:00:a7:6b:85 in network mk-custom-flannel-571803
	I0819 19:00:36.354078  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:a7:6b:85", ip: ""} in network mk-custom-flannel-571803
	I0819 19:00:36.354108  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | unable to find defined IP address of network mk-custom-flannel-571803 interface with MAC address 52:54:00:a7:6b:85
	I0819 19:00:36.354345  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | Using SSH client type: external
	I0819 19:00:36.354376  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | Using SSH private key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/custom-flannel-571803/id_rsa (-rw-------)
	I0819 19:00:36.354406  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19468-372744/.minikube/machines/custom-flannel-571803/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 19:00:36.354421  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | About to run SSH command:
	I0819 19:00:36.354437  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | exit 0
	I0819 19:00:36.358166  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | SSH cmd err, output: exit status 255: 
	I0819 19:00:36.358191  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0819 19:00:36.358217  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | command : exit 0
	I0819 19:00:36.358226  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | err     : exit status 255
	I0819 19:00:36.358238  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | output  : 
	I0819 19:00:38.353715  425214 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 1769a85d48846f40b1e6eae5c255e7d7072e7323a484aac6b3d650afddf16fe5 6b72e004b8b08a3f79ef0a1e74949a313f877ef366ff327140f8cf509c86d534 ee0ec838ce8cfbbe48310687e3edd7d47de6f462020d5cfc488ca94cdd254a0e addc11f75f4b1c2af731ecf589cd4818327befcbaefe1a3056065c68653b65d4 903ee1f45ddab3a04ab0218d58cbbb177b0ee6d4e37bdc0c47a28491d10333ca 22c750175f88702154820a972e7bdbdce59398d37086742e273be8fc9a50c135 671095fc1372899e5460c393e92506e8f9186223fbb2963fee023939bc9a7d9a d44b07994d9b7e3e8be8f2262a08758339e98082d8d99aef19f595c346ba962c 38081dca9bb995a3c844aeedcdb4f097ab4275df3926d5eddd4f21c024fae0ca f5f5d3b753b9d77035a3bb7a72cbf91fbd86d2dedfd62bdcb2c9a83b9ab479f1 08b7a52624000a865230eb164d8924c25a891a1271c9e5d09c52ec9f99a5dd71 5dca437e379e5955f158b58e719d4081ff5709e9766f663c58c65a819ffca1fd 43f5314dcab5f9620e43d843820ea019245a1847c22f3afd9339d27c166cc50d: (19.899281728s)
	W0819 19:00:38.353823  425214 kubeadm.go:644] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 1769a85d48846f40b1e6eae5c255e7d7072e7323a484aac6b3d650afddf16fe5 6b72e004b8b08a3f79ef0a1e74949a313f877ef366ff327140f8cf509c86d534 ee0ec838ce8cfbbe48310687e3edd7d47de6f462020d5cfc488ca94cdd254a0e addc11f75f4b1c2af731ecf589cd4818327befcbaefe1a3056065c68653b65d4 903ee1f45ddab3a04ab0218d58cbbb177b0ee6d4e37bdc0c47a28491d10333ca 22c750175f88702154820a972e7bdbdce59398d37086742e273be8fc9a50c135 671095fc1372899e5460c393e92506e8f9186223fbb2963fee023939bc9a7d9a d44b07994d9b7e3e8be8f2262a08758339e98082d8d99aef19f595c346ba962c 38081dca9bb995a3c844aeedcdb4f097ab4275df3926d5eddd4f21c024fae0ca f5f5d3b753b9d77035a3bb7a72cbf91fbd86d2dedfd62bdcb2c9a83b9ab479f1 08b7a52624000a865230eb164d8924c25a891a1271c9e5d09c52ec9f99a5dd71 5dca437e379e5955f158b58e719d4081ff5709e9766f663c58c65a819ffca1fd 43f5314dcab5f9620e43d843820ea019245a1847c22f3afd9339d27c166cc50d: Process exited with status 1
	stdout:
	1769a85d48846f40b1e6eae5c255e7d7072e7323a484aac6b3d650afddf16fe5
	6b72e004b8b08a3f79ef0a1e74949a313f877ef366ff327140f8cf509c86d534
	ee0ec838ce8cfbbe48310687e3edd7d47de6f462020d5cfc488ca94cdd254a0e
	addc11f75f4b1c2af731ecf589cd4818327befcbaefe1a3056065c68653b65d4
	903ee1f45ddab3a04ab0218d58cbbb177b0ee6d4e37bdc0c47a28491d10333ca
	22c750175f88702154820a972e7bdbdce59398d37086742e273be8fc9a50c135
	
	stderr:
	E0819 19:00:38.339957    3214 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"671095fc1372899e5460c393e92506e8f9186223fbb2963fee023939bc9a7d9a\": container with ID starting with 671095fc1372899e5460c393e92506e8f9186223fbb2963fee023939bc9a7d9a not found: ID does not exist" containerID="671095fc1372899e5460c393e92506e8f9186223fbb2963fee023939bc9a7d9a"
	time="2024-08-19T19:00:38Z" level=fatal msg="stopping the container \"671095fc1372899e5460c393e92506e8f9186223fbb2963fee023939bc9a7d9a\": rpc error: code = NotFound desc = could not find container \"671095fc1372899e5460c393e92506e8f9186223fbb2963fee023939bc9a7d9a\": container with ID starting with 671095fc1372899e5460c393e92506e8f9186223fbb2963fee023939bc9a7d9a not found: ID does not exist"
	I0819 19:00:38.353885  425214 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 19:00:38.401584  425214 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:00:38.412331  425214 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5647 Aug 19 18:59 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 Aug 19 18:59 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5759 Aug 19 18:59 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5602 Aug 19 18:59 /etc/kubernetes/scheduler.conf
	
	I0819 19:00:38.412407  425214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 19:00:38.422096  425214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 19:00:38.431640  425214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 19:00:38.440715  425214 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:00:38.440780  425214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:00:38.450302  425214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 19:00:38.459396  425214 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:00:38.459465  425214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
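For reference, the kubeconfig check above boils down to "keep the file only if it already points at the expected control-plane endpoint, otherwise delete it so kubeadm regenerates it". A minimal shell sketch of that logic (paths and endpoint are taken from the log; the loop itself is illustrative, not minikube's actual code):

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  if ! sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f"; then
	    sudo rm -f "/etc/kubernetes/$f"   # kubeadm regenerates it in the kubeconfig phase below
	  fi
	done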
	I0819 19:00:38.469603  425214 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 19:00:38.479500  425214 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:00:38.539739  425214 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:00:40.832887  427182 start.go:364] duration metric: took 20.514798495s to acquireMachinesLock for "flannel-571803"
	I0819 19:00:40.832957  427182 start.go:93] Provisioning new machine with config: &{Name:flannel-571803 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-571803 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 19:00:40.833150  427182 start.go:125] createHost starting for "" (driver="kvm2")
	I0819 19:00:37.204146  424989 node_ready.go:49] node "calico-571803" has status "Ready":"True"
	I0819 19:00:37.204170  424989 node_ready.go:38] duration metric: took 7.013384331s for node "calico-571803" to be "Ready" ...
	I0819 19:00:37.204178  424989 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:00:37.228117  424989 pod_ready.go:79] waiting up to 15m0s for pod "calico-kube-controllers-7fbd86d5c5-cftrf" in "kube-system" namespace to be "Ready" ...
	I0819 19:00:39.236027  424989 pod_ready.go:103] pod "calico-kube-controllers-7fbd86d5c5-cftrf" in "kube-system" namespace has status "Ready":"False"
	I0819 19:00:39.358994  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | Getting to WaitForSSH function...
	I0819 19:00:39.361494  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | domain custom-flannel-571803 has defined MAC address 52:54:00:a7:6b:85 in network mk-custom-flannel-571803
	I0819 19:00:39.361846  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:6b:85", ip: ""} in network mk-custom-flannel-571803: {Iface:virbr2 ExpiryTime:2024-08-19 20:00:30 +0000 UTC Type:0 Mac:52:54:00:a7:6b:85 Iaid: IPaddr:192.168.50.217 Prefix:24 Hostname:custom-flannel-571803 Clientid:01:52:54:00:a7:6b:85}
	I0819 19:00:39.361876  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | domain custom-flannel-571803 has defined IP address 192.168.50.217 and MAC address 52:54:00:a7:6b:85 in network mk-custom-flannel-571803
	I0819 19:00:39.361960  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | Using SSH client type: external
	I0819 19:00:39.361996  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | Using SSH private key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/custom-flannel-571803/id_rsa (-rw-------)
	I0819 19:00:39.362027  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.217 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19468-372744/.minikube/machines/custom-flannel-571803/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 19:00:39.362041  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | About to run SSH command:
	I0819 19:00:39.362057  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | exit 0
	I0819 19:00:39.492678  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | SSH cmd err, output: <nil>: 
	I0819 19:00:39.492950  425784 main.go:141] libmachine: (custom-flannel-571803) KVM machine creation complete!
	I0819 19:00:39.493320  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetConfigRaw
	I0819 19:00:39.494032  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .DriverName
	I0819 19:00:39.494276  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .DriverName
	I0819 19:00:39.494477  425784 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 19:00:39.494514  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetState
	I0819 19:00:39.496235  425784 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 19:00:39.496253  425784 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 19:00:39.496267  425784 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 19:00:39.496275  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetSSHHostname
	I0819 19:00:39.499036  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | domain custom-flannel-571803 has defined MAC address 52:54:00:a7:6b:85 in network mk-custom-flannel-571803
	I0819 19:00:39.499463  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:6b:85", ip: ""} in network mk-custom-flannel-571803: {Iface:virbr2 ExpiryTime:2024-08-19 20:00:30 +0000 UTC Type:0 Mac:52:54:00:a7:6b:85 Iaid: IPaddr:192.168.50.217 Prefix:24 Hostname:custom-flannel-571803 Clientid:01:52:54:00:a7:6b:85}
	I0819 19:00:39.499501  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | domain custom-flannel-571803 has defined IP address 192.168.50.217 and MAC address 52:54:00:a7:6b:85 in network mk-custom-flannel-571803
	I0819 19:00:39.499689  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetSSHPort
	I0819 19:00:39.499875  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetSSHKeyPath
	I0819 19:00:39.500067  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetSSHKeyPath
	I0819 19:00:39.500267  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetSSHUsername
	I0819 19:00:39.500447  425784 main.go:141] libmachine: Using SSH client type: native
	I0819 19:00:39.500716  425784 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.217 22 <nil> <nil>}
	I0819 19:00:39.500731  425784 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 19:00:39.607870  425784 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:00:39.607899  425784 main.go:141] libmachine: Detecting the provisioner...
	I0819 19:00:39.607912  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetSSHHostname
	I0819 19:00:39.611215  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | domain custom-flannel-571803 has defined MAC address 52:54:00:a7:6b:85 in network mk-custom-flannel-571803
	I0819 19:00:39.611648  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:6b:85", ip: ""} in network mk-custom-flannel-571803: {Iface:virbr2 ExpiryTime:2024-08-19 20:00:30 +0000 UTC Type:0 Mac:52:54:00:a7:6b:85 Iaid: IPaddr:192.168.50.217 Prefix:24 Hostname:custom-flannel-571803 Clientid:01:52:54:00:a7:6b:85}
	I0819 19:00:39.611693  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | domain custom-flannel-571803 has defined IP address 192.168.50.217 and MAC address 52:54:00:a7:6b:85 in network mk-custom-flannel-571803
	I0819 19:00:39.611881  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetSSHPort
	I0819 19:00:39.612126  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetSSHKeyPath
	I0819 19:00:39.612346  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetSSHKeyPath
	I0819 19:00:39.612541  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetSSHUsername
	I0819 19:00:39.612724  425784 main.go:141] libmachine: Using SSH client type: native
	I0819 19:00:39.612967  425784 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.217 22 <nil> <nil>}
	I0819 19:00:39.612980  425784 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 19:00:39.724751  425784 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 19:00:39.724840  425784 main.go:141] libmachine: found compatible host: buildroot
	I0819 19:00:39.724851  425784 main.go:141] libmachine: Provisioning with buildroot...
	I0819 19:00:39.724863  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetMachineName
	I0819 19:00:39.725145  425784 buildroot.go:166] provisioning hostname "custom-flannel-571803"
	I0819 19:00:39.725171  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetMachineName
	I0819 19:00:39.725404  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetSSHHostname
	I0819 19:00:39.728639  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | domain custom-flannel-571803 has defined MAC address 52:54:00:a7:6b:85 in network mk-custom-flannel-571803
	I0819 19:00:39.728987  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:6b:85", ip: ""} in network mk-custom-flannel-571803: {Iface:virbr2 ExpiryTime:2024-08-19 20:00:30 +0000 UTC Type:0 Mac:52:54:00:a7:6b:85 Iaid: IPaddr:192.168.50.217 Prefix:24 Hostname:custom-flannel-571803 Clientid:01:52:54:00:a7:6b:85}
	I0819 19:00:39.729020  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | domain custom-flannel-571803 has defined IP address 192.168.50.217 and MAC address 52:54:00:a7:6b:85 in network mk-custom-flannel-571803
	I0819 19:00:39.729288  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetSSHPort
	I0819 19:00:39.729508  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetSSHKeyPath
	I0819 19:00:39.729701  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetSSHKeyPath
	I0819 19:00:39.729911  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetSSHUsername
	I0819 19:00:39.730108  425784 main.go:141] libmachine: Using SSH client type: native
	I0819 19:00:39.730346  425784 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.217 22 <nil> <nil>}
	I0819 19:00:39.730365  425784 main.go:141] libmachine: About to run SSH command:
	sudo hostname custom-flannel-571803 && echo "custom-flannel-571803" | sudo tee /etc/hostname
	I0819 19:00:39.866290  425784 main.go:141] libmachine: SSH cmd err, output: <nil>: custom-flannel-571803
	
	I0819 19:00:39.866386  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetSSHHostname
	I0819 19:00:39.869701  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | domain custom-flannel-571803 has defined MAC address 52:54:00:a7:6b:85 in network mk-custom-flannel-571803
	I0819 19:00:39.870122  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:6b:85", ip: ""} in network mk-custom-flannel-571803: {Iface:virbr2 ExpiryTime:2024-08-19 20:00:30 +0000 UTC Type:0 Mac:52:54:00:a7:6b:85 Iaid: IPaddr:192.168.50.217 Prefix:24 Hostname:custom-flannel-571803 Clientid:01:52:54:00:a7:6b:85}
	I0819 19:00:39.870194  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | domain custom-flannel-571803 has defined IP address 192.168.50.217 and MAC address 52:54:00:a7:6b:85 in network mk-custom-flannel-571803
	I0819 19:00:39.870315  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetSSHPort
	I0819 19:00:39.870536  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetSSHKeyPath
	I0819 19:00:39.870739  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetSSHKeyPath
	I0819 19:00:39.870908  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetSSHUsername
	I0819 19:00:39.871111  425784 main.go:141] libmachine: Using SSH client type: native
	I0819 19:00:39.871362  425784 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.217 22 <nil> <nil>}
	I0819 19:00:39.871390  425784 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-571803' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-571803/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-571803' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 19:00:40.004544  425784 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:00:40.004580  425784 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19468-372744/.minikube CaCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19468-372744/.minikube}
	I0819 19:00:40.004641  425784 buildroot.go:174] setting up certificates
	I0819 19:00:40.004657  425784 provision.go:84] configureAuth start
	I0819 19:00:40.004675  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetMachineName
	I0819 19:00:40.004985  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetIP
	I0819 19:00:40.008161  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | domain custom-flannel-571803 has defined MAC address 52:54:00:a7:6b:85 in network mk-custom-flannel-571803
	I0819 19:00:40.008590  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:6b:85", ip: ""} in network mk-custom-flannel-571803: {Iface:virbr2 ExpiryTime:2024-08-19 20:00:30 +0000 UTC Type:0 Mac:52:54:00:a7:6b:85 Iaid: IPaddr:192.168.50.217 Prefix:24 Hostname:custom-flannel-571803 Clientid:01:52:54:00:a7:6b:85}
	I0819 19:00:40.008631  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | domain custom-flannel-571803 has defined IP address 192.168.50.217 and MAC address 52:54:00:a7:6b:85 in network mk-custom-flannel-571803
	I0819 19:00:40.008857  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetSSHHostname
	I0819 19:00:40.011783  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | domain custom-flannel-571803 has defined MAC address 52:54:00:a7:6b:85 in network mk-custom-flannel-571803
	I0819 19:00:40.012151  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:6b:85", ip: ""} in network mk-custom-flannel-571803: {Iface:virbr2 ExpiryTime:2024-08-19 20:00:30 +0000 UTC Type:0 Mac:52:54:00:a7:6b:85 Iaid: IPaddr:192.168.50.217 Prefix:24 Hostname:custom-flannel-571803 Clientid:01:52:54:00:a7:6b:85}
	I0819 19:00:40.012199  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | domain custom-flannel-571803 has defined IP address 192.168.50.217 and MAC address 52:54:00:a7:6b:85 in network mk-custom-flannel-571803
	I0819 19:00:40.012461  425784 provision.go:143] copyHostCerts
	I0819 19:00:40.012525  425784 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem, removing ...
	I0819 19:00:40.012548  425784 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 19:00:40.012634  425784 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem (1123 bytes)
	I0819 19:00:40.012779  425784 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem, removing ...
	I0819 19:00:40.012792  425784 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 19:00:40.012827  425784 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem (1675 bytes)
	I0819 19:00:40.012917  425784 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem, removing ...
	I0819 19:00:40.012928  425784 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 19:00:40.012956  425784 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem (1082 bytes)
	I0819 19:00:40.013047  425784 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-571803 san=[127.0.0.1 192.168.50.217 custom-flannel-571803 localhost minikube]
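minikube generates this server certificate in Go (provision.go), but an openssl equivalent signed by the same local CA would look roughly like this; the org and SANs come from the log line above, while the key size, file names, and the 1095-day lifetime (minikube's usual 26280h cert expiration) are illustrative assumptions:

	# sketch only: sign a server cert for the machine with the SANs listed above
	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.custom-flannel-571803/CN=minikube"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -out server.pem -days 1095 \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.50.217,DNS:custom-flannel-571803,DNS:localhost,DNS:minikube')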
	I0819 19:00:40.077058  425784 provision.go:177] copyRemoteCerts
	I0819 19:00:40.077122  425784 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 19:00:40.077151  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetSSHHostname
	I0819 19:00:40.079982  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | domain custom-flannel-571803 has defined MAC address 52:54:00:a7:6b:85 in network mk-custom-flannel-571803
	I0819 19:00:40.080416  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:6b:85", ip: ""} in network mk-custom-flannel-571803: {Iface:virbr2 ExpiryTime:2024-08-19 20:00:30 +0000 UTC Type:0 Mac:52:54:00:a7:6b:85 Iaid: IPaddr:192.168.50.217 Prefix:24 Hostname:custom-flannel-571803 Clientid:01:52:54:00:a7:6b:85}
	I0819 19:00:40.080447  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | domain custom-flannel-571803 has defined IP address 192.168.50.217 and MAC address 52:54:00:a7:6b:85 in network mk-custom-flannel-571803
	I0819 19:00:40.080658  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetSSHPort
	I0819 19:00:40.080892  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetSSHKeyPath
	I0819 19:00:40.081076  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetSSHUsername
	I0819 19:00:40.081257  425784 sshutil.go:53] new ssh client: &{IP:192.168.50.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/custom-flannel-571803/id_rsa Username:docker}
	I0819 19:00:40.170418  425784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 19:00:40.199418  425784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 19:00:40.226829  425784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0819 19:00:40.254710  425784 provision.go:87] duration metric: took 250.033223ms to configureAuth
	I0819 19:00:40.254766  425784 buildroot.go:189] setting minikube options for container-runtime
	I0819 19:00:40.255004  425784 config.go:182] Loaded profile config "custom-flannel-571803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:00:40.255112  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetSSHHostname
	I0819 19:00:40.258392  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | domain custom-flannel-571803 has defined MAC address 52:54:00:a7:6b:85 in network mk-custom-flannel-571803
	I0819 19:00:40.258833  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:6b:85", ip: ""} in network mk-custom-flannel-571803: {Iface:virbr2 ExpiryTime:2024-08-19 20:00:30 +0000 UTC Type:0 Mac:52:54:00:a7:6b:85 Iaid: IPaddr:192.168.50.217 Prefix:24 Hostname:custom-flannel-571803 Clientid:01:52:54:00:a7:6b:85}
	I0819 19:00:40.258873  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | domain custom-flannel-571803 has defined IP address 192.168.50.217 and MAC address 52:54:00:a7:6b:85 in network mk-custom-flannel-571803
	I0819 19:00:40.259212  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetSSHPort
	I0819 19:00:40.259412  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetSSHKeyPath
	I0819 19:00:40.259609  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetSSHKeyPath
	I0819 19:00:40.259740  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetSSHUsername
	I0819 19:00:40.259900  425784 main.go:141] libmachine: Using SSH client type: native
	I0819 19:00:40.260125  425784 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.217 22 <nil> <nil>}
	I0819 19:00:40.260148  425784 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 19:00:40.561731  425784 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 19:00:40.561766  425784 main.go:141] libmachine: Checking connection to Docker...
	I0819 19:00:40.561779  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetURL
	I0819 19:00:40.563276  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | Using libvirt version 6000000
	I0819 19:00:40.566084  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | domain custom-flannel-571803 has defined MAC address 52:54:00:a7:6b:85 in network mk-custom-flannel-571803
	I0819 19:00:40.566499  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:6b:85", ip: ""} in network mk-custom-flannel-571803: {Iface:virbr2 ExpiryTime:2024-08-19 20:00:30 +0000 UTC Type:0 Mac:52:54:00:a7:6b:85 Iaid: IPaddr:192.168.50.217 Prefix:24 Hostname:custom-flannel-571803 Clientid:01:52:54:00:a7:6b:85}
	I0819 19:00:40.566542  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | domain custom-flannel-571803 has defined IP address 192.168.50.217 and MAC address 52:54:00:a7:6b:85 in network mk-custom-flannel-571803
	I0819 19:00:40.566825  425784 main.go:141] libmachine: Docker is up and running!
	I0819 19:00:40.566839  425784 main.go:141] libmachine: Reticulating splines...
	I0819 19:00:40.566848  425784 client.go:171] duration metric: took 27.461854645s to LocalClient.Create
	I0819 19:00:40.566874  425784 start.go:167] duration metric: took 27.461921285s to libmachine.API.Create "custom-flannel-571803"
	I0819 19:00:40.566888  425784 start.go:293] postStartSetup for "custom-flannel-571803" (driver="kvm2")
	I0819 19:00:40.566901  425784 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 19:00:40.566924  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .DriverName
	I0819 19:00:40.567220  425784 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 19:00:40.567253  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetSSHHostname
	I0819 19:00:40.569615  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | domain custom-flannel-571803 has defined MAC address 52:54:00:a7:6b:85 in network mk-custom-flannel-571803
	I0819 19:00:40.569994  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:6b:85", ip: ""} in network mk-custom-flannel-571803: {Iface:virbr2 ExpiryTime:2024-08-19 20:00:30 +0000 UTC Type:0 Mac:52:54:00:a7:6b:85 Iaid: IPaddr:192.168.50.217 Prefix:24 Hostname:custom-flannel-571803 Clientid:01:52:54:00:a7:6b:85}
	I0819 19:00:40.570021  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | domain custom-flannel-571803 has defined IP address 192.168.50.217 and MAC address 52:54:00:a7:6b:85 in network mk-custom-flannel-571803
	I0819 19:00:40.570208  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetSSHPort
	I0819 19:00:40.570390  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetSSHKeyPath
	I0819 19:00:40.570553  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetSSHUsername
	I0819 19:00:40.570704  425784 sshutil.go:53] new ssh client: &{IP:192.168.50.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/custom-flannel-571803/id_rsa Username:docker}
	I0819 19:00:40.658758  425784 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 19:00:40.663502  425784 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 19:00:40.663537  425784 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/addons for local assets ...
	I0819 19:00:40.663619  425784 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/files for local assets ...
	I0819 19:00:40.663755  425784 filesync.go:149] local asset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> 3800092.pem in /etc/ssl/certs
	I0819 19:00:40.663875  425784 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 19:00:40.673937  425784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:00:40.705096  425784 start.go:296] duration metric: took 138.191141ms for postStartSetup
	I0819 19:00:40.705212  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetConfigRaw
	I0819 19:00:40.705877  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetIP
	I0819 19:00:40.709096  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | domain custom-flannel-571803 has defined MAC address 52:54:00:a7:6b:85 in network mk-custom-flannel-571803
	I0819 19:00:40.709483  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:6b:85", ip: ""} in network mk-custom-flannel-571803: {Iface:virbr2 ExpiryTime:2024-08-19 20:00:30 +0000 UTC Type:0 Mac:52:54:00:a7:6b:85 Iaid: IPaddr:192.168.50.217 Prefix:24 Hostname:custom-flannel-571803 Clientid:01:52:54:00:a7:6b:85}
	I0819 19:00:40.709513  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | domain custom-flannel-571803 has defined IP address 192.168.50.217 and MAC address 52:54:00:a7:6b:85 in network mk-custom-flannel-571803
	I0819 19:00:40.709866  425784 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/custom-flannel-571803/config.json ...
	I0819 19:00:40.710114  425784 start.go:128] duration metric: took 27.70063751s to createHost
	I0819 19:00:40.710146  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetSSHHostname
	I0819 19:00:40.713278  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | domain custom-flannel-571803 has defined MAC address 52:54:00:a7:6b:85 in network mk-custom-flannel-571803
	I0819 19:00:40.713676  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:6b:85", ip: ""} in network mk-custom-flannel-571803: {Iface:virbr2 ExpiryTime:2024-08-19 20:00:30 +0000 UTC Type:0 Mac:52:54:00:a7:6b:85 Iaid: IPaddr:192.168.50.217 Prefix:24 Hostname:custom-flannel-571803 Clientid:01:52:54:00:a7:6b:85}
	I0819 19:00:40.713719  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | domain custom-flannel-571803 has defined IP address 192.168.50.217 and MAC address 52:54:00:a7:6b:85 in network mk-custom-flannel-571803
	I0819 19:00:40.714002  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetSSHPort
	I0819 19:00:40.714202  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetSSHKeyPath
	I0819 19:00:40.714368  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetSSHKeyPath
	I0819 19:00:40.714540  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetSSHUsername
	I0819 19:00:40.714735  425784 main.go:141] libmachine: Using SSH client type: native
	I0819 19:00:40.714923  425784 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.217 22 <nil> <nil>}
	I0819 19:00:40.714943  425784 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 19:00:40.832705  425784 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724094040.816462944
	
	I0819 19:00:40.832735  425784 fix.go:216] guest clock: 1724094040.816462944
	I0819 19:00:40.832747  425784 fix.go:229] Guest: 2024-08-19 19:00:40.816462944 +0000 UTC Remote: 2024-08-19 19:00:40.710130118 +0000 UTC m=+33.766012900 (delta=106.332826ms)
	I0819 19:00:40.832775  425784 fix.go:200] guest clock delta is within tolerance: 106.332826ms
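(Worked out from the two timestamps above: 1724094040.816462944 − 1724094040.710130118 = 0.106332826 s ≈ 106.33 ms of guest clock skew, which is inside minikube's tolerance, so no clock adjustment is made.)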
	I0819 19:00:40.832781  425784 start.go:83] releasing machines lock for "custom-flannel-571803", held for 27.823513702s
	I0819 19:00:40.832816  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .DriverName
	I0819 19:00:40.833131  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetIP
	I0819 19:00:40.837789  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | domain custom-flannel-571803 has defined MAC address 52:54:00:a7:6b:85 in network mk-custom-flannel-571803
	I0819 19:00:40.838273  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:6b:85", ip: ""} in network mk-custom-flannel-571803: {Iface:virbr2 ExpiryTime:2024-08-19 20:00:30 +0000 UTC Type:0 Mac:52:54:00:a7:6b:85 Iaid: IPaddr:192.168.50.217 Prefix:24 Hostname:custom-flannel-571803 Clientid:01:52:54:00:a7:6b:85}
	I0819 19:00:40.838335  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | domain custom-flannel-571803 has defined IP address 192.168.50.217 and MAC address 52:54:00:a7:6b:85 in network mk-custom-flannel-571803
	I0819 19:00:40.838615  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .DriverName
	I0819 19:00:40.839247  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .DriverName
	I0819 19:00:40.839459  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .DriverName
	I0819 19:00:40.839578  425784 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 19:00:40.839630  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetSSHHostname
	I0819 19:00:40.839694  425784 ssh_runner.go:195] Run: cat /version.json
	I0819 19:00:40.839759  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetSSHHostname
	I0819 19:00:40.842687  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | domain custom-flannel-571803 has defined MAC address 52:54:00:a7:6b:85 in network mk-custom-flannel-571803
	I0819 19:00:40.843001  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | domain custom-flannel-571803 has defined MAC address 52:54:00:a7:6b:85 in network mk-custom-flannel-571803
	I0819 19:00:40.843038  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:6b:85", ip: ""} in network mk-custom-flannel-571803: {Iface:virbr2 ExpiryTime:2024-08-19 20:00:30 +0000 UTC Type:0 Mac:52:54:00:a7:6b:85 Iaid: IPaddr:192.168.50.217 Prefix:24 Hostname:custom-flannel-571803 Clientid:01:52:54:00:a7:6b:85}
	I0819 19:00:40.843052  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | domain custom-flannel-571803 has defined IP address 192.168.50.217 and MAC address 52:54:00:a7:6b:85 in network mk-custom-flannel-571803
	I0819 19:00:40.843205  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetSSHPort
	I0819 19:00:40.843437  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetSSHKeyPath
	I0819 19:00:40.843480  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:6b:85", ip: ""} in network mk-custom-flannel-571803: {Iface:virbr2 ExpiryTime:2024-08-19 20:00:30 +0000 UTC Type:0 Mac:52:54:00:a7:6b:85 Iaid: IPaddr:192.168.50.217 Prefix:24 Hostname:custom-flannel-571803 Clientid:01:52:54:00:a7:6b:85}
	I0819 19:00:40.843523  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | domain custom-flannel-571803 has defined IP address 192.168.50.217 and MAC address 52:54:00:a7:6b:85 in network mk-custom-flannel-571803
	I0819 19:00:40.843612  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetSSHUsername
	I0819 19:00:40.843814  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetSSHPort
	I0819 19:00:40.843828  425784 sshutil.go:53] new ssh client: &{IP:192.168.50.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/custom-flannel-571803/id_rsa Username:docker}
	I0819 19:00:40.843976  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetSSHKeyPath
	I0819 19:00:40.844113  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetSSHUsername
	I0819 19:00:40.844262  425784 sshutil.go:53] new ssh client: &{IP:192.168.50.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/custom-flannel-571803/id_rsa Username:docker}
	I0819 19:00:40.967421  425784 ssh_runner.go:195] Run: systemctl --version
	I0819 19:00:40.981937  425784 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 19:00:41.162909  425784 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 19:00:41.171208  425784 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 19:00:41.171395  425784 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 19:00:41.193616  425784 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 19:00:41.193653  425784 start.go:495] detecting cgroup driver to use...
	I0819 19:00:41.193748  425784 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 19:00:41.217468  425784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 19:00:41.237645  425784 docker.go:217] disabling cri-docker service (if available) ...
	I0819 19:00:41.237727  425784 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 19:00:41.258127  425784 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 19:00:41.279030  425784 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 19:00:41.454053  425784 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 19:00:41.656586  425784 docker.go:233] disabling docker service ...
	I0819 19:00:41.656666  425784 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 19:00:41.677915  425784 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 19:00:41.697072  425784 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 19:00:41.896452  425784 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 19:00:42.077606  425784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 19:00:42.097527  425784 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 19:00:42.126522  425784 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 19:00:42.126589  425784 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:00:42.140506  425784 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 19:00:42.140578  425784 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:00:42.155396  425784 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:00:42.170383  425784 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:00:42.185443  425784 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 19:00:42.200853  425784 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:00:42.215777  425784 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:00:42.239315  425784 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
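Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly these settings (reconstructed from the commands; the section headers follow CRI-O's usual config layout and are an assumption, not quoted from the VM):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]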
	I0819 19:00:42.254126  425784 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 19:00:42.267128  425784 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 19:00:42.267197  425784 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 19:00:42.284327  425784 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
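Because /proc/sys/net/bridge/ only exists once br_netfilter is loaded, the sysctl probe fails first and the module is then loaded explicitly. A quick manual check of the end state (illustrative, not part of the test run):

	lsmod | grep br_netfilter                  # module now loaded
	sysctl net.bridge.bridge-nf-call-iptables  # key exists and defaults to 1 once the module is in
	cat /proc/sys/net/ipv4/ip_forward          # prints 1 after the echo above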
	I0819 19:00:42.298500  425784 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:00:42.430448  425784 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 19:00:42.621291  425784 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 19:00:42.621382  425784 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 19:00:42.628670  425784 start.go:563] Will wait 60s for crictl version
	I0819 19:00:42.628767  425784 ssh_runner.go:195] Run: which crictl
	I0819 19:00:42.635222  425784 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 19:00:42.694598  425784 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 19:00:42.694742  425784 ssh_runner.go:195] Run: crio --version
	I0819 19:00:42.738507  425784 ssh_runner.go:195] Run: crio --version
	I0819 19:00:42.798462  425784 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 19:00:39.438683  425214 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:00:39.709809  425214 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:00:39.799650  425214 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:00:39.912739  425214 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:00:39.912841  425214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:00:40.412943  425214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:00:40.913801  425214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:00:40.939965  425214 api_server.go:72] duration metric: took 1.027238988s to wait for apiserver process to appear ...
	I0819 19:00:40.939993  425214 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:00:40.940016  425214 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8443/healthz ...
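The healthz wait is a plain HTTPS GET against the apiserver; checked by hand it would look roughly like this (illustrative; /healthz is readable anonymously by default, so no credentials are needed):

	curl -k https://192.168.72.104:8443/healthz   # a healthy apiserver answers HTTP 200 with body "ok"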
	I0819 19:00:40.835192  427182 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 19:00:40.835439  427182 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:00:40.835487  427182 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:00:40.859835  427182 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34701
	I0819 19:00:40.860648  427182 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:00:40.864281  427182 main.go:141] libmachine: Using API Version  1
	I0819 19:00:40.864307  427182 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:00:40.864728  427182 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:00:40.869094  427182 main.go:141] libmachine: (flannel-571803) Calling .GetMachineName
	I0819 19:00:40.869359  427182 main.go:141] libmachine: (flannel-571803) Calling .DriverName
	I0819 19:00:40.869559  427182 start.go:159] libmachine.API.Create for "flannel-571803" (driver="kvm2")
	I0819 19:00:40.869594  427182 client.go:168] LocalClient.Create starting
	I0819 19:00:40.869632  427182 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem
	I0819 19:00:40.869687  427182 main.go:141] libmachine: Decoding PEM data...
	I0819 19:00:40.869704  427182 main.go:141] libmachine: Parsing certificate...
	I0819 19:00:40.869774  427182 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem
	I0819 19:00:40.869792  427182 main.go:141] libmachine: Decoding PEM data...
	I0819 19:00:40.869804  427182 main.go:141] libmachine: Parsing certificate...
	I0819 19:00:40.869823  427182 main.go:141] libmachine: Running pre-create checks...
	I0819 19:00:40.869832  427182 main.go:141] libmachine: (flannel-571803) Calling .PreCreateCheck
	I0819 19:00:40.870385  427182 main.go:141] libmachine: (flannel-571803) Calling .GetConfigRaw
	I0819 19:00:40.870862  427182 main.go:141] libmachine: Creating machine...
	I0819 19:00:40.870881  427182 main.go:141] libmachine: (flannel-571803) Calling .Create
	I0819 19:00:40.872191  427182 main.go:141] libmachine: (flannel-571803) Creating KVM machine...
	I0819 19:00:40.874665  427182 main.go:141] libmachine: (flannel-571803) DBG | found existing default KVM network
	I0819 19:00:40.876519  427182 main.go:141] libmachine: (flannel-571803) DBG | I0819 19:00:40.876322  427334 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015920}
	I0819 19:00:40.876541  427182 main.go:141] libmachine: (flannel-571803) DBG | created network xml: 
	I0819 19:00:40.876562  427182 main.go:141] libmachine: (flannel-571803) DBG | <network>
	I0819 19:00:40.876570  427182 main.go:141] libmachine: (flannel-571803) DBG |   <name>mk-flannel-571803</name>
	I0819 19:00:40.876579  427182 main.go:141] libmachine: (flannel-571803) DBG |   <dns enable='no'/>
	I0819 19:00:40.876585  427182 main.go:141] libmachine: (flannel-571803) DBG |   
	I0819 19:00:40.876596  427182 main.go:141] libmachine: (flannel-571803) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0819 19:00:40.876604  427182 main.go:141] libmachine: (flannel-571803) DBG |     <dhcp>
	I0819 19:00:40.876614  427182 main.go:141] libmachine: (flannel-571803) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0819 19:00:40.876621  427182 main.go:141] libmachine: (flannel-571803) DBG |     </dhcp>
	I0819 19:00:40.876631  427182 main.go:141] libmachine: (flannel-571803) DBG |   </ip>
	I0819 19:00:40.876637  427182 main.go:141] libmachine: (flannel-571803) DBG |   
	I0819 19:00:40.876645  427182 main.go:141] libmachine: (flannel-571803) DBG | </network>
	I0819 19:00:40.876652  427182 main.go:141] libmachine: (flannel-571803) DBG | 
	I0819 19:00:40.882603  427182 main.go:141] libmachine: (flannel-571803) DBG | trying to create private KVM network mk-flannel-571803 192.168.39.0/24...
	I0819 19:00:40.965566  427182 main.go:141] libmachine: (flannel-571803) DBG | private KVM network mk-flannel-571803 192.168.39.0/24 created
	I0819 19:00:40.965605  427182 main.go:141] libmachine: (flannel-571803) Setting up store path in /home/jenkins/minikube-integration/19468-372744/.minikube/machines/flannel-571803 ...
	I0819 19:00:40.965743  427182 main.go:141] libmachine: (flannel-571803) Building disk image from file:///home/jenkins/minikube-integration/19468-372744/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 19:00:40.965775  427182 main.go:141] libmachine: (flannel-571803) DBG | I0819 19:00:40.965685  427334 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 19:00:40.965844  427182 main.go:141] libmachine: (flannel-571803) Downloading /home/jenkins/minikube-integration/19468-372744/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19468-372744/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 19:00:41.262338  427182 main.go:141] libmachine: (flannel-571803) DBG | I0819 19:00:41.262199  427334 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/flannel-571803/id_rsa...
	I0819 19:00:41.361912  427182 main.go:141] libmachine: (flannel-571803) DBG | I0819 19:00:41.361739  427334 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/flannel-571803/flannel-571803.rawdisk...
	I0819 19:00:41.361947  427182 main.go:141] libmachine: (flannel-571803) DBG | Writing magic tar header
	I0819 19:00:41.361980  427182 main.go:141] libmachine: (flannel-571803) DBG | Writing SSH key tar header
	I0819 19:00:41.361995  427182 main.go:141] libmachine: (flannel-571803) DBG | I0819 19:00:41.361857  427334 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19468-372744/.minikube/machines/flannel-571803 ...
	I0819 19:00:41.362012  427182 main.go:141] libmachine: (flannel-571803) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/flannel-571803
	I0819 19:00:41.362023  427182 main.go:141] libmachine: (flannel-571803) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744/.minikube/machines
	I0819 19:00:41.362036  427182 main.go:141] libmachine: (flannel-571803) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 19:00:41.362046  427182 main.go:141] libmachine: (flannel-571803) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744
	I0819 19:00:41.362058  427182 main.go:141] libmachine: (flannel-571803) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 19:00:41.362067  427182 main.go:141] libmachine: (flannel-571803) DBG | Checking permissions on dir: /home/jenkins
	I0819 19:00:41.362078  427182 main.go:141] libmachine: (flannel-571803) DBG | Checking permissions on dir: /home
	I0819 19:00:41.362090  427182 main.go:141] libmachine: (flannel-571803) DBG | Skipping /home - not owner
	I0819 19:00:41.362122  427182 main.go:141] libmachine: (flannel-571803) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744/.minikube/machines/flannel-571803 (perms=drwx------)
	I0819 19:00:41.362138  427182 main.go:141] libmachine: (flannel-571803) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744/.minikube/machines (perms=drwxr-xr-x)
	I0819 19:00:41.362165  427182 main.go:141] libmachine: (flannel-571803) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744/.minikube (perms=drwxr-xr-x)
	I0819 19:00:41.362184  427182 main.go:141] libmachine: (flannel-571803) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744 (perms=drwxrwxr-x)
	I0819 19:00:41.362198  427182 main.go:141] libmachine: (flannel-571803) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 19:00:41.362216  427182 main.go:141] libmachine: (flannel-571803) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 19:00:41.362229  427182 main.go:141] libmachine: (flannel-571803) Creating domain...
	I0819 19:00:41.363579  427182 main.go:141] libmachine: (flannel-571803) define libvirt domain using xml: 
	I0819 19:00:41.363611  427182 main.go:141] libmachine: (flannel-571803) <domain type='kvm'>
	I0819 19:00:41.363623  427182 main.go:141] libmachine: (flannel-571803)   <name>flannel-571803</name>
	I0819 19:00:41.363636  427182 main.go:141] libmachine: (flannel-571803)   <memory unit='MiB'>3072</memory>
	I0819 19:00:41.363645  427182 main.go:141] libmachine: (flannel-571803)   <vcpu>2</vcpu>
	I0819 19:00:41.363652  427182 main.go:141] libmachine: (flannel-571803)   <features>
	I0819 19:00:41.363661  427182 main.go:141] libmachine: (flannel-571803)     <acpi/>
	I0819 19:00:41.363684  427182 main.go:141] libmachine: (flannel-571803)     <apic/>
	I0819 19:00:41.363693  427182 main.go:141] libmachine: (flannel-571803)     <pae/>
	I0819 19:00:41.363700  427182 main.go:141] libmachine: (flannel-571803)     
	I0819 19:00:41.363709  427182 main.go:141] libmachine: (flannel-571803)   </features>
	I0819 19:00:41.363717  427182 main.go:141] libmachine: (flannel-571803)   <cpu mode='host-passthrough'>
	I0819 19:00:41.363724  427182 main.go:141] libmachine: (flannel-571803)   
	I0819 19:00:41.363736  427182 main.go:141] libmachine: (flannel-571803)   </cpu>
	I0819 19:00:41.363768  427182 main.go:141] libmachine: (flannel-571803)   <os>
	I0819 19:00:41.363789  427182 main.go:141] libmachine: (flannel-571803)     <type>hvm</type>
	I0819 19:00:41.363809  427182 main.go:141] libmachine: (flannel-571803)     <boot dev='cdrom'/>
	I0819 19:00:41.363820  427182 main.go:141] libmachine: (flannel-571803)     <boot dev='hd'/>
	I0819 19:00:41.363830  427182 main.go:141] libmachine: (flannel-571803)     <bootmenu enable='no'/>
	I0819 19:00:41.363840  427182 main.go:141] libmachine: (flannel-571803)   </os>
	I0819 19:00:41.363849  427182 main.go:141] libmachine: (flannel-571803)   <devices>
	I0819 19:00:41.363860  427182 main.go:141] libmachine: (flannel-571803)     <disk type='file' device='cdrom'>
	I0819 19:00:41.363875  427182 main.go:141] libmachine: (flannel-571803)       <source file='/home/jenkins/minikube-integration/19468-372744/.minikube/machines/flannel-571803/boot2docker.iso'/>
	I0819 19:00:41.363886  427182 main.go:141] libmachine: (flannel-571803)       <target dev='hdc' bus='scsi'/>
	I0819 19:00:41.363900  427182 main.go:141] libmachine: (flannel-571803)       <readonly/>
	I0819 19:00:41.363907  427182 main.go:141] libmachine: (flannel-571803)     </disk>
	I0819 19:00:41.363916  427182 main.go:141] libmachine: (flannel-571803)     <disk type='file' device='disk'>
	I0819 19:00:41.363925  427182 main.go:141] libmachine: (flannel-571803)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 19:00:41.363938  427182 main.go:141] libmachine: (flannel-571803)       <source file='/home/jenkins/minikube-integration/19468-372744/.minikube/machines/flannel-571803/flannel-571803.rawdisk'/>
	I0819 19:00:41.363946  427182 main.go:141] libmachine: (flannel-571803)       <target dev='hda' bus='virtio'/>
	I0819 19:00:41.363954  427182 main.go:141] libmachine: (flannel-571803)     </disk>
	I0819 19:00:41.363961  427182 main.go:141] libmachine: (flannel-571803)     <interface type='network'>
	I0819 19:00:41.363971  427182 main.go:141] libmachine: (flannel-571803)       <source network='mk-flannel-571803'/>
	I0819 19:00:41.363978  427182 main.go:141] libmachine: (flannel-571803)       <model type='virtio'/>
	I0819 19:00:41.363986  427182 main.go:141] libmachine: (flannel-571803)     </interface>
	I0819 19:00:41.363994  427182 main.go:141] libmachine: (flannel-571803)     <interface type='network'>
	I0819 19:00:41.364003  427182 main.go:141] libmachine: (flannel-571803)       <source network='default'/>
	I0819 19:00:41.364009  427182 main.go:141] libmachine: (flannel-571803)       <model type='virtio'/>
	I0819 19:00:41.364024  427182 main.go:141] libmachine: (flannel-571803)     </interface>
	I0819 19:00:41.364031  427182 main.go:141] libmachine: (flannel-571803)     <serial type='pty'>
	I0819 19:00:41.364040  427182 main.go:141] libmachine: (flannel-571803)       <target port='0'/>
	I0819 19:00:41.364046  427182 main.go:141] libmachine: (flannel-571803)     </serial>
	I0819 19:00:41.364055  427182 main.go:141] libmachine: (flannel-571803)     <console type='pty'>
	I0819 19:00:41.364062  427182 main.go:141] libmachine: (flannel-571803)       <target type='serial' port='0'/>
	I0819 19:00:41.364070  427182 main.go:141] libmachine: (flannel-571803)     </console>
	I0819 19:00:41.364088  427182 main.go:141] libmachine: (flannel-571803)     <rng model='virtio'>
	I0819 19:00:41.364098  427182 main.go:141] libmachine: (flannel-571803)       <backend model='random'>/dev/random</backend>
	I0819 19:00:41.364105  427182 main.go:141] libmachine: (flannel-571803)     </rng>
	I0819 19:00:41.364112  427182 main.go:141] libmachine: (flannel-571803)     
	I0819 19:00:41.364121  427182 main.go:141] libmachine: (flannel-571803)     
	I0819 19:00:41.364129  427182 main.go:141] libmachine: (flannel-571803)   </devices>
	I0819 19:00:41.364135  427182 main.go:141] libmachine: (flannel-571803) </domain>
	I0819 19:00:41.364145  427182 main.go:141] libmachine: (flannel-571803) 
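The <domain> document above is handed straight to libvirt, which defines and boots the VM. A minimal sketch of that step, assuming the libvirt.org/go/libvirt bindings (the connection URI matches the KVMQemuURI logged elsewhere in this run; the XML string is a placeholder, not the full definition):

    package main

    import (
        libvirt "libvirt.org/go/libvirt"
    )

    func main() {
        // Connect to the system libvirt daemon, as the kvm2 driver does.
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        // domainXML stands in for the full <domain> document shown in the log.
        domainXML := "<domain type='kvm'><name>flannel-571803</name>...</domain>"

        // Define the persistent domain from XML, then boot it.
        dom, err := conn.DomainDefineXML(domainXML)
        if err != nil {
            panic(err)
        }
        defer dom.Free()
        if err := dom.Create(); err != nil {
            panic(err)
        }
    }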
	I0819 19:00:41.368449  427182 main.go:141] libmachine: (flannel-571803) DBG | domain flannel-571803 has defined MAC address 52:54:00:9b:fb:cc in network default
	I0819 19:00:41.369187  427182 main.go:141] libmachine: (flannel-571803) Ensuring networks are active...
	I0819 19:00:41.369213  427182 main.go:141] libmachine: (flannel-571803) DBG | domain flannel-571803 has defined MAC address 52:54:00:55:a6:0c in network mk-flannel-571803
	I0819 19:00:41.370012  427182 main.go:141] libmachine: (flannel-571803) Ensuring network default is active
	I0819 19:00:41.370374  427182 main.go:141] libmachine: (flannel-571803) Ensuring network mk-flannel-571803 is active
	I0819 19:00:41.370981  427182 main.go:141] libmachine: (flannel-571803) Getting domain xml...
	I0819 19:00:41.371951  427182 main.go:141] libmachine: (flannel-571803) Creating domain...
	I0819 19:00:42.901394  427182 main.go:141] libmachine: (flannel-571803) Waiting to get IP...
	I0819 19:00:42.902682  427182 main.go:141] libmachine: (flannel-571803) DBG | domain flannel-571803 has defined MAC address 52:54:00:55:a6:0c in network mk-flannel-571803
	I0819 19:00:42.903406  427182 main.go:141] libmachine: (flannel-571803) DBG | unable to find current IP address of domain flannel-571803 in network mk-flannel-571803
	I0819 19:00:42.903424  427182 main.go:141] libmachine: (flannel-571803) DBG | I0819 19:00:42.903324  427334 retry.go:31] will retry after 311.159403ms: waiting for machine to come up
	I0819 19:00:43.216020  427182 main.go:141] libmachine: (flannel-571803) DBG | domain flannel-571803 has defined MAC address 52:54:00:55:a6:0c in network mk-flannel-571803
	I0819 19:00:43.216902  427182 main.go:141] libmachine: (flannel-571803) DBG | unable to find current IP address of domain flannel-571803 in network mk-flannel-571803
	I0819 19:00:43.216936  427182 main.go:141] libmachine: (flannel-571803) DBG | I0819 19:00:43.216811  427334 retry.go:31] will retry after 384.213116ms: waiting for machine to come up
	I0819 19:00:43.602615  427182 main.go:141] libmachine: (flannel-571803) DBG | domain flannel-571803 has defined MAC address 52:54:00:55:a6:0c in network mk-flannel-571803
	I0819 19:00:43.603369  427182 main.go:141] libmachine: (flannel-571803) DBG | unable to find current IP address of domain flannel-571803 in network mk-flannel-571803
	I0819 19:00:43.603398  427182 main.go:141] libmachine: (flannel-571803) DBG | I0819 19:00:43.603276  427334 retry.go:31] will retry after 324.122642ms: waiting for machine to come up
	I0819 19:00:43.928776  427182 main.go:141] libmachine: (flannel-571803) DBG | domain flannel-571803 has defined MAC address 52:54:00:55:a6:0c in network mk-flannel-571803
	I0819 19:00:43.929510  427182 main.go:141] libmachine: (flannel-571803) DBG | unable to find current IP address of domain flannel-571803 in network mk-flannel-571803
	I0819 19:00:43.929542  427182 main.go:141] libmachine: (flannel-571803) DBG | I0819 19:00:43.929422  427334 retry.go:31] will retry after 580.271445ms: waiting for machine to come up
	I0819 19:00:44.511347  427182 main.go:141] libmachine: (flannel-571803) DBG | domain flannel-571803 has defined MAC address 52:54:00:55:a6:0c in network mk-flannel-571803
	I0819 19:00:44.511979  427182 main.go:141] libmachine: (flannel-571803) DBG | unable to find current IP address of domain flannel-571803 in network mk-flannel-571803
	I0819 19:00:44.512008  427182 main.go:141] libmachine: (flannel-571803) DBG | I0819 19:00:44.511891  427334 retry.go:31] will retry after 693.246432ms: waiting for machine to come up
	I0819 19:00:45.207587  427182 main.go:141] libmachine: (flannel-571803) DBG | domain flannel-571803 has defined MAC address 52:54:00:55:a6:0c in network mk-flannel-571803
	I0819 19:00:45.208332  427182 main.go:141] libmachine: (flannel-571803) DBG | unable to find current IP address of domain flannel-571803 in network mk-flannel-571803
	I0819 19:00:45.208360  427182 main.go:141] libmachine: (flannel-571803) DBG | I0819 19:00:45.208223  427334 retry.go:31] will retry after 712.662363ms: waiting for machine to come up
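The repeated "will retry after ..." lines come from a polling loop that waits for the new guest to pick up a DHCP lease. A rough stand-alone sketch of that pattern (the function name, intervals, and fake lookup are illustrative, not the driver's actual code):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP keeps calling lookup until it returns an address, sleeping a
    // small randomized interval between attempts, much like the retry loop above.
    func waitForIP(lookup func() (string, error), attempts int) (string, error) {
        for i := 0; i < attempts; i++ {
            if ip, err := lookup(); err == nil {
                return ip, nil
            }
            delay := time.Duration(200+rand.Intn(600)) * time.Millisecond
            fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
            time.Sleep(delay)
        }
        return "", errors.New("machine never reported an IP")
    }

    func main() {
        // Simulated lookup that succeeds on the fourth attempt.
        attempt := 0
        ip, err := waitForIP(func() (string, error) {
            attempt++
            if attempt < 4 {
                return "", errors.New("unable to find current IP address")
            }
            return "192.168.61.2", nil
        }, 10)
        fmt.Println(ip, err)
    }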
	I0819 19:00:44.282869  425214 api_server.go:279] https://192.168.72.104:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 19:00:44.282920  425214 api_server.go:103] status: https://192.168.72.104:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 19:00:44.282936  425214 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8443/healthz ...
	I0819 19:00:44.309646  425214 api_server.go:279] https://192.168.72.104:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 19:00:44.309678  425214 api_server.go:103] status: https://192.168.72.104:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 19:00:44.440905  425214 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8443/healthz ...
	I0819 19:00:44.447105  425214 api_server.go:279] https://192.168.72.104:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:00:44.447150  425214 api_server.go:103] status: https://192.168.72.104:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
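The 403 and 500 responses above are expected while the restarted apiserver finishes its post-start hooks; the tooling simply keeps polling /healthz until it sees 200. A minimal stand-alone version of that probe (the TLS handling is deliberately simplified; the real check trusts the cluster CA rather than skipping verification):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Poll the apiserver health endpoint until it reports 200 OK.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for i := 0; i < 20; i++ {
            resp, err := client.Get("https://192.168.72.104:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                fmt.Printf("healthz: %d\n%s\n", resp.StatusCode, body)
                if resp.StatusCode == http.StatusOK {
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
    }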
	I0819 19:00:44.940421  425214 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8443/healthz ...
	I0819 19:00:44.951391  425214 api_server.go:279] https://192.168.72.104:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:00:44.951427  425214 api_server.go:103] status: https://192.168.72.104:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:00:45.440836  425214 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8443/healthz ...
	I0819 19:00:45.449101  425214 api_server.go:279] https://192.168.72.104:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:00:45.449133  425214 api_server.go:103] status: https://192.168.72.104:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:00:45.940179  425214 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8443/healthz ...
	I0819 19:00:45.946068  425214 api_server.go:279] https://192.168.72.104:8443/healthz returned 200:
	ok
	I0819 19:00:45.955493  425214 api_server.go:141] control plane version: v1.31.0
	I0819 19:00:45.955530  425214 api_server.go:131] duration metric: took 5.015529026s to wait for apiserver health ...
	I0819 19:00:45.955542  425214 cni.go:84] Creating CNI manager for ""
	I0819 19:00:45.955553  425214 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:00:45.957392  425214 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 19:00:41.735899  424989 pod_ready.go:103] pod "calico-kube-controllers-7fbd86d5c5-cftrf" in "kube-system" namespace has status "Ready":"False"
	I0819 19:00:44.243302  424989 pod_ready.go:103] pod "calico-kube-controllers-7fbd86d5c5-cftrf" in "kube-system" namespace has status "Ready":"False"
	I0819 19:00:42.799866  425784 main.go:141] libmachine: (custom-flannel-571803) Calling .GetIP
	I0819 19:00:42.803707  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | domain custom-flannel-571803 has defined MAC address 52:54:00:a7:6b:85 in network mk-custom-flannel-571803
	I0819 19:00:42.804188  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:6b:85", ip: ""} in network mk-custom-flannel-571803: {Iface:virbr2 ExpiryTime:2024-08-19 20:00:30 +0000 UTC Type:0 Mac:52:54:00:a7:6b:85 Iaid: IPaddr:192.168.50.217 Prefix:24 Hostname:custom-flannel-571803 Clientid:01:52:54:00:a7:6b:85}
	I0819 19:00:42.804222  425784 main.go:141] libmachine: (custom-flannel-571803) DBG | domain custom-flannel-571803 has defined IP address 192.168.50.217 and MAC address 52:54:00:a7:6b:85 in network mk-custom-flannel-571803
	I0819 19:00:42.804574  425784 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0819 19:00:42.810424  425784 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:00:42.827593  425784 kubeadm.go:883] updating cluster {Name:custom-flannel-571803 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-571803 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.50.217 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 19:00:42.827935  425784 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 19:00:42.828029  425784 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:00:42.866407  425784 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 19:00:42.866547  425784 ssh_runner.go:195] Run: which lz4
	I0819 19:00:42.872476  425784 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 19:00:42.879443  425784 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 19:00:42.879483  425784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 19:00:44.698018  425784 crio.go:462] duration metric: took 1.825603057s to copy over tarball
	I0819 19:00:44.698122  425784 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 19:00:45.958803  425214 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 19:00:45.973685  425214 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
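The 496-byte conflist itself is not echoed in the log. A representative bridge configuration of the kind written to /etc/cni/net.d/1-k8s.conflist, shown only to illustrate the shape of the file and not the exact bytes from this run:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }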
	I0819 19:00:45.998965  425214 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:00:45.999088  425214 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0819 19:00:45.999118  425214 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0819 19:00:46.012702  425214 system_pods.go:59] 8 kube-system pods found
	I0819 19:00:46.012741  425214 system_pods.go:61] "coredns-6f6b679f8f-55gvh" [85e38ada-38a7-483c-9e84-8459f659ec4e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 19:00:46.012752  425214 system_pods.go:61] "coredns-6f6b679f8f-jm7mn" [744db8f2-6033-403f-88c8-ba90643fe7f0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 19:00:46.012762  425214 system_pods.go:61] "etcd-kubernetes-upgrade-127646" [3ea93015-5d22-4e68-bb79-a5b5ecb39f53] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 19:00:46.012773  425214 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-127646" [af42386b-2806-453b-8507-2391e79216e9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 19:00:46.012783  425214 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-127646" [9b0d8744-17c4-4da1-a2ae-1ceac3bdebae] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 19:00:46.012795  425214 system_pods.go:61] "kube-proxy-w249t" [020032dd-a67f-49fc-a785-5bb2067bcd3d] Running
	I0819 19:00:46.012804  425214 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-127646" [bfdc5374-6153-4244-9439-6a4c25f7946d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 19:00:46.012816  425214 system_pods.go:61] "storage-provisioner" [307b9991-fa9b-43fc-bdbc-1e81d056af23] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0819 19:00:46.012830  425214 system_pods.go:74] duration metric: took 13.834876ms to wait for pod list to return data ...
	I0819 19:00:46.012843  425214 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:00:46.020777  425214 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:00:46.020809  425214 node_conditions.go:123] node cpu capacity is 2
	I0819 19:00:46.020827  425214 node_conditions.go:105] duration metric: took 7.974373ms to run NodePressure ...
	I0819 19:00:46.020850  425214 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:00:47.396529  425784 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.698367824s)
	I0819 19:00:47.396561  425784 crio.go:469] duration metric: took 2.698504193s to extract the tarball
	I0819 19:00:47.396573  425784 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 19:00:47.437393  425784 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:00:47.487016  425784 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 19:00:47.487047  425784 cache_images.go:84] Images are preloaded, skipping loading
	I0819 19:00:47.487057  425784 kubeadm.go:934] updating node { 192.168.50.217 8443 v1.31.0 crio true true} ...
	I0819 19:00:47.487211  425784 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=custom-flannel-571803 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.217
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-571803 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml}
	I0819 19:00:47.487314  425784 ssh_runner.go:195] Run: crio config
	I0819 19:00:47.544983  425784 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0819 19:00:47.545040  425784 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 19:00:47.545076  425784 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.217 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-571803 NodeName:custom-flannel-571803 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.217 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 19:00:47.545301  425784 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.217
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "custom-flannel-571803"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.217
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.217"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 19:00:47.545387  425784 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 19:00:47.559572  425784 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 19:00:47.559659  425784 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 19:00:47.573510  425784 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0819 19:00:47.591467  425784 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 19:00:47.610752  425784 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
	I0819 19:00:47.637696  425784 ssh_runner.go:195] Run: grep 192.168.50.217	control-plane.minikube.internal$ /etc/hosts
	I0819 19:00:47.642443  425784 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.217	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:00:47.655908  425784 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:00:47.786729  425784 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:00:47.806503  425784 certs.go:68] Setting up /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/custom-flannel-571803 for IP: 192.168.50.217
	I0819 19:00:47.806533  425784 certs.go:194] generating shared ca certs ...
	I0819 19:00:47.806554  425784 certs.go:226] acquiring lock for ca certs: {Name:mk639e03f593e0bccac045f6e9f5ba3b96cc81e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:00:47.806757  425784 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key
	I0819 19:00:47.806811  425784 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key
	I0819 19:00:47.806825  425784 certs.go:256] generating profile certs ...
	I0819 19:00:47.806918  425784 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/custom-flannel-571803/client.key
	I0819 19:00:47.806936  425784 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/custom-flannel-571803/client.crt with IP's: []
	I0819 19:00:47.946582  425784 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/custom-flannel-571803/client.crt ...
	I0819 19:00:47.946614  425784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/custom-flannel-571803/client.crt: {Name:mk749d0180f1ff3ffae0353a31bb75545099cdd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:00:47.946778  425784 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/custom-flannel-571803/client.key ...
	I0819 19:00:47.946791  425784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/custom-flannel-571803/client.key: {Name:mk3eb139413dfca4789525eedb9bfe628cac2292 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:00:47.946865  425784 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/custom-flannel-571803/apiserver.key.3b998634
	I0819 19:00:47.946881  425784 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/custom-flannel-571803/apiserver.crt.3b998634 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.217]
	I0819 19:00:48.063885  425784 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/custom-flannel-571803/apiserver.crt.3b998634 ...
	I0819 19:00:48.063917  425784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/custom-flannel-571803/apiserver.crt.3b998634: {Name:mk0297a05b7283cb5462e9685ddccb6311a765b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:00:48.064095  425784 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/custom-flannel-571803/apiserver.key.3b998634 ...
	I0819 19:00:48.064113  425784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/custom-flannel-571803/apiserver.key.3b998634: {Name:mk22967bc15402694e6bd7e70bba1eb665a2123b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:00:48.064236  425784 certs.go:381] copying /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/custom-flannel-571803/apiserver.crt.3b998634 -> /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/custom-flannel-571803/apiserver.crt
	I0819 19:00:48.064342  425784 certs.go:385] copying /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/custom-flannel-571803/apiserver.key.3b998634 -> /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/custom-flannel-571803/apiserver.key
	I0819 19:00:48.064405  425784 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/custom-flannel-571803/proxy-client.key
	I0819 19:00:48.064421  425784 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/custom-flannel-571803/proxy-client.crt with IP's: []
	I0819 19:00:48.193221  425784 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/custom-flannel-571803/proxy-client.crt ...
	I0819 19:00:48.193267  425784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/custom-flannel-571803/proxy-client.crt: {Name:mk10902398026c4d09b24a3e7343de90058ea5e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:00:48.193485  425784 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/custom-flannel-571803/proxy-client.key ...
	I0819 19:00:48.193508  425784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/custom-flannel-571803/proxy-client.key: {Name:mk1167cbadaa28d216f18f0946eab3fca7240f96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:00:48.193704  425784 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem (1338 bytes)
	W0819 19:00:48.193743  425784 certs.go:480] ignoring /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009_empty.pem, impossibly tiny 0 bytes
	I0819 19:00:48.193752  425784 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 19:00:48.193774  425784 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem (1082 bytes)
	I0819 19:00:48.193797  425784 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem (1123 bytes)
	I0819 19:00:48.193820  425784 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem (1675 bytes)
	I0819 19:00:48.193857  425784 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:00:48.194587  425784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 19:00:48.223246  425784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 19:00:48.252232  425784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 19:00:48.277684  425784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 19:00:48.304848  425784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/custom-flannel-571803/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0819 19:00:48.332786  425784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/custom-flannel-571803/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 19:00:48.360855  425784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/custom-flannel-571803/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 19:00:48.390712  425784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/custom-flannel-571803/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 19:00:48.458014  425784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem --> /usr/share/ca-certificates/380009.pem (1338 bytes)
	I0819 19:00:48.487624  425784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /usr/share/ca-certificates/3800092.pem (1708 bytes)
	I0819 19:00:48.534708  425784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 19:00:48.569627  425784 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 19:00:48.588596  425784 ssh_runner.go:195] Run: openssl version
	I0819 19:00:48.594840  425784 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/380009.pem && ln -fs /usr/share/ca-certificates/380009.pem /etc/ssl/certs/380009.pem"
	I0819 19:00:48.606089  425784 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/380009.pem
	I0819 19:00:48.611463  425784 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:56 /usr/share/ca-certificates/380009.pem
	I0819 19:00:48.611526  425784 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/380009.pem
	I0819 19:00:48.617832  425784 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/380009.pem /etc/ssl/certs/51391683.0"
	I0819 19:00:48.632213  425784 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3800092.pem && ln -fs /usr/share/ca-certificates/3800092.pem /etc/ssl/certs/3800092.pem"
	I0819 19:00:48.643603  425784 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3800092.pem
	I0819 19:00:48.648618  425784 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:56 /usr/share/ca-certificates/3800092.pem
	I0819 19:00:48.648715  425784 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3800092.pem
	I0819 19:00:48.654940  425784 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3800092.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 19:00:48.665919  425784 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 19:00:48.676750  425784 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:00:48.681462  425784 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:00:48.681535  425784 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:00:48.687264  425784 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 19:00:48.698150  425784 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 19:00:48.702723  425784 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 19:00:48.702793  425784 kubeadm.go:392] StartCluster: {Name:custom-flannel-571803 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-571803 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.50.217 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:00:48.702889  425784 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 19:00:48.702933  425784 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:00:48.739145  425784 cri.go:89] found id: ""
	I0819 19:00:48.739241  425784 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 19:00:48.749735  425784 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 19:00:48.759475  425784 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:00:48.769734  425784 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:00:48.769765  425784 kubeadm.go:157] found existing configuration files:
	
	I0819 19:00:48.769821  425784 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 19:00:48.779178  425784 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:00:48.779252  425784 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:00:48.789386  425784 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 19:00:48.799083  425784 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:00:48.799158  425784 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:00:48.809035  425784 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 19:00:48.818034  425784 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:00:48.818102  425784 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:00:48.828782  425784 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 19:00:48.838500  425784 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:00:48.838563  425784 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 19:00:48.850174  425784 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 19:00:48.915421  425784 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 19:00:48.915615  425784 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 19:00:49.030545  425784 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 19:00:49.030724  425784 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 19:00:49.030884  425784 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 19:00:49.042243  425784 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 19:00:49.931904  425214 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (3.911017856s)
	I0819 19:00:49.931963  425214 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 19:00:49.952917  425214 ops.go:34] apiserver oom_adj: -16
	I0819 19:00:49.952994  425214 kubeadm.go:597] duration metric: took 31.651806137s to restartPrimaryControlPlane
	I0819 19:00:49.953018  425214 kubeadm.go:394] duration metric: took 31.952712082s to StartCluster
	I0819 19:00:49.953050  425214 settings.go:142] acquiring lock: {Name:mk396fcf49a1d0e69583cf37ff3c819e37118163 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:00:49.953153  425214 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 19:00:49.954575  425214 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/kubeconfig: {Name:mk8e7b4e1bb7da665111d2acd83eb48882c66853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:00:49.955185  425214 config.go:182] Loaded profile config "kubernetes-upgrade-127646": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:00:49.955330  425214 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 19:00:49.955405  425214 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-127646"
	I0819 19:00:49.955440  425214 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-127646"
	W0819 19:00:49.955448  425214 addons.go:243] addon storage-provisioner should already be in state true
	I0819 19:00:49.955479  425214 host.go:66] Checking if "kubernetes-upgrade-127646" exists ...
	I0819 19:00:49.955897  425214 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:00:49.955923  425214 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:00:49.955927  425214 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-127646"
	I0819 19:00:49.955957  425214 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-127646"
	I0819 19:00:49.956339  425214 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:00:49.956372  425214 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:00:49.954973  425214 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.104 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 19:00:49.958678  425214 out.go:177] * Verifying Kubernetes components...
	I0819 19:00:49.960360  425214 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:00:49.982103  425214 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38847
	I0819 19:00:49.982436  425214 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43311
	I0819 19:00:49.982712  425214 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:00:49.982871  425214 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:00:49.983919  425214 main.go:141] libmachine: Using API Version  1
	I0819 19:00:49.983940  425214 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:00:49.984112  425214 main.go:141] libmachine: Using API Version  1
	I0819 19:00:49.984130  425214 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:00:49.984322  425214 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:00:49.984500  425214 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:00:49.984549  425214 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetState
	I0819 19:00:49.985110  425214 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:00:49.985151  425214 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:00:49.992400  425214 kapi.go:59] client config for kubernetes-upgrade-127646: &rest.Config{Host:"https://192.168.72.104:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kubernetes-upgrade-127646/client.crt", KeyFile:"/home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kubernetes-upgrade-127646/client.key", CAFile:"/home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(
nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f18d20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 19:00:49.992793  425214 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-127646"
	W0819 19:00:49.992813  425214 addons.go:243] addon default-storageclass should already be in state true
	I0819 19:00:49.992850  425214 host.go:66] Checking if "kubernetes-upgrade-127646" exists ...
	I0819 19:00:49.993237  425214 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:00:49.993277  425214 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:00:50.005893  425214 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43189
	I0819 19:00:50.006474  425214 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:00:50.007034  425214 main.go:141] libmachine: Using API Version  1
	I0819 19:00:50.007055  425214 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:00:50.007439  425214 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:00:50.007737  425214 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetState
	I0819 19:00:50.009665  425214 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .DriverName
	I0819 19:00:50.012697  425214 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:00:45.922129  427182 main.go:141] libmachine: (flannel-571803) DBG | domain flannel-571803 has defined MAC address 52:54:00:55:a6:0c in network mk-flannel-571803
	I0819 19:00:45.922765  427182 main.go:141] libmachine: (flannel-571803) DBG | unable to find current IP address of domain flannel-571803 in network mk-flannel-571803
	I0819 19:00:45.922817  427182 main.go:141] libmachine: (flannel-571803) DBG | I0819 19:00:45.922716  427334 retry.go:31] will retry after 1.010396193s: waiting for machine to come up
	I0819 19:00:46.935356  427182 main.go:141] libmachine: (flannel-571803) DBG | domain flannel-571803 has defined MAC address 52:54:00:55:a6:0c in network mk-flannel-571803
	I0819 19:00:46.936121  427182 main.go:141] libmachine: (flannel-571803) DBG | unable to find current IP address of domain flannel-571803 in network mk-flannel-571803
	I0819 19:00:46.936147  427182 main.go:141] libmachine: (flannel-571803) DBG | I0819 19:00:46.936016  427334 retry.go:31] will retry after 1.291237022s: waiting for machine to come up
	I0819 19:00:48.230397  427182 main.go:141] libmachine: (flannel-571803) DBG | domain flannel-571803 has defined MAC address 52:54:00:55:a6:0c in network mk-flannel-571803
	I0819 19:00:48.231053  427182 main.go:141] libmachine: (flannel-571803) DBG | unable to find current IP address of domain flannel-571803 in network mk-flannel-571803
	I0819 19:00:48.231079  427182 main.go:141] libmachine: (flannel-571803) DBG | I0819 19:00:48.230996  427334 retry.go:31] will retry after 1.78527027s: waiting for machine to come up
	I0819 19:00:50.024377  427182 main.go:141] libmachine: (flannel-571803) DBG | domain flannel-571803 has defined MAC address 52:54:00:55:a6:0c in network mk-flannel-571803
	I0819 19:00:50.026122  427182 main.go:141] libmachine: (flannel-571803) DBG | unable to find current IP address of domain flannel-571803 in network mk-flannel-571803
	I0819 19:00:50.026153  427182 main.go:141] libmachine: (flannel-571803) DBG | I0819 19:00:50.026063  427334 retry.go:31] will retry after 1.702259648s: waiting for machine to come up
	I0819 19:00:50.014730  425214 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:00:50.014750  425214 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 19:00:50.014775  425214 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHHostname
	I0819 19:00:50.015974  425214 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36261
	I0819 19:00:50.016511  425214 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:00:50.017134  425214 main.go:141] libmachine: Using API Version  1
	I0819 19:00:50.017152  425214 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:00:50.017574  425214 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:00:50.018240  425214 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:00:50.018280  425214 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:00:50.018482  425214 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 19:00:50.018507  425214 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:26:74", ip: ""} in network mk-kubernetes-upgrade-127646: {Iface:virbr4 ExpiryTime:2024-08-19 19:59:15 +0000 UTC Type:0 Mac:52:54:00:9a:26:74 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:kubernetes-upgrade-127646 Clientid:01:52:54:00:9a:26:74}
	I0819 19:00:50.018536  425214 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined IP address 192.168.72.104 and MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 19:00:50.018751  425214 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHPort
	I0819 19:00:50.019858  425214 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHKeyPath
	I0819 19:00:50.020032  425214 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHUsername
	I0819 19:00:50.020145  425214 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/kubernetes-upgrade-127646/id_rsa Username:docker}
	I0819 19:00:50.037668  425214 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39301
	I0819 19:00:50.038137  425214 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:00:50.038734  425214 main.go:141] libmachine: Using API Version  1
	I0819 19:00:50.038756  425214 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:00:50.039212  425214 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:00:50.039427  425214 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetState
	I0819 19:00:50.041198  425214 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .DriverName
	I0819 19:00:50.041537  425214 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 19:00:50.041553  425214 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 19:00:50.041572  425214 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHHostname
	I0819 19:00:50.044691  425214 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 19:00:50.045336  425214 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:26:74", ip: ""} in network mk-kubernetes-upgrade-127646: {Iface:virbr4 ExpiryTime:2024-08-19 19:59:15 +0000 UTC Type:0 Mac:52:54:00:9a:26:74 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:kubernetes-upgrade-127646 Clientid:01:52:54:00:9a:26:74}
	I0819 19:00:50.045358  425214 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | domain kubernetes-upgrade-127646 has defined IP address 192.168.72.104 and MAC address 52:54:00:9a:26:74 in network mk-kubernetes-upgrade-127646
	I0819 19:00:50.045585  425214 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHPort
	I0819 19:00:50.045781  425214 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHKeyPath
	I0819 19:00:50.045965  425214 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .GetSSHUsername
	I0819 19:00:50.046127  425214 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/kubernetes-upgrade-127646/id_rsa Username:docker}
	I0819 19:00:50.212863  425214 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:00:50.251164  425214 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:00:50.251248  425214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:00:50.278875  425214 api_server.go:72] duration metric: took 322.309036ms to wait for apiserver process to appear ...
	I0819 19:00:50.278910  425214 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:00:50.278934  425214 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8443/healthz ...
	I0819 19:00:50.289295  425214 api_server.go:279] https://192.168.72.104:8443/healthz returned 200:
	ok
	I0819 19:00:50.290557  425214 api_server.go:141] control plane version: v1.31.0
	I0819 19:00:50.290635  425214 api_server.go:131] duration metric: took 11.713813ms to wait for apiserver health ...
	I0819 19:00:50.290661  425214 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:00:50.301292  425214 system_pods.go:59] 8 kube-system pods found
	I0819 19:00:50.301328  425214 system_pods.go:61] "coredns-6f6b679f8f-55gvh" [85e38ada-38a7-483c-9e84-8459f659ec4e] Running
	I0819 19:00:50.301340  425214 system_pods.go:61] "coredns-6f6b679f8f-jm7mn" [744db8f2-6033-403f-88c8-ba90643fe7f0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 19:00:50.301353  425214 system_pods.go:61] "etcd-kubernetes-upgrade-127646" [3ea93015-5d22-4e68-bb79-a5b5ecb39f53] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 19:00:50.301364  425214 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-127646" [af42386b-2806-453b-8507-2391e79216e9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 19:00:50.301403  425214 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-127646" [9b0d8744-17c4-4da1-a2ae-1ceac3bdebae] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 19:00:50.301415  425214 system_pods.go:61] "kube-proxy-w249t" [020032dd-a67f-49fc-a785-5bb2067bcd3d] Running
	I0819 19:00:50.301425  425214 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-127646" [bfdc5374-6153-4244-9439-6a4c25f7946d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 19:00:50.301430  425214 system_pods.go:61] "storage-provisioner" [307b9991-fa9b-43fc-bdbc-1e81d056af23] Running
	I0819 19:00:50.301440  425214 system_pods.go:74] duration metric: took 10.759779ms to wait for pod list to return data ...
	I0819 19:00:50.301475  425214 kubeadm.go:582] duration metric: took 344.896856ms to wait for: map[apiserver:true system_pods:true]
	I0819 19:00:50.301497  425214 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:00:50.335823  425214 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:00:50.335863  425214 node_conditions.go:123] node cpu capacity is 2
	I0819 19:00:50.335881  425214 node_conditions.go:105] duration metric: took 34.377156ms to run NodePressure ...
	I0819 19:00:50.335899  425214 start.go:241] waiting for startup goroutines ...
	I0819 19:00:50.352074  425214 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 19:00:50.382614  425214 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:00:50.742568  425214 main.go:141] libmachine: Making call to close driver server
	I0819 19:00:50.742592  425214 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .Close
	I0819 19:00:50.742969  425214 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:00:50.742989  425214 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:00:50.743003  425214 main.go:141] libmachine: Making call to close driver server
	I0819 19:00:50.743012  425214 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .Close
	I0819 19:00:50.743299  425214 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:00:50.743322  425214 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:00:50.754301  425214 main.go:141] libmachine: Making call to close driver server
	I0819 19:00:50.754349  425214 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .Close
	I0819 19:00:50.754824  425214 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | Closing plugin on server side
	I0819 19:00:50.754872  425214 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:00:50.754889  425214 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:00:51.320310  425214 main.go:141] libmachine: Making call to close driver server
	I0819 19:00:51.320344  425214 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .Close
	I0819 19:00:51.320668  425214 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:00:51.320683  425214 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:00:51.320693  425214 main.go:141] libmachine: Making call to close driver server
	I0819 19:00:51.320702  425214 main.go:141] libmachine: (kubernetes-upgrade-127646) Calling .Close
	I0819 19:00:51.321132  425214 main.go:141] libmachine: (kubernetes-upgrade-127646) DBG | Closing plugin on server side
	I0819 19:00:51.321183  425214 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:00:51.321191  425214 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:00:51.323806  425214 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0819 19:00:51.325117  425214 addons.go:510] duration metric: took 1.36979288s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0819 19:00:51.325163  425214 start.go:246] waiting for cluster config update ...
	I0819 19:00:51.325179  425214 start.go:255] writing updated cluster config ...
	I0819 19:00:51.325487  425214 ssh_runner.go:195] Run: rm -f paused
	I0819 19:00:51.396448  425214 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 19:00:51.398074  425214 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-127646" cluster and "default" namespace by default
	I0819 19:00:46.851583  424989 pod_ready.go:103] pod "calico-kube-controllers-7fbd86d5c5-cftrf" in "kube-system" namespace has status "Ready":"False"
	I0819 19:00:49.696157  424989 pod_ready.go:103] pod "calico-kube-controllers-7fbd86d5c5-cftrf" in "kube-system" namespace has status "Ready":"False"
	
	
	==> CRI-O <==
	Aug 19 19:00:52 kubernetes-upgrade-127646 crio[2555]: time="2024-08-19 19:00:52.296196388Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094052296158879,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8ba29b39-117e-40d7-8901-9fae298f4b5f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:00:52 kubernetes-upgrade-127646 crio[2555]: time="2024-08-19 19:00:52.297447621Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d8bd0c02-5c8b-4ffe-aa89-1df654c7f42c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:00:52 kubernetes-upgrade-127646 crio[2555]: time="2024-08-19 19:00:52.297768204Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d8bd0c02-5c8b-4ffe-aa89-1df654c7f42c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:00:52 kubernetes-upgrade-127646 crio[2555]: time="2024-08-19 19:00:52.299213152Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:73f1ec173680d1504165ef0172c4143636ff45367ab03c6bb614d076c479c68d,PodSandboxId:c74d17c63858c2ed4cf1a45ae89d930c5728d7bca28a6b582e8129ee926ca1b6,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724094045183864332,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-55gvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85e38ada-38a7-483c-9e84-8459f659ec4e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:569cd03f9388278ccf33b8c40e91558ee4f63607db9d654904e14c98f4b22a5d,PodSandboxId:56427d814bc69faeabbff652db5b1d38bef5f12eac94e9da763fe97128234e21,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724094045191137716,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 307b9991-fa9b-43fc-bdbc-1e81d056af23,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bc9a999a0c507c8f28692b732bfff7e38562b8add7a688a83a8f8202a4aad5b,PodSandboxId:b080111636e4d5faa822efbc9cd9e73222cdabcc5742c8e57449971757b5ecaf,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724094040318475705,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-127646,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: e79a9a77108a4471ffe36a9451c04152,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0912ff28f8df34a814823a33ce06f605b65a4f018ae07fdaa99a081e61b76093,PodSandboxId:da4282d4b505c2167c0403f48b87f7f9af111e29c87644034b4c0611fb7f4562,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724094040368141520,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-127646,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 176d3321696d0866c0af3e0be85be813,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4e381223e9f4c041c5b736f2e74249c8df3fb64c6f073f2ee879ec0a092cd4d,PodSandboxId:16ced6d0bf3fae87dd053c6bebd812e78abd7ffa36b3c8c8cc0f66530dc012a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724094040335273595,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-127646,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90c
2580ea15b62d86e2d9439297c1a84,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c339ccac4b661eee54a5506c371216bf51a9cc205cbc076b3e3229c98effe89,PodSandboxId:1bfee774ab0295bfdd64d0b51e720a8e074aa6822d8c612d0a522bda906364d6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724094040329354675,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-127646,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 9246a9af7a5520a2c37306cf144346ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caa887d13670c85c8596bd4736afa530bfd96d5e8f3d02aa1fb666ba088163ca,PodSandboxId:56427d814bc69faeabbff652db5b1d38bef5f12eac94e9da763fe97128234e21,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724094029813340408,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 307b9991-fa9b-43fc-bdbc-1e81d056af23,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:436733a3f58aea9eb1e25d22069d292b3bc4928e150db54e7b6409a1b5b5a3a8,PodSandboxId:103948327d7d90edd731844cec229304fd6c152baa65d8fc6f427503cc534c30,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724094028808700430,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w249t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 020032dd-a67f-49fc-a785-5bb2067
bcd3d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ca83df3ebe3f995fb0cd56e8e8d49bd28e9de95ecd30bf1cdf09e6e0aa38159,PodSandboxId:d06e507c5f517bc563bb35c70a238d706e7c851c82248772fb1526ab6d2728d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724094018569268118,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jm7mn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 744db8f2-6033-403f-88c8-ba90643fe7f0,},Annotations:map[string]s
tring{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1769a85d48846f40b1e6eae5c255e7d7072e7323a484aac6b3d650afddf16fe5,PodSandboxId:c74d17c63858c2ed4cf1a45ae89d930c5728d7bca28a6b582e8129ee926ca1b6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724094018241522231,Labels:map[string]string{io.kubernetes.contai
ner.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-55gvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85e38ada-38a7-483c-9e84-8459f659ec4e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b72e004b8b08a3f79ef0a1e74949a313f877ef366ff327140f8cf509c86d534,PodSandboxId:b080111636e4d5faa822efbc9cd9e73222cdabcc5742c8e57449971757b5ecaf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724094017185653564,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-127646,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e79a9a77108a4471ffe36a9451c04152,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee0ec838ce8cfbbe48310687e3edd7d47de6f462020d5cfc488ca94cdd254a0e,PodSandboxId:da4282d4b505c2167c0403f48b87f7f9af111e29c87644034b4c0611fb7f4562,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRe
f:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724094016990857963,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-127646,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 176d3321696d0866c0af3e0be85be813,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:addc11f75f4b1c2af731ecf589cd4818327befcbaefe1a3056065c68653b65d4,PodSandboxId:16ced6d0bf3fae87dd053c6bebd812e78abd7ffa36b3c8c8cc0f66530dc012a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f
5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724094016540504226,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-127646,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90c2580ea15b62d86e2d9439297c1a84,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:903ee1f45ddab3a04ab0218d58cbbb177b0ee6d4e37bdc0c47a28491d10333ca,PodSandboxId:1bfee774ab0295bfdd64d0b51e720a8e074aa6822d8c612d0a522bda906364d6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0
45733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724094016457253437,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-127646,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9246a9af7a5520a2c37306cf144346ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d44b07994d9b7e3e8be8f2262a08758339e98082d8d99aef19f595c346ba962c,PodSandboxId:031ae5e9a22816790f4151d160b66dee354081edc2538193ab8e86b3a5242ec3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724093988251943014,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jm7mn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 744db8f2-6033-403f-88c8-ba90643fe7f0,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38081dca9bb995a3c844aeedcdb4f097ab4275df3926d5eddd4f21c024fae0ca,PodSandboxId:30ef8598ea1d72c7098ddd108455e3469fa95093ed48439ff990da30c7738fe8,Metadata:&ContainerMet
adata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724093987843650590,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w249t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 020032dd-a67f-49fc-a785-5bb2067bcd3d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d8bd0c02-5c8b-4ffe-aa89-1df654c7f42c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:00:52 kubernetes-upgrade-127646 crio[2555]: time="2024-08-19 19:00:52.366122097Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aa7e1015-e92c-46c4-a72a-85daa07c6ed8 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:00:52 kubernetes-upgrade-127646 crio[2555]: time="2024-08-19 19:00:52.366228036Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aa7e1015-e92c-46c4-a72a-85daa07c6ed8 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:00:52 kubernetes-upgrade-127646 crio[2555]: time="2024-08-19 19:00:52.368789207Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e93e32df-95d9-4f6e-be07-06c225d37405 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:00:52 kubernetes-upgrade-127646 crio[2555]: time="2024-08-19 19:00:52.369318841Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094052369285097,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e93e32df-95d9-4f6e-be07-06c225d37405 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:00:52 kubernetes-upgrade-127646 crio[2555]: time="2024-08-19 19:00:52.370443502Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ed4f7bb1-ad27-4596-b816-cc26abb7d948 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:00:52 kubernetes-upgrade-127646 crio[2555]: time="2024-08-19 19:00:52.370526328Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ed4f7bb1-ad27-4596-b816-cc26abb7d948 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:00:52 kubernetes-upgrade-127646 crio[2555]: time="2024-08-19 19:00:52.371431634Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:73f1ec173680d1504165ef0172c4143636ff45367ab03c6bb614d076c479c68d,PodSandboxId:c74d17c63858c2ed4cf1a45ae89d930c5728d7bca28a6b582e8129ee926ca1b6,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724094045183864332,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-55gvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85e38ada-38a7-483c-9e84-8459f659ec4e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:569cd03f9388278ccf33b8c40e91558ee4f63607db9d654904e14c98f4b22a5d,PodSandboxId:56427d814bc69faeabbff652db5b1d38bef5f12eac94e9da763fe97128234e21,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724094045191137716,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 307b9991-fa9b-43fc-bdbc-1e81d056af23,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bc9a999a0c507c8f28692b732bfff7e38562b8add7a688a83a8f8202a4aad5b,PodSandboxId:b080111636e4d5faa822efbc9cd9e73222cdabcc5742c8e57449971757b5ecaf,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724094040318475705,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-127646,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: e79a9a77108a4471ffe36a9451c04152,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0912ff28f8df34a814823a33ce06f605b65a4f018ae07fdaa99a081e61b76093,PodSandboxId:da4282d4b505c2167c0403f48b87f7f9af111e29c87644034b4c0611fb7f4562,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724094040368141520,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-127646,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 176d3321696d0866c0af3e0be85be813,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4e381223e9f4c041c5b736f2e74249c8df3fb64c6f073f2ee879ec0a092cd4d,PodSandboxId:16ced6d0bf3fae87dd053c6bebd812e78abd7ffa36b3c8c8cc0f66530dc012a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724094040335273595,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-127646,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90c
2580ea15b62d86e2d9439297c1a84,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c339ccac4b661eee54a5506c371216bf51a9cc205cbc076b3e3229c98effe89,PodSandboxId:1bfee774ab0295bfdd64d0b51e720a8e074aa6822d8c612d0a522bda906364d6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724094040329354675,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-127646,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 9246a9af7a5520a2c37306cf144346ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caa887d13670c85c8596bd4736afa530bfd96d5e8f3d02aa1fb666ba088163ca,PodSandboxId:56427d814bc69faeabbff652db5b1d38bef5f12eac94e9da763fe97128234e21,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724094029813340408,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 307b9991-fa9b-43fc-bdbc-1e81d056af23,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:436733a3f58aea9eb1e25d22069d292b3bc4928e150db54e7b6409a1b5b5a3a8,PodSandboxId:103948327d7d90edd731844cec229304fd6c152baa65d8fc6f427503cc534c30,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724094028808700430,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w249t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 020032dd-a67f-49fc-a785-5bb2067
bcd3d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ca83df3ebe3f995fb0cd56e8e8d49bd28e9de95ecd30bf1cdf09e6e0aa38159,PodSandboxId:d06e507c5f517bc563bb35c70a238d706e7c851c82248772fb1526ab6d2728d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724094018569268118,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jm7mn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 744db8f2-6033-403f-88c8-ba90643fe7f0,},Annotations:map[string]s
tring{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1769a85d48846f40b1e6eae5c255e7d7072e7323a484aac6b3d650afddf16fe5,PodSandboxId:c74d17c63858c2ed4cf1a45ae89d930c5728d7bca28a6b582e8129ee926ca1b6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724094018241522231,Labels:map[string]string{io.kubernetes.contai
ner.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-55gvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85e38ada-38a7-483c-9e84-8459f659ec4e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b72e004b8b08a3f79ef0a1e74949a313f877ef366ff327140f8cf509c86d534,PodSandboxId:b080111636e4d5faa822efbc9cd9e73222cdabcc5742c8e57449971757b5ecaf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724094017185653564,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-127646,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e79a9a77108a4471ffe36a9451c04152,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee0ec838ce8cfbbe48310687e3edd7d47de6f462020d5cfc488ca94cdd254a0e,PodSandboxId:da4282d4b505c2167c0403f48b87f7f9af111e29c87644034b4c0611fb7f4562,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRe
f:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724094016990857963,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-127646,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 176d3321696d0866c0af3e0be85be813,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:addc11f75f4b1c2af731ecf589cd4818327befcbaefe1a3056065c68653b65d4,PodSandboxId:16ced6d0bf3fae87dd053c6bebd812e78abd7ffa36b3c8c8cc0f66530dc012a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f
5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724094016540504226,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-127646,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90c2580ea15b62d86e2d9439297c1a84,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:903ee1f45ddab3a04ab0218d58cbbb177b0ee6d4e37bdc0c47a28491d10333ca,PodSandboxId:1bfee774ab0295bfdd64d0b51e720a8e074aa6822d8c612d0a522bda906364d6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0
45733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724094016457253437,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-127646,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9246a9af7a5520a2c37306cf144346ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d44b07994d9b7e3e8be8f2262a08758339e98082d8d99aef19f595c346ba962c,PodSandboxId:031ae5e9a22816790f4151d160b66dee354081edc2538193ab8e86b3a5242ec3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724093988251943014,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jm7mn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 744db8f2-6033-403f-88c8-ba90643fe7f0,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38081dca9bb995a3c844aeedcdb4f097ab4275df3926d5eddd4f21c024fae0ca,PodSandboxId:30ef8598ea1d72c7098ddd108455e3469fa95093ed48439ff990da30c7738fe8,Metadata:&ContainerMet
adata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724093987843650590,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w249t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 020032dd-a67f-49fc-a785-5bb2067bcd3d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ed4f7bb1-ad27-4596-b816-cc26abb7d948 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:00:52 kubernetes-upgrade-127646 crio[2555]: time="2024-08-19 19:00:52.436999131Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=711f4efa-1aac-43ba-a37a-beaa8589e469 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:00:52 kubernetes-upgrade-127646 crio[2555]: time="2024-08-19 19:00:52.437102716Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=711f4efa-1aac-43ba-a37a-beaa8589e469 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:00:52 kubernetes-upgrade-127646 crio[2555]: time="2024-08-19 19:00:52.439221038Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=928a35bc-8915-42f9-9a05-c7f0677d503d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:00:52 kubernetes-upgrade-127646 crio[2555]: time="2024-08-19 19:00:52.440147031Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094052440112970,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=928a35bc-8915-42f9-9a05-c7f0677d503d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:00:52 kubernetes-upgrade-127646 crio[2555]: time="2024-08-19 19:00:52.440849313Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1d1eba51-3adf-445d-95e9-9369cf101761 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:00:52 kubernetes-upgrade-127646 crio[2555]: time="2024-08-19 19:00:52.440980185Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1d1eba51-3adf-445d-95e9-9369cf101761 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:00:52 kubernetes-upgrade-127646 crio[2555]: time="2024-08-19 19:00:52.441533080Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:73f1ec173680d1504165ef0172c4143636ff45367ab03c6bb614d076c479c68d,PodSandboxId:c74d17c63858c2ed4cf1a45ae89d930c5728d7bca28a6b582e8129ee926ca1b6,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724094045183864332,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-55gvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85e38ada-38a7-483c-9e84-8459f659ec4e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:569cd03f9388278ccf33b8c40e91558ee4f63607db9d654904e14c98f4b22a5d,PodSandboxId:56427d814bc69faeabbff652db5b1d38bef5f12eac94e9da763fe97128234e21,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724094045191137716,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 307b9991-fa9b-43fc-bdbc-1e81d056af23,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bc9a999a0c507c8f28692b732bfff7e38562b8add7a688a83a8f8202a4aad5b,PodSandboxId:b080111636e4d5faa822efbc9cd9e73222cdabcc5742c8e57449971757b5ecaf,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724094040318475705,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-127646,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: e79a9a77108a4471ffe36a9451c04152,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0912ff28f8df34a814823a33ce06f605b65a4f018ae07fdaa99a081e61b76093,PodSandboxId:da4282d4b505c2167c0403f48b87f7f9af111e29c87644034b4c0611fb7f4562,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724094040368141520,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-127646,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 176d3321696d0866c0af3e0be85be813,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4e381223e9f4c041c5b736f2e74249c8df3fb64c6f073f2ee879ec0a092cd4d,PodSandboxId:16ced6d0bf3fae87dd053c6bebd812e78abd7ffa36b3c8c8cc0f66530dc012a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724094040335273595,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-127646,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90c
2580ea15b62d86e2d9439297c1a84,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c339ccac4b661eee54a5506c371216bf51a9cc205cbc076b3e3229c98effe89,PodSandboxId:1bfee774ab0295bfdd64d0b51e720a8e074aa6822d8c612d0a522bda906364d6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724094040329354675,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-127646,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 9246a9af7a5520a2c37306cf144346ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caa887d13670c85c8596bd4736afa530bfd96d5e8f3d02aa1fb666ba088163ca,PodSandboxId:56427d814bc69faeabbff652db5b1d38bef5f12eac94e9da763fe97128234e21,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724094029813340408,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 307b9991-fa9b-43fc-bdbc-1e81d056af23,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:436733a3f58aea9eb1e25d22069d292b3bc4928e150db54e7b6409a1b5b5a3a8,PodSandboxId:103948327d7d90edd731844cec229304fd6c152baa65d8fc6f427503cc534c30,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724094028808700430,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w249t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 020032dd-a67f-49fc-a785-5bb2067
bcd3d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ca83df3ebe3f995fb0cd56e8e8d49bd28e9de95ecd30bf1cdf09e6e0aa38159,PodSandboxId:d06e507c5f517bc563bb35c70a238d706e7c851c82248772fb1526ab6d2728d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724094018569268118,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jm7mn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 744db8f2-6033-403f-88c8-ba90643fe7f0,},Annotations:map[string]s
tring{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1769a85d48846f40b1e6eae5c255e7d7072e7323a484aac6b3d650afddf16fe5,PodSandboxId:c74d17c63858c2ed4cf1a45ae89d930c5728d7bca28a6b582e8129ee926ca1b6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724094018241522231,Labels:map[string]string{io.kubernetes.contai
ner.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-55gvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85e38ada-38a7-483c-9e84-8459f659ec4e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b72e004b8b08a3f79ef0a1e74949a313f877ef366ff327140f8cf509c86d534,PodSandboxId:b080111636e4d5faa822efbc9cd9e73222cdabcc5742c8e57449971757b5ecaf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724094017185653564,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-127646,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e79a9a77108a4471ffe36a9451c04152,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee0ec838ce8cfbbe48310687e3edd7d47de6f462020d5cfc488ca94cdd254a0e,PodSandboxId:da4282d4b505c2167c0403f48b87f7f9af111e29c87644034b4c0611fb7f4562,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRe
f:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724094016990857963,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-127646,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 176d3321696d0866c0af3e0be85be813,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:addc11f75f4b1c2af731ecf589cd4818327befcbaefe1a3056065c68653b65d4,PodSandboxId:16ced6d0bf3fae87dd053c6bebd812e78abd7ffa36b3c8c8cc0f66530dc012a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f
5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724094016540504226,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-127646,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90c2580ea15b62d86e2d9439297c1a84,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:903ee1f45ddab3a04ab0218d58cbbb177b0ee6d4e37bdc0c47a28491d10333ca,PodSandboxId:1bfee774ab0295bfdd64d0b51e720a8e074aa6822d8c612d0a522bda906364d6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0
45733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724094016457253437,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-127646,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9246a9af7a5520a2c37306cf144346ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d44b07994d9b7e3e8be8f2262a08758339e98082d8d99aef19f595c346ba962c,PodSandboxId:031ae5e9a22816790f4151d160b66dee354081edc2538193ab8e86b3a5242ec3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724093988251943014,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jm7mn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 744db8f2-6033-403f-88c8-ba90643fe7f0,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38081dca9bb995a3c844aeedcdb4f097ab4275df3926d5eddd4f21c024fae0ca,PodSandboxId:30ef8598ea1d72c7098ddd108455e3469fa95093ed48439ff990da30c7738fe8,Metadata:&ContainerMet
adata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724093987843650590,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w249t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 020032dd-a67f-49fc-a785-5bb2067bcd3d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1d1eba51-3adf-445d-95e9-9369cf101761 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:00:52 kubernetes-upgrade-127646 crio[2555]: time="2024-08-19 19:00:52.494194626Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d8cf6ebd-6152-4f36-b78a-cdc9476b66d0 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:00:52 kubernetes-upgrade-127646 crio[2555]: time="2024-08-19 19:00:52.494319258Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d8cf6ebd-6152-4f36-b78a-cdc9476b66d0 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:00:52 kubernetes-upgrade-127646 crio[2555]: time="2024-08-19 19:00:52.495842740Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1ec7608b-fd01-4406-be3e-d82f14b4c468 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:00:52 kubernetes-upgrade-127646 crio[2555]: time="2024-08-19 19:00:52.496371091Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094052496342892,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1ec7608b-fd01-4406-be3e-d82f14b4c468 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:00:52 kubernetes-upgrade-127646 crio[2555]: time="2024-08-19 19:00:52.497042039Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=95007057-37bf-4275-812a-bcdd9cbf2a56 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:00:52 kubernetes-upgrade-127646 crio[2555]: time="2024-08-19 19:00:52.497135772Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=95007057-37bf-4275-812a-bcdd9cbf2a56 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:00:52 kubernetes-upgrade-127646 crio[2555]: time="2024-08-19 19:00:52.497559689Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:73f1ec173680d1504165ef0172c4143636ff45367ab03c6bb614d076c479c68d,PodSandboxId:c74d17c63858c2ed4cf1a45ae89d930c5728d7bca28a6b582e8129ee926ca1b6,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724094045183864332,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-55gvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85e38ada-38a7-483c-9e84-8459f659ec4e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:569cd03f9388278ccf33b8c40e91558ee4f63607db9d654904e14c98f4b22a5d,PodSandboxId:56427d814bc69faeabbff652db5b1d38bef5f12eac94e9da763fe97128234e21,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724094045191137716,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 307b9991-fa9b-43fc-bdbc-1e81d056af23,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bc9a999a0c507c8f28692b732bfff7e38562b8add7a688a83a8f8202a4aad5b,PodSandboxId:b080111636e4d5faa822efbc9cd9e73222cdabcc5742c8e57449971757b5ecaf,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724094040318475705,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-127646,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: e79a9a77108a4471ffe36a9451c04152,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0912ff28f8df34a814823a33ce06f605b65a4f018ae07fdaa99a081e61b76093,PodSandboxId:da4282d4b505c2167c0403f48b87f7f9af111e29c87644034b4c0611fb7f4562,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724094040368141520,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-127646,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 176d3321696d0866c0af3e0be85be813,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4e381223e9f4c041c5b736f2e74249c8df3fb64c6f073f2ee879ec0a092cd4d,PodSandboxId:16ced6d0bf3fae87dd053c6bebd812e78abd7ffa36b3c8c8cc0f66530dc012a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724094040335273595,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-127646,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90c
2580ea15b62d86e2d9439297c1a84,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c339ccac4b661eee54a5506c371216bf51a9cc205cbc076b3e3229c98effe89,PodSandboxId:1bfee774ab0295bfdd64d0b51e720a8e074aa6822d8c612d0a522bda906364d6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724094040329354675,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-127646,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 9246a9af7a5520a2c37306cf144346ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caa887d13670c85c8596bd4736afa530bfd96d5e8f3d02aa1fb666ba088163ca,PodSandboxId:56427d814bc69faeabbff652db5b1d38bef5f12eac94e9da763fe97128234e21,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724094029813340408,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 307b9991-fa9b-43fc-bdbc-1e81d056af23,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:436733a3f58aea9eb1e25d22069d292b3bc4928e150db54e7b6409a1b5b5a3a8,PodSandboxId:103948327d7d90edd731844cec229304fd6c152baa65d8fc6f427503cc534c30,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724094028808700430,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w249t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 020032dd-a67f-49fc-a785-5bb2067
bcd3d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ca83df3ebe3f995fb0cd56e8e8d49bd28e9de95ecd30bf1cdf09e6e0aa38159,PodSandboxId:d06e507c5f517bc563bb35c70a238d706e7c851c82248772fb1526ab6d2728d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724094018569268118,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jm7mn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 744db8f2-6033-403f-88c8-ba90643fe7f0,},Annotations:map[string]s
tring{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1769a85d48846f40b1e6eae5c255e7d7072e7323a484aac6b3d650afddf16fe5,PodSandboxId:c74d17c63858c2ed4cf1a45ae89d930c5728d7bca28a6b582e8129ee926ca1b6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724094018241522231,Labels:map[string]string{io.kubernetes.contai
ner.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-55gvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85e38ada-38a7-483c-9e84-8459f659ec4e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b72e004b8b08a3f79ef0a1e74949a313f877ef366ff327140f8cf509c86d534,PodSandboxId:b080111636e4d5faa822efbc9cd9e73222cdabcc5742c8e57449971757b5ecaf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724094017185653564,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-127646,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e79a9a77108a4471ffe36a9451c04152,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee0ec838ce8cfbbe48310687e3edd7d47de6f462020d5cfc488ca94cdd254a0e,PodSandboxId:da4282d4b505c2167c0403f48b87f7f9af111e29c87644034b4c0611fb7f4562,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRe
f:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724094016990857963,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-127646,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 176d3321696d0866c0af3e0be85be813,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:addc11f75f4b1c2af731ecf589cd4818327befcbaefe1a3056065c68653b65d4,PodSandboxId:16ced6d0bf3fae87dd053c6bebd812e78abd7ffa36b3c8c8cc0f66530dc012a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f
5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724094016540504226,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-127646,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90c2580ea15b62d86e2d9439297c1a84,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:903ee1f45ddab3a04ab0218d58cbbb177b0ee6d4e37bdc0c47a28491d10333ca,PodSandboxId:1bfee774ab0295bfdd64d0b51e720a8e074aa6822d8c612d0a522bda906364d6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0
45733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724094016457253437,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-127646,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9246a9af7a5520a2c37306cf144346ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d44b07994d9b7e3e8be8f2262a08758339e98082d8d99aef19f595c346ba962c,PodSandboxId:031ae5e9a22816790f4151d160b66dee354081edc2538193ab8e86b3a5242ec3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724093988251943014,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jm7mn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 744db8f2-6033-403f-88c8-ba90643fe7f0,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38081dca9bb995a3c844aeedcdb4f097ab4275df3926d5eddd4f21c024fae0ca,PodSandboxId:30ef8598ea1d72c7098ddd108455e3469fa95093ed48439ff990da30c7738fe8,Metadata:&ContainerMet
adata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724093987843650590,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w249t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 020032dd-a67f-49fc-a785-5bb2067bcd3d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=95007057-37bf-4275-812a-bcdd9cbf2a56 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	569cd03f93882       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   7 seconds ago        Running             storage-provisioner       2                   56427d814bc69       storage-provisioner
	73f1ec173680d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   7 seconds ago        Running             coredns                   2                   c74d17c63858c       coredns-6f6b679f8f-55gvh
	0912ff28f8df3       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   12 seconds ago       Running             kube-scheduler            2                   da4282d4b505c       kube-scheduler-kubernetes-upgrade-127646
	a4e381223e9f4       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   12 seconds ago       Running             kube-apiserver            2                   16ced6d0bf3fa       kube-apiserver-kubernetes-upgrade-127646
	7c339ccac4b66       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   12 seconds ago       Running             kube-controller-manager   2                   1bfee774ab029       kube-controller-manager-kubernetes-upgrade-127646
	0bc9a999a0c50       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   12 seconds ago       Running             etcd                      2                   b080111636e4d       etcd-kubernetes-upgrade-127646
	caa887d13670c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   22 seconds ago       Exited              storage-provisioner       1                   56427d814bc69       storage-provisioner
	436733a3f58ae       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   23 seconds ago       Running             kube-proxy                1                   103948327d7d9       kube-proxy-w249t
	6ca83df3ebe3f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   34 seconds ago       Running             coredns                   1                   d06e507c5f517       coredns-6f6b679f8f-jm7mn
	1769a85d48846       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   34 seconds ago       Exited              coredns                   1                   c74d17c63858c       coredns-6f6b679f8f-55gvh
	6b72e004b8b08       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   35 seconds ago       Exited              etcd                      1                   b080111636e4d       etcd-kubernetes-upgrade-127646
	ee0ec838ce8cf       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   35 seconds ago       Exited              kube-scheduler            1                   da4282d4b505c       kube-scheduler-kubernetes-upgrade-127646
	addc11f75f4b1       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   36 seconds ago       Exited              kube-apiserver            1                   16ced6d0bf3fa       kube-apiserver-kubernetes-upgrade-127646
	903ee1f45ddab       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   36 seconds ago       Exited              kube-controller-manager   1                   1bfee774ab029       kube-controller-manager-kubernetes-upgrade-127646
	d44b07994d9b7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   031ae5e9a2281       coredns-6f6b679f8f-jm7mn
	38081dca9bb99       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   About a minute ago   Exited              kube-proxy                0                   30ef8598ea1d7       kube-proxy-w249t
	
	
	==> coredns [1769a85d48846f40b1e6eae5c255e7d7072e7323a484aac6b3d650afddf16fe5] <==
	
	
	==> coredns [6ca83df3ebe3f995fb0cd56e8e8d49bd28e9de95ecd30bf1cdf09e6e0aa38159] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1694315077]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 19:00:18.954) (total time: 10002ms):
	Trace[1694315077]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (19:00:28.956)
	Trace[1694315077]: [10.00292021s] [10.00292021s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1878112531]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 19:00:18.954) (total time: 10010ms):
	Trace[1878112531]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10010ms (19:00:28.965)
	Trace[1878112531]: [10.010867194s] [10.010867194s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:37392->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:37392->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:37398->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:37398->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:37388->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:37388->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [73f1ec173680d1504165ef0172c4143636ff45367ab03c6bb614d076c479c68d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [d44b07994d9b7e3e8be8f2262a08758339e98082d8d99aef19f595c346ba962c] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/kubernetes: Trace[1538803639]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 18:59:48.709) (total time: 17937ms):
	Trace[1538803639]: [17.937484918s] [17.937484918s] END
	[INFO] plugin/kubernetes: Trace[1213248560]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 18:59:48.707) (total time: 17940ms):
	Trace[1213248560]: [17.940270061s] [17.940270061s] END
	[INFO] plugin/kubernetes: Trace[1972128714]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 18:59:48.707) (total time: 17940ms):
	Trace[1972128714]: [17.940177643s] [17.940177643s] END
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-127646
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-127646
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 18:59:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-127646
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 19:00:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 19:00:44 +0000   Mon, 19 Aug 2024 18:59:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 19:00:44 +0000   Mon, 19 Aug 2024 18:59:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 19:00:44 +0000   Mon, 19 Aug 2024 18:59:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 19:00:44 +0000   Mon, 19 Aug 2024 18:59:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.104
	  Hostname:    kubernetes-upgrade-127646
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1cc06e7c36954aaf9c7eeb2567ca34de
	  System UUID:                1cc06e7c-3695-4aaf-9c7e-eb2567ca34de
	  Boot ID:                    805e0943-3da6-4622-bce3-5bc21450b79d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-55gvh                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     65s
	  kube-system                 coredns-6f6b679f8f-jm7mn                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     65s
	  kube-system                 etcd-kubernetes-upgrade-127646                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         66s
	  kube-system                 kube-apiserver-kubernetes-upgrade-127646             250m (12%)    0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-127646    200m (10%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-proxy-w249t                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kube-scheduler-kubernetes-upgrade-127646             100m (5%)     0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 64s                kube-proxy       
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  NodeHasNoDiskPressure    77s (x8 over 77s)  kubelet          Node kubernetes-upgrade-127646 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     77s (x7 over 77s)  kubelet          Node kubernetes-upgrade-127646 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  77s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  77s (x8 over 77s)  kubelet          Node kubernetes-upgrade-127646 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           66s                node-controller  Node kubernetes-upgrade-127646 event: Registered Node kubernetes-upgrade-127646 in Controller
	  Normal  Starting                 13s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12s (x8 over 13s)  kubelet          Node kubernetes-upgrade-127646 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12s (x8 over 13s)  kubelet          Node kubernetes-upgrade-127646 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12s (x7 over 13s)  kubelet          Node kubernetes-upgrade-127646 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2s                 node-controller  Node kubernetes-upgrade-127646 event: Registered Node kubernetes-upgrade-127646 in Controller
	
	
	==> dmesg <==
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.174399] systemd-fstab-generator[568]: Ignoring "noauto" option for root device
	[  +0.060831] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066993] systemd-fstab-generator[580]: Ignoring "noauto" option for root device
	[  +0.248526] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.157634] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.301607] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +4.563949] systemd-fstab-generator[733]: Ignoring "noauto" option for root device
	[  +0.079546] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.887989] systemd-fstab-generator[855]: Ignoring "noauto" option for root device
	[ +10.942654] kauditd_printk_skb: 97 callbacks suppressed
	[  +0.802365] systemd-fstab-generator[1250]: Ignoring "noauto" option for root device
	[Aug19 19:00] systemd-fstab-generator[2203]: Ignoring "noauto" option for root device
	[  +0.102273] kauditd_printk_skb: 101 callbacks suppressed
	[  +0.072620] systemd-fstab-generator[2215]: Ignoring "noauto" option for root device
	[  +0.238307] systemd-fstab-generator[2229]: Ignoring "noauto" option for root device
	[  +0.232826] systemd-fstab-generator[2253]: Ignoring "noauto" option for root device
	[  +0.432635] systemd-fstab-generator[2369]: Ignoring "noauto" option for root device
	[  +1.852060] systemd-fstab-generator[2698]: Ignoring "noauto" option for root device
	[  +2.314214] kauditd_printk_skb: 217 callbacks suppressed
	[ +17.012995] kauditd_printk_skb: 11 callbacks suppressed
	[  +3.886074] systemd-fstab-generator[3631]: Ignoring "noauto" option for root device
	[  +5.726073] kauditd_printk_skb: 43 callbacks suppressed
	[  +4.757762] systemd-fstab-generator[4058]: Ignoring "noauto" option for root device
	[  +0.321758] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [0bc9a999a0c507c8f28692b732bfff7e38562b8add7a688a83a8f8202a4aad5b] <==
	{"level":"info","ts":"2024-08-19T19:00:48.915021Z","caller":"traceutil/trace.go:171","msg":"trace[1969037001] transaction","detail":"{read_only:false; number_of_response:0; response_revision:418; }","duration":"552.900176ms","start":"2024-08-19T19:00:48.362108Z","end":"2024-08-19T19:00:48.915008Z","steps":["trace[1969037001] 'process raft request'  (duration: 172.526288ms)","trace[1969037001] 'compare'  (duration: 379.034603ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T19:00:48.915111Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T19:00:48.362085Z","time spent":"552.987076ms","remote":"127.0.0.1:59584","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":27,"request content":"compare:<target:MOD key:\"/registry/clusterrolebindings/system:coredns\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterrolebindings/system:coredns\" value_size:348 >> failure:<>"}
	{"level":"info","ts":"2024-08-19T19:00:49.164300Z","caller":"traceutil/trace.go:171","msg":"trace[2143751925] linearizableReadLoop","detail":"{readStateIndex:441; appliedIndex:440; }","duration":"243.415639ms","start":"2024-08-19T19:00:48.920867Z","end":"2024-08-19T19:00:49.164283Z","steps":["trace[2143751925] 'read index received'  (duration: 243.233217ms)","trace[2143751925] 'applied index is now lower than readState.Index'  (duration: 181.849µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T19:00:49.164456Z","caller":"traceutil/trace.go:171","msg":"trace[1435837521] transaction","detail":"{read_only:false; response_revision:419; number_of_response:1; }","duration":"244.181292ms","start":"2024-08-19T19:00:48.920259Z","end":"2024-08-19T19:00:49.164440Z","steps":["trace[1435837521] 'process raft request'  (duration: 243.925164ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T19:00:49.164640Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"243.695992ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:coredns\" ","response":"range_response_count:1 size:415"}
	{"level":"info","ts":"2024-08-19T19:00:49.165201Z","caller":"traceutil/trace.go:171","msg":"trace[1791770606] range","detail":"{range_begin:/registry/clusterrolebindings/system:coredns; range_end:; response_count:1; response_revision:419; }","duration":"244.328384ms","start":"2024-08-19T19:00:48.920861Z","end":"2024-08-19T19:00:49.165190Z","steps":["trace[1791770606] 'agreement among raft nodes before linearized reading'  (duration: 243.618763ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T19:00:49.164937Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"151.780537ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/kubernetes-upgrade-127646\" ","response":"range_response_count:1 size:4568"}
	{"level":"info","ts":"2024-08-19T19:00:49.165428Z","caller":"traceutil/trace.go:171","msg":"trace[1928978812] range","detail":"{range_begin:/registry/minions/kubernetes-upgrade-127646; range_end:; response_count:1; response_revision:419; }","duration":"152.276602ms","start":"2024-08-19T19:00:49.013143Z","end":"2024-08-19T19:00:49.165420Z","steps":["trace[1928978812] 'agreement among raft nodes before linearized reading'  (duration: 151.740271ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T19:00:49.165170Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"241.981675ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/root-ca-cert-publisher\" ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2024-08-19T19:00:49.166735Z","caller":"traceutil/trace.go:171","msg":"trace[1590232168] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/root-ca-cert-publisher; range_end:; response_count:1; response_revision:419; }","duration":"243.624451ms","start":"2024-08-19T19:00:48.923096Z","end":"2024-08-19T19:00:49.166721Z","steps":["trace[1590232168] 'agreement among raft nodes before linearized reading'  (duration: 241.678571ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T19:00:49.532386Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"297.880262ms","expected-duration":"100ms","prefix":"","request":"header:<ID:641922839998009949 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/coredns\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/coredns\" value_size:112 >> failure:<>>","response":"size:5"}
	{"level":"info","ts":"2024-08-19T19:00:49.532660Z","caller":"traceutil/trace.go:171","msg":"trace[963460046] linearizableReadLoop","detail":"{readStateIndex:443; appliedIndex:441; }","duration":"354.920428ms","start":"2024-08-19T19:00:49.177723Z","end":"2024-08-19T19:00:49.532643Z","steps":["trace[963460046] 'read index received'  (duration: 56.680342ms)","trace[963460046] 'applied index is now lower than readState.Index'  (duration: 298.239267ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T19:00:49.532666Z","caller":"traceutil/trace.go:171","msg":"trace[718790459] transaction","detail":"{read_only:false; number_of_response:0; response_revision:419; }","duration":"358.496192ms","start":"2024-08-19T19:00:49.174159Z","end":"2024-08-19T19:00:49.532655Z","steps":["trace[718790459] 'process raft request'  (duration: 60.306134ms)","trace[718790459] 'compare'  (duration: 297.834839ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T19:00:49.532698Z","caller":"traceutil/trace.go:171","msg":"trace[836589615] transaction","detail":"{read_only:false; response_revision:420; number_of_response:1; }","duration":"358.280689ms","start":"2024-08-19T19:00:49.174409Z","end":"2024-08-19T19:00:49.532689Z","steps":["trace[836589615] 'process raft request'  (duration: 358.083699ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T19:00:49.532843Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"355.113052ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/node-controller\" ","response":"range_response_count:1 size:195"}
	{"level":"info","ts":"2024-08-19T19:00:49.532899Z","caller":"traceutil/trace.go:171","msg":"trace[330109691] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/node-controller; range_end:; response_count:1; response_revision:420; }","duration":"355.168669ms","start":"2024-08-19T19:00:49.177719Z","end":"2024-08-19T19:00:49.532888Z","steps":["trace[330109691] 'agreement among raft nodes before linearized reading'  (duration: 355.061188ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T19:00:49.532927Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T19:00:49.177689Z","time spent":"355.228865ms","remote":"127.0.0.1:59440","response type":"/etcdserverpb.KV/Range","request count":0,"request size":55,"response count":1,"response size":217,"request content":"key:\"/registry/serviceaccounts/kube-system/node-controller\" "}
	{"level":"warn","ts":"2024-08-19T19:00:49.532847Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T19:00:49.174398Z","time spent":"358.41008ms","remote":"127.0.0.1:59308","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":761,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/events/default/kubernetes-upgrade-127646.17ed3665096cb479\" mod_revision:417 > success:<request_put:<key:\"/registry/events/default/kubernetes-upgrade-127646.17ed3665096cb479\" value_size:676 lease:641922839998009901 >> failure:<request_range:<key:\"/registry/events/default/kubernetes-upgrade-127646.17ed3665096cb479\" > >"}
	{"level":"warn","ts":"2024-08-19T19:00:49.532762Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T19:00:49.174144Z","time spent":"358.582589ms","remote":"127.0.0.1:59440","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":27,"request content":"compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/coredns\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/coredns\" value_size:112 >> failure:<>"}
	{"level":"info","ts":"2024-08-19T19:00:49.676473Z","caller":"traceutil/trace.go:171","msg":"trace[1470836217] linearizableReadLoop","detail":"{readStateIndex:446; appliedIndex:445; }","duration":"136.965814ms","start":"2024-08-19T19:00:49.539487Z","end":"2024-08-19T19:00:49.676453Z","steps":["trace[1470836217] 'read index received'  (duration: 29.933334ms)","trace[1470836217] 'applied index is now lower than readState.Index'  (duration: 107.031446ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T19:00:49.676630Z","caller":"traceutil/trace.go:171","msg":"trace[1386971187] transaction","detail":"{read_only:false; response_revision:422; number_of_response:1; }","duration":"137.113707ms","start":"2024-08-19T19:00:49.539432Z","end":"2024-08-19T19:00:49.676546Z","steps":["trace[1386971187] 'process raft request'  (duration: 107.431047ms)","trace[1386971187] 'compare'  (duration: 29.444547ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T19:00:49.676836Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.328756ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/node-controller\" ","response":"range_response_count:1 size:195"}
	{"level":"info","ts":"2024-08-19T19:00:49.677698Z","caller":"traceutil/trace.go:171","msg":"trace[743657985] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/node-controller; range_end:; response_count:1; response_revision:422; }","duration":"138.197145ms","start":"2024-08-19T19:00:49.539485Z","end":"2024-08-19T19:00:49.677682Z","steps":["trace[743657985] 'agreement among raft nodes before linearized reading'  (duration: 137.260265ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T19:00:49.677136Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.86471ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/coredns\" ","response":"range_response_count:1 size:179"}
	{"level":"info","ts":"2024-08-19T19:00:49.678406Z","caller":"traceutil/trace.go:171","msg":"trace[1885351026] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/coredns; range_end:; response_count:1; response_revision:422; }","duration":"138.138131ms","start":"2024-08-19T19:00:49.540256Z","end":"2024-08-19T19:00:49.678394Z","steps":["trace[1885351026] 'agreement among raft nodes before linearized reading'  (duration: 136.802062ms)"],"step_count":1}
	
	
	==> etcd [6b72e004b8b08a3f79ef0a1e74949a313f877ef366ff327140f8cf509c86d534] <==
	{"level":"info","ts":"2024-08-19T19:00:18.302951Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-08-19T19:00:18.342491Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"31eeb21fffaecb88","local-member-id":"50db06592c6308e8","commit-index":408}
	{"level":"info","ts":"2024-08-19T19:00:18.343039Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"50db06592c6308e8 switched to configuration voters=()"}
	{"level":"info","ts":"2024-08-19T19:00:18.343123Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"50db06592c6308e8 became follower at term 2"}
	{"level":"info","ts":"2024-08-19T19:00:18.343142Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 50db06592c6308e8 [peers: [], term: 2, commit: 408, applied: 0, lastindex: 408, lastterm: 2]"}
	{"level":"warn","ts":"2024-08-19T19:00:18.348784Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-08-19T19:00:18.367149Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":395}
	{"level":"info","ts":"2024-08-19T19:00:18.372664Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-08-19T19:00:18.382314Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"50db06592c6308e8","timeout":"7s"}
	{"level":"info","ts":"2024-08-19T19:00:18.383009Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"50db06592c6308e8"}
	{"level":"info","ts":"2024-08-19T19:00:18.383093Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"50db06592c6308e8","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-08-19T19:00:18.383461Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-08-19T19:00:18.383747Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-19T19:00:18.383809Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-19T19:00:18.383820Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-19T19:00:18.384273Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"50db06592c6308e8 switched to configuration voters=(5826257523000412392)"}
	{"level":"info","ts":"2024-08-19T19:00:18.384357Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"31eeb21fffaecb88","local-member-id":"50db06592c6308e8","added-peer-id":"50db06592c6308e8","added-peer-peer-urls":["https://192.168.72.104:2380"]}
	{"level":"info","ts":"2024-08-19T19:00:18.384493Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"31eeb21fffaecb88","local-member-id":"50db06592c6308e8","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T19:00:18.384533Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T19:00:18.395479Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T19:00:18.401975Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.72.104:2380"}
	{"level":"info","ts":"2024-08-19T19:00:18.402098Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.72.104:2380"}
	{"level":"info","ts":"2024-08-19T19:00:18.401787Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-19T19:00:18.408733Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-19T19:00:18.408671Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"50db06592c6308e8","initial-advertise-peer-urls":["https://192.168.72.104:2380"],"listen-peer-urls":["https://192.168.72.104:2380"],"advertise-client-urls":["https://192.168.72.104:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.104:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	
	
	==> kernel <==
	 19:00:53 up 1 min,  0 users,  load average: 1.18, 0.42, 0.15
	Linux kubernetes-upgrade-127646 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a4e381223e9f4c041c5b736f2e74249c8df3fb64c6f073f2ee879ec0a092cd4d] <==
	I0819 19:00:44.315198       1 establishing_controller.go:81] Starting EstablishingController
	I0819 19:00:44.315208       1 nonstructuralschema_controller.go:195] Starting NonStructuralSchemaConditionController
	I0819 19:00:44.315216       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0819 19:00:44.315224       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0819 19:00:44.337153       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0819 19:00:44.337387       1 shared_informer.go:320] Caches are synced for configmaps
	I0819 19:00:44.409365       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0819 19:00:44.421644       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 19:00:44.421748       1 policy_source.go:224] refreshing policies
	I0819 19:00:44.427068       1 cache.go:39] Caches are synced for autoregister controller
	I0819 19:00:44.435731       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0819 19:00:44.435770       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0819 19:00:44.436792       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0819 19:00:44.436967       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0819 19:00:44.437292       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0819 19:00:44.445959       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0819 19:00:44.453970       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0819 19:00:45.253085       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0819 19:00:49.171469       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0819 19:00:49.701463       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0819 19:00:49.772283       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0819 19:00:49.871905       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0819 19:00:49.899861       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0819 19:00:50.352733       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0819 19:00:50.402066       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [addc11f75f4b1c2af731ecf589cd4818327befcbaefe1a3056065c68653b65d4] <==
	I0819 19:00:16.983362       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0819 19:00:17.997871       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:00:17.999763       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0819 19:00:18.009266       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0819 19:00:18.038236       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 19:00:18.086382       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0819 19:00:18.086408       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0819 19:00:18.089832       1 instance.go:232] Using reconciler: lease
	W0819 19:00:18.094021       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:00:19.001091       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:00:19.001268       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:00:19.094638       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:00:20.710162       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:00:20.837441       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:00:20.987266       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:00:23.108723       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:00:23.178101       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:00:23.547534       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:00:26.841358       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:00:26.977296       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:00:28.033775       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:00:32.275892       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:00:32.881306       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:00:33.847720       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0819 19:00:38.091053       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [7c339ccac4b661eee54a5506c371216bf51a9cc205cbc076b3e3229c98effe89] <==
	I0819 19:00:50.325681       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0819 19:00:50.325726       1 shared_informer.go:320] Caches are synced for TTL
	I0819 19:00:50.339724       1 shared_informer.go:320] Caches are synced for daemon sets
	I0819 19:00:50.343892       1 shared_informer.go:320] Caches are synced for ephemeral
	I0819 19:00:50.343908       1 shared_informer.go:320] Caches are synced for GC
	I0819 19:00:50.345055       1 shared_informer.go:320] Caches are synced for deployment
	I0819 19:00:50.350055       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0819 19:00:50.352840       1 shared_informer.go:320] Caches are synced for job
	I0819 19:00:50.361485       1 shared_informer.go:320] Caches are synced for HPA
	I0819 19:00:50.372463       1 shared_informer.go:320] Caches are synced for taint
	I0819 19:00:50.372629       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0819 19:00:50.372966       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-127646"
	I0819 19:00:50.373062       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0819 19:00:50.373420       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0819 19:00:50.377655       1 shared_informer.go:320] Caches are synced for attach detach
	I0819 19:00:50.383452       1 shared_informer.go:320] Caches are synced for stateful set
	I0819 19:00:50.388886       1 shared_informer.go:320] Caches are synced for PVC protection
	I0819 19:00:50.432659       1 shared_informer.go:320] Caches are synced for disruption
	I0819 19:00:50.449435       1 shared_informer.go:320] Caches are synced for resource quota
	I0819 19:00:50.473103       1 shared_informer.go:320] Caches are synced for resource quota
	I0819 19:00:50.610859       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="260.681457ms"
	I0819 19:00:50.617429       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="56.926µs"
	I0819 19:00:50.912990       1 shared_informer.go:320] Caches are synced for garbage collector
	I0819 19:00:50.931492       1 shared_informer.go:320] Caches are synced for garbage collector
	I0819 19:00:50.931527       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [903ee1f45ddab3a04ab0218d58cbbb177b0ee6d4e37bdc0c47a28491d10333ca] <==
	I0819 19:00:18.427662       1 serving.go:386] Generated self-signed cert in-memory
	I0819 19:00:19.094149       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0819 19:00:19.094777       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 19:00:19.097542       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0819 19:00:19.097730       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0819 19:00:19.098369       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0819 19:00:19.098495       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-proxy [38081dca9bb995a3c844aeedcdb4f097ab4275df3926d5eddd4f21c024fae0ca] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 18:59:48.314051       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 18:59:48.382950       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.104"]
	E0819 18:59:48.383028       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 18:59:48.604312       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 18:59:48.604375       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 18:59:48.604415       1 server_linux.go:169] "Using iptables Proxier"
	I0819 18:59:48.612491       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 18:59:48.612772       1 server.go:483] "Version info" version="v1.31.0"
	I0819 18:59:48.612799       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 18:59:48.614220       1 config.go:197] "Starting service config controller"
	I0819 18:59:48.614304       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 18:59:48.623934       1 config.go:104] "Starting endpoint slice config controller"
	I0819 18:59:48.623979       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 18:59:48.626166       1 config.go:326] "Starting node config controller"
	I0819 18:59:48.628085       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 18:59:48.714818       1 shared_informer.go:320] Caches are synced for service config
	I0819 18:59:48.724100       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 18:59:48.728263       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [436733a3f58aea9eb1e25d22069d292b3bc4928e150db54e7b6409a1b5b5a3a8] <==
	E0819 19:00:29.016960       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 19:00:39.099976       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-127646\": dial tcp 192.168.72.104:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.72.104:60304->192.168.72.104:8443: read: connection reset by peer"
	E0819 19:00:40.166822       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-127646\": dial tcp 192.168.72.104:8443: connect: connection refused"
	E0819 19:00:44.347356       1 server.go:666] "Failed to retrieve node info" err="nodes \"kubernetes-upgrade-127646\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot get resource \"nodes\" in API group \"\" at the cluster scope"
	I0819 19:00:49.170066       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.104"]
	E0819 19:00:49.171686       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 19:00:49.233726       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 19:00:49.233849       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 19:00:49.233917       1 server_linux.go:169] "Using iptables Proxier"
	I0819 19:00:49.240360       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 19:00:49.241079       1 server.go:483] "Version info" version="v1.31.0"
	I0819 19:00:49.241202       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 19:00:49.243025       1 config.go:197] "Starting service config controller"
	I0819 19:00:49.243105       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 19:00:49.243331       1 config.go:104] "Starting endpoint slice config controller"
	I0819 19:00:49.243366       1 config.go:326] "Starting node config controller"
	I0819 19:00:49.243367       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 19:00:49.243393       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 19:00:49.343939       1 shared_informer.go:320] Caches are synced for service config
	I0819 19:00:49.343982       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 19:00:49.344524       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0912ff28f8df34a814823a33ce06f605b65a4f018ae07fdaa99a081e61b76093] <==
	I0819 19:00:41.699897       1 serving.go:386] Generated self-signed cert in-memory
	W0819 19:00:44.351651       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0819 19:00:44.351817       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0819 19:00:44.351901       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0819 19:00:44.352233       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0819 19:00:44.376452       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0819 19:00:44.376615       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 19:00:44.385256       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0819 19:00:44.386315       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0819 19:00:44.386434       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 19:00:44.386625       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0819 19:00:44.503882       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [ee0ec838ce8cfbbe48310687e3edd7d47de6f462020d5cfc488ca94cdd254a0e] <==
	I0819 19:00:19.367292       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kubelet <==
	Aug 19 19:00:40 kubernetes-upgrade-127646 kubelet[3638]: I0819 19:00:40.047783    3638 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/90c2580ea15b62d86e2d9439297c1a84-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-127646\" (UID: \"90c2580ea15b62d86e2d9439297c1a84\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-127646"
	Aug 19 19:00:40 kubernetes-upgrade-127646 kubelet[3638]: I0819 19:00:40.047814    3638 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9246a9af7a5520a2c37306cf144346ad-ca-certs\") pod \"kube-controller-manager-kubernetes-upgrade-127646\" (UID: \"9246a9af7a5520a2c37306cf144346ad\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-127646"
	Aug 19 19:00:40 kubernetes-upgrade-127646 kubelet[3638]: I0819 19:00:40.249774    3638 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-127646"
	Aug 19 19:00:40 kubernetes-upgrade-127646 kubelet[3638]: E0819 19:00:40.251127    3638 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.72.104:8443: connect: connection refused" node="kubernetes-upgrade-127646"
	Aug 19 19:00:40 kubernetes-upgrade-127646 kubelet[3638]: I0819 19:00:40.297967    3638 scope.go:117] "RemoveContainer" containerID="6b72e004b8b08a3f79ef0a1e74949a313f877ef366ff327140f8cf509c86d534"
	Aug 19 19:00:40 kubernetes-upgrade-127646 kubelet[3638]: I0819 19:00:40.299459    3638 scope.go:117] "RemoveContainer" containerID="addc11f75f4b1c2af731ecf589cd4818327befcbaefe1a3056065c68653b65d4"
	Aug 19 19:00:40 kubernetes-upgrade-127646 kubelet[3638]: I0819 19:00:40.301106    3638 scope.go:117] "RemoveContainer" containerID="903ee1f45ddab3a04ab0218d58cbbb177b0ee6d4e37bdc0c47a28491d10333ca"
	Aug 19 19:00:40 kubernetes-upgrade-127646 kubelet[3638]: I0819 19:00:40.303008    3638 scope.go:117] "RemoveContainer" containerID="ee0ec838ce8cfbbe48310687e3edd7d47de6f462020d5cfc488ca94cdd254a0e"
	Aug 19 19:00:40 kubernetes-upgrade-127646 kubelet[3638]: E0819 19:00:40.448930    3638 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-127646?timeout=10s\": dial tcp 192.168.72.104:8443: connect: connection refused" interval="800ms"
	Aug 19 19:00:40 kubernetes-upgrade-127646 kubelet[3638]: I0819 19:00:40.653489    3638 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-127646"
	Aug 19 19:00:40 kubernetes-upgrade-127646 kubelet[3638]: E0819 19:00:40.654437    3638 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.72.104:8443: connect: connection refused" node="kubernetes-upgrade-127646"
	Aug 19 19:00:41 kubernetes-upgrade-127646 kubelet[3638]: I0819 19:00:41.457623    3638 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-127646"
	Aug 19 19:00:44 kubernetes-upgrade-127646 kubelet[3638]: I0819 19:00:44.538037    3638 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-127646"
	Aug 19 19:00:44 kubernetes-upgrade-127646 kubelet[3638]: I0819 19:00:44.538550    3638 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-127646"
	Aug 19 19:00:44 kubernetes-upgrade-127646 kubelet[3638]: I0819 19:00:44.538742    3638 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 19 19:00:44 kubernetes-upgrade-127646 kubelet[3638]: I0819 19:00:44.540339    3638 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 19 19:00:44 kubernetes-upgrade-127646 kubelet[3638]: I0819 19:00:44.822622    3638 apiserver.go:52] "Watching apiserver"
	Aug 19 19:00:44 kubernetes-upgrade-127646 kubelet[3638]: I0819 19:00:44.842782    3638 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Aug 19 19:00:44 kubernetes-upgrade-127646 kubelet[3638]: I0819 19:00:44.855432    3638 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/307b9991-fa9b-43fc-bdbc-1e81d056af23-tmp\") pod \"storage-provisioner\" (UID: \"307b9991-fa9b-43fc-bdbc-1e81d056af23\") " pod="kube-system/storage-provisioner"
	Aug 19 19:00:44 kubernetes-upgrade-127646 kubelet[3638]: I0819 19:00:44.855682    3638 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/020032dd-a67f-49fc-a785-5bb2067bcd3d-xtables-lock\") pod \"kube-proxy-w249t\" (UID: \"020032dd-a67f-49fc-a785-5bb2067bcd3d\") " pod="kube-system/kube-proxy-w249t"
	Aug 19 19:00:44 kubernetes-upgrade-127646 kubelet[3638]: I0819 19:00:44.855793    3638 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/020032dd-a67f-49fc-a785-5bb2067bcd3d-lib-modules\") pod \"kube-proxy-w249t\" (UID: \"020032dd-a67f-49fc-a785-5bb2067bcd3d\") " pod="kube-system/kube-proxy-w249t"
	Aug 19 19:00:45 kubernetes-upgrade-127646 kubelet[3638]: I0819 19:00:45.131264    3638 scope.go:117] "RemoveContainer" containerID="caa887d13670c85c8596bd4736afa530bfd96d5e8f3d02aa1fb666ba088163ca"
	Aug 19 19:00:45 kubernetes-upgrade-127646 kubelet[3638]: I0819 19:00:45.144272    3638 scope.go:117] "RemoveContainer" containerID="1769a85d48846f40b1e6eae5c255e7d7072e7323a484aac6b3d650afddf16fe5"
	Aug 19 19:00:49 kubernetes-upgrade-127646 kubelet[3638]: E0819 19:00:49.955494    3638 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094049955137477,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:00:49 kubernetes-upgrade-127646 kubelet[3638]: E0819 19:00:49.955523    3638 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724094049955137477,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [569cd03f9388278ccf33b8c40e91558ee4f63607db9d654904e14c98f4b22a5d] <==
	I0819 19:00:45.396657       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 19:00:45.444964       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 19:00:45.445039       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [caa887d13670c85c8596bd4736afa530bfd96d5e8f3d02aa1fb666ba088163ca] <==
	I0819 19:00:29.887168       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0819 19:00:39.098197       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-127646 -n kubernetes-upgrade-127646
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-127646 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-127646" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-127646
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-127646: (1.275321899s)
--- FAIL: TestKubernetesUpgrade (436.45s)
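Triage aid: the post-mortem above shows the upgraded control plane (kubelet and kube-proxy v1.31.0 on CRI-O 1.29.1) coming back only after a long run of "connection refused" dials to etcd, with the first apiserver instance giving up on "Error creating leases: error creating storage factory: context deadline exceeded". The commands below are a minimal hand-run sketch of the same upgrade-in-place flow for local reproduction, not the exact steps the Go test drives; the profile name is made up, the v1.20.0 starting version is an assumption (this report only shows the node ending up on v1.31.0), and a working kvm2/libvirt host is assumed.

	# hypothetical local reproduction of the upgrade-in-place flow (not the test's exact steps)
	minikube start  -p k8s-upgrade-repro --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0
	minikube stop   -p k8s-upgrade-repro
	minikube start  -p k8s-upgrade-repro --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.31.0
	minikube logs   -p k8s-upgrade-repro    # compare against the etcd/apiserver logs captured above
	minikube delete -p k8s-upgrade-repro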

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (294.68s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-104669 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-104669 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m54.390002356s)

                                                
                                                
-- stdout --
	* [old-k8s-version-104669] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19468
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19468-372744/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-372744/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-104669" primary control-plane node in "old-k8s-version-104669" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 19:01:57.704552  431169 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:01:57.704664  431169 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:01:57.704674  431169 out.go:358] Setting ErrFile to fd 2...
	I0819 19:01:57.704678  431169 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:01:57.704856  431169 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-372744/.minikube/bin
	I0819 19:01:57.705440  431169 out.go:352] Setting JSON to false
	I0819 19:01:57.706656  431169 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":9861,"bootTime":1724084257,"procs":299,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 19:01:57.706723  431169 start.go:139] virtualization: kvm guest
	I0819 19:01:57.709157  431169 out.go:177] * [old-k8s-version-104669] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 19:01:57.710715  431169 notify.go:220] Checking for updates...
	I0819 19:01:57.710724  431169 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 19:01:57.712255  431169 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 19:01:57.713525  431169 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 19:01:57.714798  431169 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 19:01:57.716273  431169 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 19:01:57.717540  431169 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 19:01:57.719637  431169 config.go:182] Loaded profile config "bridge-571803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:01:57.719802  431169 config.go:182] Loaded profile config "enable-default-cni-571803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:01:57.719923  431169 config.go:182] Loaded profile config "flannel-571803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:01:57.720047  431169 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 19:01:57.763602  431169 out.go:177] * Using the kvm2 driver based on user configuration
	I0819 19:01:57.764941  431169 start.go:297] selected driver: kvm2
	I0819 19:01:57.764966  431169 start.go:901] validating driver "kvm2" against <nil>
	I0819 19:01:57.764983  431169 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 19:01:57.765939  431169 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 19:01:57.766034  431169 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19468-372744/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 19:01:57.783782  431169 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 19:01:57.783869  431169 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 19:01:57.784181  431169 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 19:01:57.784269  431169 cni.go:84] Creating CNI manager for ""
	I0819 19:01:57.784283  431169 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:01:57.784301  431169 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 19:01:57.784383  431169 start.go:340] cluster config:
	{Name:old-k8s-version-104669 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-104669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:01:57.784555  431169 iso.go:125] acquiring lock: {Name:mk4c0ac1c3202b1a296739df622960e7a0bd8566 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 19:01:57.787265  431169 out.go:177] * Starting "old-k8s-version-104669" primary control-plane node in "old-k8s-version-104669" cluster
	I0819 19:01:57.788661  431169 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 19:01:57.788738  431169 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0819 19:01:57.788755  431169 cache.go:56] Caching tarball of preloaded images
	I0819 19:01:57.788868  431169 preload.go:172] Found /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 19:01:57.788884  431169 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0819 19:01:57.789038  431169 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/config.json ...
	I0819 19:01:57.789075  431169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/config.json: {Name:mk4257c3be6ec4e598292e96648655eb43fff941 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:01:57.789287  431169 start.go:360] acquireMachinesLock for old-k8s-version-104669: {Name:mk24ba67a747357e9ce40f1e460d2bb0bc59cc75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 19:02:15.664777  431169 start.go:364] duration metric: took 17.875434385s to acquireMachinesLock for "old-k8s-version-104669"
	I0819 19:02:15.664838  431169 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-104669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-104669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 19:02:15.664988  431169 start.go:125] createHost starting for "" (driver="kvm2")
	I0819 19:02:15.667278  431169 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 19:02:15.667486  431169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:02:15.667538  431169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:02:15.688854  431169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38655
	I0819 19:02:15.689402  431169 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:02:15.690000  431169 main.go:141] libmachine: Using API Version  1
	I0819 19:02:15.690028  431169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:02:15.690533  431169 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:02:15.690766  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetMachineName
	I0819 19:02:15.690941  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:02:15.691066  431169 start.go:159] libmachine.API.Create for "old-k8s-version-104669" (driver="kvm2")
	I0819 19:02:15.691089  431169 client.go:168] LocalClient.Create starting
	I0819 19:02:15.691117  431169 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem
	I0819 19:02:15.691152  431169 main.go:141] libmachine: Decoding PEM data...
	I0819 19:02:15.691166  431169 main.go:141] libmachine: Parsing certificate...
	I0819 19:02:15.691227  431169 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem
	I0819 19:02:15.691248  431169 main.go:141] libmachine: Decoding PEM data...
	I0819 19:02:15.691260  431169 main.go:141] libmachine: Parsing certificate...
	I0819 19:02:15.691279  431169 main.go:141] libmachine: Running pre-create checks...
	I0819 19:02:15.691288  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .PreCreateCheck
	I0819 19:02:15.691629  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetConfigRaw
	I0819 19:02:15.692101  431169 main.go:141] libmachine: Creating machine...
	I0819 19:02:15.692120  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .Create
	I0819 19:02:15.692262  431169 main.go:141] libmachine: (old-k8s-version-104669) Creating KVM machine...
	I0819 19:02:15.693488  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | found existing default KVM network
	I0819 19:02:15.694742  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:02:15.694563  431517 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:ca:98:0a} reservation:<nil>}
	I0819 19:02:15.696031  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:02:15.695940  431517 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00011dd60}
	I0819 19:02:15.696056  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | created network xml: 
	I0819 19:02:15.696071  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | <network>
	I0819 19:02:15.696084  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG |   <name>mk-old-k8s-version-104669</name>
	I0819 19:02:15.696097  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG |   <dns enable='no'/>
	I0819 19:02:15.696107  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG |   
	I0819 19:02:15.696119  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0819 19:02:15.696130  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG |     <dhcp>
	I0819 19:02:15.696145  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0819 19:02:15.696164  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG |     </dhcp>
	I0819 19:02:15.696176  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG |   </ip>
	I0819 19:02:15.696182  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG |   
	I0819 19:02:15.696190  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | </network>
	I0819 19:02:15.696197  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | 
	I0819 19:02:15.702113  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | trying to create private KVM network mk-old-k8s-version-104669 192.168.50.0/24...
	I0819 19:02:15.789444  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | private KVM network mk-old-k8s-version-104669 192.168.50.0/24 created
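The <network> definition echoed above can be cross-checked against what libvirt actually stored for mk-old-k8s-version-104669. A minimal Go sketch for doing that outside the test run, assuming virsh is on the PATH and qemu:///system is reachable (illustrative only, not part of the captured log or minikube's driver code):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		// Dump the XML libvirt stored for the network minikube just created,
		// so the <ip>/<dhcp> ranges can be compared with the definition above.
		out, err := exec.Command("virsh", "--connect", "qemu:///system",
			"net-dumpxml", "mk-old-k8s-version-104669").CombinedOutput()
		if err != nil {
			log.Fatalf("virsh net-dumpxml failed: %v\n%s", err, out)
		}
		fmt.Printf("%s", out)
	}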
	I0819 19:02:15.789492  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:02:15.789406  431517 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 19:02:15.789512  431169 main.go:141] libmachine: (old-k8s-version-104669) Setting up store path in /home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669 ...
	I0819 19:02:15.789527  431169 main.go:141] libmachine: (old-k8s-version-104669) Building disk image from file:///home/jenkins/minikube-integration/19468-372744/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 19:02:15.789547  431169 main.go:141] libmachine: (old-k8s-version-104669) Downloading /home/jenkins/minikube-integration/19468-372744/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19468-372744/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 19:02:16.068774  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:02:16.068557  431517 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa...
	I0819 19:02:16.230593  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:02:16.230448  431517 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/old-k8s-version-104669.rawdisk...
	I0819 19:02:16.230641  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | Writing magic tar header
	I0819 19:02:16.230676  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | Writing SSH key tar header
	I0819 19:02:16.230691  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:02:16.230565  431517 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669 ...
	I0819 19:02:16.230710  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669
	I0819 19:02:16.230722  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744/.minikube/machines
	I0819 19:02:16.230737  431169 main.go:141] libmachine: (old-k8s-version-104669) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669 (perms=drwx------)
	I0819 19:02:16.230752  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 19:02:16.230764  431169 main.go:141] libmachine: (old-k8s-version-104669) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744/.minikube/machines (perms=drwxr-xr-x)
	I0819 19:02:16.230788  431169 main.go:141] libmachine: (old-k8s-version-104669) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744/.minikube (perms=drwxr-xr-x)
	I0819 19:02:16.230806  431169 main.go:141] libmachine: (old-k8s-version-104669) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744 (perms=drwxrwxr-x)
	I0819 19:02:16.230816  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744
	I0819 19:02:16.230825  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 19:02:16.230832  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | Checking permissions on dir: /home/jenkins
	I0819 19:02:16.230839  431169 main.go:141] libmachine: (old-k8s-version-104669) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 19:02:16.230848  431169 main.go:141] libmachine: (old-k8s-version-104669) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 19:02:16.230855  431169 main.go:141] libmachine: (old-k8s-version-104669) Creating domain...
	I0819 19:02:16.230865  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | Checking permissions on dir: /home
	I0819 19:02:16.230873  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | Skipping /home - not owner
	I0819 19:02:16.232101  431169 main.go:141] libmachine: (old-k8s-version-104669) define libvirt domain using xml: 
	I0819 19:02:16.232128  431169 main.go:141] libmachine: (old-k8s-version-104669) <domain type='kvm'>
	I0819 19:02:16.232138  431169 main.go:141] libmachine: (old-k8s-version-104669)   <name>old-k8s-version-104669</name>
	I0819 19:02:16.232146  431169 main.go:141] libmachine: (old-k8s-version-104669)   <memory unit='MiB'>2200</memory>
	I0819 19:02:16.232155  431169 main.go:141] libmachine: (old-k8s-version-104669)   <vcpu>2</vcpu>
	I0819 19:02:16.232163  431169 main.go:141] libmachine: (old-k8s-version-104669)   <features>
	I0819 19:02:16.232175  431169 main.go:141] libmachine: (old-k8s-version-104669)     <acpi/>
	I0819 19:02:16.232185  431169 main.go:141] libmachine: (old-k8s-version-104669)     <apic/>
	I0819 19:02:16.232195  431169 main.go:141] libmachine: (old-k8s-version-104669)     <pae/>
	I0819 19:02:16.232211  431169 main.go:141] libmachine: (old-k8s-version-104669)     
	I0819 19:02:16.232225  431169 main.go:141] libmachine: (old-k8s-version-104669)   </features>
	I0819 19:02:16.232235  431169 main.go:141] libmachine: (old-k8s-version-104669)   <cpu mode='host-passthrough'>
	I0819 19:02:16.232247  431169 main.go:141] libmachine: (old-k8s-version-104669)   
	I0819 19:02:16.232256  431169 main.go:141] libmachine: (old-k8s-version-104669)   </cpu>
	I0819 19:02:16.232269  431169 main.go:141] libmachine: (old-k8s-version-104669)   <os>
	I0819 19:02:16.232281  431169 main.go:141] libmachine: (old-k8s-version-104669)     <type>hvm</type>
	I0819 19:02:16.232298  431169 main.go:141] libmachine: (old-k8s-version-104669)     <boot dev='cdrom'/>
	I0819 19:02:16.232310  431169 main.go:141] libmachine: (old-k8s-version-104669)     <boot dev='hd'/>
	I0819 19:02:16.232321  431169 main.go:141] libmachine: (old-k8s-version-104669)     <bootmenu enable='no'/>
	I0819 19:02:16.232331  431169 main.go:141] libmachine: (old-k8s-version-104669)   </os>
	I0819 19:02:16.232343  431169 main.go:141] libmachine: (old-k8s-version-104669)   <devices>
	I0819 19:02:16.232359  431169 main.go:141] libmachine: (old-k8s-version-104669)     <disk type='file' device='cdrom'>
	I0819 19:02:16.232378  431169 main.go:141] libmachine: (old-k8s-version-104669)       <source file='/home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/boot2docker.iso'/>
	I0819 19:02:16.232390  431169 main.go:141] libmachine: (old-k8s-version-104669)       <target dev='hdc' bus='scsi'/>
	I0819 19:02:16.232401  431169 main.go:141] libmachine: (old-k8s-version-104669)       <readonly/>
	I0819 19:02:16.232410  431169 main.go:141] libmachine: (old-k8s-version-104669)     </disk>
	I0819 19:02:16.232421  431169 main.go:141] libmachine: (old-k8s-version-104669)     <disk type='file' device='disk'>
	I0819 19:02:16.232439  431169 main.go:141] libmachine: (old-k8s-version-104669)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 19:02:16.232479  431169 main.go:141] libmachine: (old-k8s-version-104669)       <source file='/home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/old-k8s-version-104669.rawdisk'/>
	I0819 19:02:16.232491  431169 main.go:141] libmachine: (old-k8s-version-104669)       <target dev='hda' bus='virtio'/>
	I0819 19:02:16.232531  431169 main.go:141] libmachine: (old-k8s-version-104669)     </disk>
	I0819 19:02:16.232555  431169 main.go:141] libmachine: (old-k8s-version-104669)     <interface type='network'>
	I0819 19:02:16.232566  431169 main.go:141] libmachine: (old-k8s-version-104669)       <source network='mk-old-k8s-version-104669'/>
	I0819 19:02:16.232577  431169 main.go:141] libmachine: (old-k8s-version-104669)       <model type='virtio'/>
	I0819 19:02:16.232591  431169 main.go:141] libmachine: (old-k8s-version-104669)     </interface>
	I0819 19:02:16.232603  431169 main.go:141] libmachine: (old-k8s-version-104669)     <interface type='network'>
	I0819 19:02:16.232614  431169 main.go:141] libmachine: (old-k8s-version-104669)       <source network='default'/>
	I0819 19:02:16.232625  431169 main.go:141] libmachine: (old-k8s-version-104669)       <model type='virtio'/>
	I0819 19:02:16.232662  431169 main.go:141] libmachine: (old-k8s-version-104669)     </interface>
	I0819 19:02:16.232690  431169 main.go:141] libmachine: (old-k8s-version-104669)     <serial type='pty'>
	I0819 19:02:16.232710  431169 main.go:141] libmachine: (old-k8s-version-104669)       <target port='0'/>
	I0819 19:02:16.232728  431169 main.go:141] libmachine: (old-k8s-version-104669)     </serial>
	I0819 19:02:16.232740  431169 main.go:141] libmachine: (old-k8s-version-104669)     <console type='pty'>
	I0819 19:02:16.232752  431169 main.go:141] libmachine: (old-k8s-version-104669)       <target type='serial' port='0'/>
	I0819 19:02:16.232763  431169 main.go:141] libmachine: (old-k8s-version-104669)     </console>
	I0819 19:02:16.232773  431169 main.go:141] libmachine: (old-k8s-version-104669)     <rng model='virtio'>
	I0819 19:02:16.232799  431169 main.go:141] libmachine: (old-k8s-version-104669)       <backend model='random'>/dev/random</backend>
	I0819 19:02:16.232815  431169 main.go:141] libmachine: (old-k8s-version-104669)     </rng>
	I0819 19:02:16.232829  431169 main.go:141] libmachine: (old-k8s-version-104669)     
	I0819 19:02:16.232839  431169 main.go:141] libmachine: (old-k8s-version-104669)     
	I0819 19:02:16.232851  431169 main.go:141] libmachine: (old-k8s-version-104669)   </devices>
	I0819 19:02:16.232860  431169 main.go:141] libmachine: (old-k8s-version-104669) </domain>
	I0819 19:02:16.232872  431169 main.go:141] libmachine: (old-k8s-version-104669) 
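The domain XML above declares two <interface> elements (one on mk-old-k8s-version-104669, one on the default network), which is why the next DBG lines report two MAC addresses. A small illustrative Go sketch, again shelling out to virsh, to list the interfaces libvirt defined for the domain (assumes virsh and qemu:///system as before):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		// "virsh domiflist" prints one row per defined interface with its
		// MAC address and source network, matching the DBG lines below.
		out, err := exec.Command("virsh", "--connect", "qemu:///system",
			"domiflist", "old-k8s-version-104669").CombinedOutput()
		if err != nil {
			log.Fatalf("virsh domiflist failed: %v\n%s", err, out)
		}
		fmt.Printf("%s", out)
	}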
	I0819 19:02:16.237064  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:db:40:7b in network default
	I0819 19:02:16.237651  431169 main.go:141] libmachine: (old-k8s-version-104669) Ensuring networks are active...
	I0819 19:02:16.237678  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:16.238464  431169 main.go:141] libmachine: (old-k8s-version-104669) Ensuring network default is active
	I0819 19:02:16.238840  431169 main.go:141] libmachine: (old-k8s-version-104669) Ensuring network mk-old-k8s-version-104669 is active
	I0819 19:02:16.239323  431169 main.go:141] libmachine: (old-k8s-version-104669) Getting domain xml...
	I0819 19:02:16.240193  431169 main.go:141] libmachine: (old-k8s-version-104669) Creating domain...
	I0819 19:02:17.581462  431169 main.go:141] libmachine: (old-k8s-version-104669) Waiting to get IP...
	I0819 19:02:17.582262  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:17.582722  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:02:17.582754  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:02:17.582724  431517 retry.go:31] will retry after 255.891838ms: waiting for machine to come up
	I0819 19:02:17.840432  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:17.840988  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:02:17.841016  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:02:17.840945  431517 retry.go:31] will retry after 368.229938ms: waiting for machine to come up
	I0819 19:02:18.211257  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:18.212316  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:02:18.212347  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:02:18.212281  431517 retry.go:31] will retry after 362.555152ms: waiting for machine to come up
	I0819 19:02:18.577063  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:18.577707  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:02:18.577738  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:02:18.577655  431517 retry.go:31] will retry after 526.66832ms: waiting for machine to come up
	I0819 19:02:19.106734  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:19.107307  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:02:19.107334  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:02:19.107260  431517 retry.go:31] will retry after 477.301708ms: waiting for machine to come up
	I0819 19:02:19.585731  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:19.586203  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:02:19.586234  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:02:19.586166  431517 retry.go:31] will retry after 914.671525ms: waiting for machine to come up
	I0819 19:02:20.503135  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:20.503969  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:02:20.504004  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:02:20.503908  431517 retry.go:31] will retry after 768.418836ms: waiting for machine to come up
	I0819 19:02:21.275830  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:21.275871  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:02:21.275885  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:02:21.274536  431517 retry.go:31] will retry after 1.377092431s: waiting for machine to come up
	I0819 19:02:22.653159  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:22.653712  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:02:22.653740  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:02:22.653656  431517 retry.go:31] will retry after 1.290834241s: waiting for machine to come up
	I0819 19:02:23.946081  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:23.946568  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:02:23.946600  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:02:23.946505  431517 retry.go:31] will retry after 2.319921189s: waiting for machine to come up
	I0819 19:02:26.267887  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:26.268553  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:02:26.268584  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:02:26.268467  431517 retry.go:31] will retry after 2.756634216s: waiting for machine to come up
	I0819 19:02:29.026626  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:29.027250  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:02:29.027275  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:02:29.027190  431517 retry.go:31] will retry after 3.512615559s: waiting for machine to come up
	I0819 19:02:32.938013  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:32.938601  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:02:32.938631  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:02:32.938547  431517 retry.go:31] will retry after 4.063996441s: waiting for machine to come up
	I0819 19:02:37.004864  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:37.005444  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:02:37.005483  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:02:37.005377  431517 retry.go:31] will retry after 4.135723056s: waiting for machine to come up
	I0819 19:02:41.142323  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:41.142905  431169 main.go:141] libmachine: (old-k8s-version-104669) Found IP for machine: 192.168.50.32
	I0819 19:02:41.142934  431169 main.go:141] libmachine: (old-k8s-version-104669) Reserving static IP address...
	I0819 19:02:41.142951  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has current primary IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:41.143258  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-104669", mac: "52:54:00:8c:ff:a3", ip: "192.168.50.32"} in network mk-old-k8s-version-104669
	I0819 19:02:41.231709  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | Getting to WaitForSSH function...
	I0819 19:02:41.231760  431169 main.go:141] libmachine: (old-k8s-version-104669) Reserved static IP address: 192.168.50.32
	I0819 19:02:41.231775  431169 main.go:141] libmachine: (old-k8s-version-104669) Waiting for SSH to be available...
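The "will retry after ..." lines above come from a polling loop that waits for the freshly defined domain to obtain a DHCP lease, sleeping a growing, jittered interval between checks. A rough sketch of that pattern in Go (not minikube's actual retry.go; the attempt count, base interval, and placeholder check here are illustrative):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retry keeps calling check until it succeeds or attempts run out,
	// sleeping a growing, randomized interval between tries - the same
	// shape as the "will retry after ..." lines in the log above.
	func retry(attempts int, base time.Duration, check func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = check(); err == nil {
				return nil
			}
			wait := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
		}
		return fmt.Errorf("machine never came up after %d attempts: %w", attempts, err)
	}

	func main() {
		// Placeholder check; the real loop asks libvirt for the domain's lease.
		err := retry(5, 300*time.Millisecond, func() error {
			return errors.New("unable to find current IP address of domain")
		})
		fmt.Println(err)
	}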
	I0819 19:02:41.234888  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:41.235309  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669
	I0819 19:02:41.235339  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find defined IP address of network mk-old-k8s-version-104669 interface with MAC address 52:54:00:8c:ff:a3
	I0819 19:02:41.235521  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | Using SSH client type: external
	I0819 19:02:41.235571  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | Using SSH private key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa (-rw-------)
	I0819 19:02:41.235615  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 19:02:41.235643  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | About to run SSH command:
	I0819 19:02:41.235661  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | exit 0
	I0819 19:02:41.239557  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | SSH cmd err, output: exit status 255: 
	I0819 19:02:41.239596  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0819 19:02:41.239615  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | command : exit 0
	I0819 19:02:41.239627  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | err     : exit status 255
	I0819 19:02:41.239638  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | output  : 
	I0819 19:02:44.239930  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | Getting to WaitForSSH function...
	I0819 19:02:44.242914  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:44.243458  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:02:32 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:02:44.243493  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:44.243693  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | Using SSH client type: external
	I0819 19:02:44.243720  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | Using SSH private key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa (-rw-------)
	I0819 19:02:44.243771  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 19:02:44.243798  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | About to run SSH command:
	I0819 19:02:44.243817  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | exit 0
	I0819 19:02:44.385104  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | SSH cmd err, output: <nil>: 
	I0819 19:02:44.385459  431169 main.go:141] libmachine: (old-k8s-version-104669) KVM machine creation complete!
	I0819 19:02:44.385812  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetConfigRaw
	I0819 19:02:44.386498  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:02:44.386717  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:02:44.386949  431169 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 19:02:44.386970  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetState
	I0819 19:02:44.388298  431169 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 19:02:44.388315  431169 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 19:02:44.388323  431169 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 19:02:44.388332  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:02:44.390981  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:44.391386  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:02:32 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:02:44.391425  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:44.391705  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:02:44.391923  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:02:44.392113  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:02:44.392279  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:02:44.392467  431169 main.go:141] libmachine: Using SSH client type: native
	I0819 19:02:44.392689  431169 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0819 19:02:44.392703  431169 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 19:02:44.511242  431169 main.go:141] libmachine: SSH cmd err, output: <nil>: 
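The `exit 0` probe above is simply "can an SSH session be opened and a no-op run". The log shows libmachine first shelling out to /usr/bin/ssh and then switching to a native Go client once the host answers. A minimal equivalent of that liveness check using golang.org/x/crypto/ssh, with the key path and address taken from this run's log (a sketch, not the libmachine implementation):

	package main

	import (
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		keyPath := "/home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa"
		key, err := os.ReadFile(keyPath)
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM; mirrors StrictHostKeyChecking=no above
		}
		client, err := ssh.Dial("tcp", "192.168.50.32:22", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()
		if err := sess.Run("exit 0"); err != nil { // the same no-op probe as the log
			log.Fatal(err)
		}
		log.Println("SSH is up")
	}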
	I0819 19:02:44.511275  431169 main.go:141] libmachine: Detecting the provisioner...
	I0819 19:02:44.511287  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:02:44.514049  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:44.514456  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:02:32 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:02:44.514486  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:44.514703  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:02:44.514886  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:02:44.515049  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:02:44.515245  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:02:44.515414  431169 main.go:141] libmachine: Using SSH client type: native
	I0819 19:02:44.515607  431169 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0819 19:02:44.515620  431169 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 19:02:44.633270  431169 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 19:02:44.633338  431169 main.go:141] libmachine: found compatible host: buildroot
	I0819 19:02:44.633348  431169 main.go:141] libmachine: Provisioning with buildroot...
	I0819 19:02:44.633359  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetMachineName
	I0819 19:02:44.633651  431169 buildroot.go:166] provisioning hostname "old-k8s-version-104669"
	I0819 19:02:44.633683  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetMachineName
	I0819 19:02:44.633937  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:02:44.637055  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:44.637432  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:02:32 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:02:44.637468  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:44.637749  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:02:44.637932  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:02:44.638097  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:02:44.638267  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:02:44.638407  431169 main.go:141] libmachine: Using SSH client type: native
	I0819 19:02:44.638644  431169 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0819 19:02:44.638661  431169 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-104669 && echo "old-k8s-version-104669" | sudo tee /etc/hostname
	I0819 19:02:44.767020  431169 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-104669
	
	I0819 19:02:44.767052  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:02:44.770053  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:44.770372  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:02:32 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:02:44.770403  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:44.770594  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:02:44.770775  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:02:44.770939  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:02:44.771045  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:02:44.771243  431169 main.go:141] libmachine: Using SSH client type: native
	I0819 19:02:44.771422  431169 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0819 19:02:44.771445  431169 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-104669' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-104669/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-104669' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 19:02:44.893306  431169 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:02:44.893342  431169 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19468-372744/.minikube CaCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19468-372744/.minikube}
	I0819 19:02:44.893392  431169 buildroot.go:174] setting up certificates
	I0819 19:02:44.893405  431169 provision.go:84] configureAuth start
	I0819 19:02:44.893417  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetMachineName
	I0819 19:02:44.893704  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetIP
	I0819 19:02:44.896387  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:44.896724  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:02:32 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:02:44.896747  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:44.896878  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:02:44.899586  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:44.899946  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:02:32 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:02:44.899975  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:44.900117  431169 provision.go:143] copyHostCerts
	I0819 19:02:44.900179  431169 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem, removing ...
	I0819 19:02:44.900196  431169 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 19:02:44.900255  431169 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem (1675 bytes)
	I0819 19:02:44.900399  431169 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem, removing ...
	I0819 19:02:44.900411  431169 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 19:02:44.900434  431169 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem (1082 bytes)
	I0819 19:02:44.900504  431169 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem, removing ...
	I0819 19:02:44.900512  431169 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 19:02:44.900530  431169 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem (1123 bytes)
	I0819 19:02:44.900585  431169 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-104669 san=[127.0.0.1 192.168.50.32 localhost minikube old-k8s-version-104669]
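The server certificate generated above is issued with the SAN list shown in the san=[...] field (127.0.0.1, 192.168.50.32, localhost, minikube, old-k8s-version-104669). A small standard-library sketch to confirm what actually ended up in server.pem, using the path from the log (illustrative only):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
	)

	func main() {
		data, err := os.ReadFile("/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// These should match the san=[...] and org= fields in the provision.go line above.
		fmt.Println("DNS SANs:", cert.DNSNames)
		fmt.Println("IP SANs :", cert.IPAddresses)
		fmt.Println("Org     :", cert.Subject.Organization)
	}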
	I0819 19:02:45.019651  431169 provision.go:177] copyRemoteCerts
	I0819 19:02:45.019728  431169 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 19:02:45.019755  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:02:45.022412  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:45.022715  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:02:32 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:02:45.022749  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:45.022923  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:02:45.023155  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:02:45.023416  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:02:45.023585  431169 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa Username:docker}
	I0819 19:02:45.110648  431169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 19:02:45.136744  431169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0819 19:02:45.161150  431169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 19:02:45.185464  431169 provision.go:87] duration metric: took 292.041631ms to configureAuth
	I0819 19:02:45.185502  431169 buildroot.go:189] setting minikube options for container-runtime
	I0819 19:02:45.185685  431169 config.go:182] Loaded profile config "old-k8s-version-104669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0819 19:02:45.185773  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:02:45.188912  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:45.189253  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:02:32 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:02:45.189298  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:45.189489  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:02:45.189693  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:02:45.189819  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:02:45.190007  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:02:45.190181  431169 main.go:141] libmachine: Using SSH client type: native
	I0819 19:02:45.190352  431169 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0819 19:02:45.190366  431169 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 19:02:45.469864  431169 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 19:02:45.469900  431169 main.go:141] libmachine: Checking connection to Docker...
	I0819 19:02:45.469913  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetURL
	I0819 19:02:45.471351  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | Using libvirt version 6000000
	I0819 19:02:45.473713  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:45.474081  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:02:32 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:02:45.474114  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:45.474271  431169 main.go:141] libmachine: Docker is up and running!
	I0819 19:02:45.474291  431169 main.go:141] libmachine: Reticulating splines...
	I0819 19:02:45.474300  431169 client.go:171] duration metric: took 29.78320122s to LocalClient.Create
	I0819 19:02:45.474327  431169 start.go:167] duration metric: took 29.783261134s to libmachine.API.Create "old-k8s-version-104669"
	I0819 19:02:45.474337  431169 start.go:293] postStartSetup for "old-k8s-version-104669" (driver="kvm2")
	I0819 19:02:45.474348  431169 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 19:02:45.474376  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:02:45.474633  431169 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 19:02:45.474659  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:02:45.477170  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:45.477479  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:02:32 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:02:45.477512  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:45.477653  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:02:45.477828  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:02:45.478009  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:02:45.478136  431169 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa Username:docker}
	I0819 19:02:45.566843  431169 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 19:02:45.571257  431169 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 19:02:45.571284  431169 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/addons for local assets ...
	I0819 19:02:45.571344  431169 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/files for local assets ...
	I0819 19:02:45.571416  431169 filesync.go:149] local asset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> 3800092.pem in /etc/ssl/certs
	I0819 19:02:45.571514  431169 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 19:02:45.580862  431169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:02:45.606651  431169 start.go:296] duration metric: took 132.294733ms for postStartSetup
	I0819 19:02:45.606725  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetConfigRaw
	I0819 19:02:45.607338  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetIP
	I0819 19:02:45.610223  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:45.610582  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:02:32 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:02:45.610605  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:45.610876  431169 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/config.json ...
	I0819 19:02:45.611087  431169 start.go:128] duration metric: took 29.946081493s to createHost
	I0819 19:02:45.611115  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:02:45.613395  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:45.613720  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:02:32 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:02:45.613748  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:45.613947  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:02:45.614186  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:02:45.614385  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:02:45.614527  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:02:45.614687  431169 main.go:141] libmachine: Using SSH client type: native
	I0819 19:02:45.614945  431169 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0819 19:02:45.614963  431169 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 19:02:45.732736  431169 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724094165.710733372
	
	I0819 19:02:45.732765  431169 fix.go:216] guest clock: 1724094165.710733372
	I0819 19:02:45.732775  431169 fix.go:229] Guest: 2024-08-19 19:02:45.710733372 +0000 UTC Remote: 2024-08-19 19:02:45.611100575 +0000 UTC m=+47.945196205 (delta=99.632797ms)
	I0819 19:02:45.732818  431169 fix.go:200] guest clock delta is within tolerance: 99.632797ms
	I0819 19:02:45.732823  431169 start.go:83] releasing machines lock for "old-k8s-version-104669", held for 30.068015637s
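The fix.go lines above read the guest clock over SSH with 'date +%s.%N', compare it to the host clock, and only act when the drift exceeds a tolerance (here the delta of 99.632797ms is accepted). A minimal sketch of that comparison in Go, using the timestamps from the log; the one-second tolerance is a hypothetical value for illustration, not minikube's actual threshold:

    package main

    import (
        "fmt"
        "math"
        "time"
    )

    // clockDelta reports how far the guest clock is from the host clock and
    // whether that drift falls inside the allowed tolerance.
    func clockDelta(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
        delta := guest.Sub(host)
        return delta, math.Abs(float64(delta)) <= float64(tolerance)
    }

    func main() {
        // Timestamps taken from the log entry above (delta=99.632797ms).
        host := time.Date(2024, 8, 19, 19, 2, 45, 611100575, time.UTC)
        guest := time.Date(2024, 8, 19, 19, 2, 45, 710733372, time.UTC)

        // Hypothetical tolerance, for illustration only.
        delta, ok := clockDelta(guest, host, time.Second)
        fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, ok)
    }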
	I0819 19:02:45.732851  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:02:45.733140  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetIP
	I0819 19:02:45.736117  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:45.736473  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:02:32 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:02:45.736503  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:45.736726  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:02:45.737366  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:02:45.737552  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:02:45.737665  431169 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 19:02:45.737718  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:02:45.737836  431169 ssh_runner.go:195] Run: cat /version.json
	I0819 19:02:45.737865  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:02:45.740790  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:45.740967  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:45.741139  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:02:32 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:02:45.741168  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:45.741317  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:02:45.741327  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:02:32 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:02:45.741372  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:45.741490  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:02:45.741550  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:02:45.741630  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:02:45.741723  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:02:45.741735  431169 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa Username:docker}
	I0819 19:02:45.741902  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:02:45.742039  431169 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa Username:docker}
	I0819 19:02:45.848845  431169 ssh_runner.go:195] Run: systemctl --version
	I0819 19:02:45.855496  431169 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 19:02:46.028635  431169 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 19:02:46.035920  431169 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 19:02:46.036020  431169 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 19:02:46.052944  431169 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 19:02:46.052973  431169 start.go:495] detecting cgroup driver to use...
	I0819 19:02:46.053053  431169 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 19:02:46.071388  431169 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 19:02:46.086055  431169 docker.go:217] disabling cri-docker service (if available) ...
	I0819 19:02:46.086123  431169 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 19:02:46.100706  431169 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 19:02:46.115217  431169 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 19:02:46.234139  431169 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 19:02:46.394046  431169 docker.go:233] disabling docker service ...
	I0819 19:02:46.394123  431169 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 19:02:46.408698  431169 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 19:02:46.423046  431169 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 19:02:46.564733  431169 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 19:02:46.679697  431169 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 19:02:46.697624  431169 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 19:02:46.717571  431169 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0819 19:02:46.717630  431169 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:02:46.729452  431169 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 19:02:46.729538  431169 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:02:46.741112  431169 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:02:46.752381  431169 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
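The steps above point crictl at the CRI-O socket by writing /etc/crictl.yaml, then rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses registry.k8s.io/pause:3.2 as its pause image and cgroupfs as its cgroup manager. A minimal sketch of the same two edits done as local file operations rather than over ssh_runner; the regex-based rewrite mirrors the sed commands in the log but is illustrative, not minikube's implementation:

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // writeCrictlConfig points crictl at the CRI-O socket, mirroring the
    // "runtime-endpoint: unix:///var/run/crio/crio.sock" write in the log.
    func writeCrictlConfig(path string) error {
        return os.WriteFile(path, []byte("runtime-endpoint: unix:///var/run/crio/crio.sock\n"), 0o644)
    }

    // rewriteCrioConf replays the two sed edits from the log: pin the pause
    // image and switch the cgroup manager to cgroupfs.
    func rewriteCrioConf(path string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))
        out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
        return os.WriteFile(path, out, 0o644)
    }

    func main() {
        // Paths here are scratch copies; the log edits the files in place on the guest.
        if err := writeCrictlConfig("/tmp/crictl.yaml"); err != nil {
            fmt.Println(err)
        }
        if err := rewriteCrioConf("/tmp/02-crio.conf"); err != nil {
            fmt.Println(err)
        }
    }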
	I0819 19:02:46.763846  431169 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 19:02:46.776650  431169 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 19:02:46.786883  431169 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 19:02:46.786937  431169 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 19:02:46.804269  431169 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
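The netfilter block above is a fallback chain: probe the net.bridge.bridge-nf-call-iptables sysctl, and when the key is missing (status 255 here, because br_netfilter is not yet loaded) load the module, then enable IPv4 forwarding. A minimal sketch of that chain with os/exec, running the same commands the log shows; error handling is simplified:

    package main

    import (
        "log"
        "os/exec"
    )

    // ensureBridgeNetfilter mirrors the sequence in the log: verify the
    // bridge-nf-call-iptables sysctl, load br_netfilter if it is missing,
    // and finally turn on IPv4 forwarding.
    func ensureBridgeNetfilter() error {
        if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
            // The sysctl key does not exist yet; the br_netfilter module provides it.
            if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
                return err
            }
        }
        return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
    }

    func main() {
        if err := ensureBridgeNetfilter(); err != nil {
            log.Fatal(err)
        }
    }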
	I0819 19:02:46.814630  431169 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:02:46.937905  431169 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 19:02:47.089643  431169 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 19:02:47.089727  431169 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 19:02:47.094542  431169 start.go:563] Will wait 60s for crictl version
	I0819 19:02:47.094608  431169 ssh_runner.go:195] Run: which crictl
	I0819 19:02:47.098582  431169 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 19:02:47.145695  431169 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 19:02:47.145787  431169 ssh_runner.go:195] Run: crio --version
	I0819 19:02:47.173900  431169 ssh_runner.go:195] Run: crio --version
	I0819 19:02:47.205467  431169 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0819 19:02:47.206857  431169 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetIP
	I0819 19:02:47.210049  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:47.210538  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:02:32 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:02:47.210566  431169 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:02:47.210861  431169 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0819 19:02:47.214947  431169 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:02:47.233554  431169 kubeadm.go:883] updating cluster {Name:old-k8s-version-104669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-104669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 19:02:47.233670  431169 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 19:02:47.233712  431169 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:02:47.273894  431169 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 19:02:47.273979  431169 ssh_runner.go:195] Run: which lz4
	I0819 19:02:47.279023  431169 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 19:02:47.283977  431169 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 19:02:47.284011  431169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0819 19:02:49.008819  431169 crio.go:462] duration metric: took 1.729827884s to copy over tarball
	I0819 19:02:49.008976  431169 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 19:02:51.980376  431169 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.971359868s)
	I0819 19:02:51.980411  431169 crio.go:469] duration metric: took 2.971551956s to extract the tarball
	I0819 19:02:51.980422  431169 ssh_runner.go:146] rm: /preloaded.tar.lz4
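Since no preloaded images were found in the runtime, the ~473 MB preload tarball is copied to the guest and unpacked into /var before the image check re-runs. A minimal sketch of the extract-and-clean-up step, using the same tar flags as the log but plain exec.Command instead of minikube's ssh_runner:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Unpack the preload into /var, preserving security.capability xattrs,
        // exactly as the tar invocation in the log does.
        extract := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        if out, err := extract.CombinedOutput(); err != nil {
            log.Fatalf("extract failed: %v\n%s", err, out)
        }

        // The tarball is removed once extracted, matching the rm in the log.
        if err := exec.Command("sudo", "rm", "-f", "/preloaded.tar.lz4").Run(); err != nil {
            log.Fatal(err)
        }
    }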
	I0819 19:02:52.043708  431169 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:02:52.141120  431169 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 19:02:52.141153  431169 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 19:02:52.141238  431169 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:02:52.141268  431169 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:02:52.141278  431169 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:02:52.141300  431169 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0819 19:02:52.141270  431169 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:02:52.141359  431169 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0819 19:02:52.141242  431169 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:02:52.141528  431169 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0819 19:02:52.143278  431169 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0819 19:02:52.143318  431169 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:02:52.143323  431169 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0819 19:02:52.143321  431169 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0819 19:02:52.143367  431169 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:02:52.143411  431169 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:02:52.143519  431169 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:02:52.143555  431169 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:02:52.286466  431169 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:02:52.332079  431169 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0819 19:02:52.332145  431169 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:02:52.332198  431169 ssh_runner.go:195] Run: which crictl
	I0819 19:02:52.338128  431169 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:02:52.372713  431169 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:02:52.382369  431169 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:02:52.454664  431169 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:02:52.454798  431169 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0819 19:02:52.454833  431169 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:02:52.454865  431169 ssh_runner.go:195] Run: which crictl
	I0819 19:02:52.459835  431169 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:02:52.482903  431169 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:02:52.487052  431169 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0819 19:02:52.487085  431169 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0819 19:02:52.497084  431169 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:02:52.526201  431169 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0819 19:02:52.531659  431169 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0819 19:02:52.560734  431169 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:02:52.705378  431169 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0819 19:02:52.705404  431169 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0819 19:02:52.705426  431169 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0819 19:02:52.705426  431169 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:02:52.705468  431169 ssh_runner.go:195] Run: which crictl
	I0819 19:02:52.705468  431169 ssh_runner.go:195] Run: which crictl
	I0819 19:02:52.714325  431169 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0819 19:02:52.714370  431169 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0819 19:02:52.714416  431169 ssh_runner.go:195] Run: which crictl
	I0819 19:02:52.726619  431169 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0819 19:02:52.726666  431169 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0819 19:02:52.726720  431169 ssh_runner.go:195] Run: which crictl
	I0819 19:02:52.726826  431169 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0819 19:02:52.726849  431169 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:02:52.726851  431169 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:02:52.726919  431169 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:02:52.726925  431169 ssh_runner.go:195] Run: which crictl
	I0819 19:02:52.726982  431169 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 19:02:52.727042  431169 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 19:02:52.814581  431169 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 19:02:52.814694  431169 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 19:02:52.832823  431169 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:02:52.832966  431169 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:02:52.833031  431169 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0819 19:02:52.833234  431169 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 19:02:52.912897  431169 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 19:02:52.932295  431169 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 19:02:52.955152  431169 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:02:52.955229  431169 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:02:52.965972  431169 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 19:02:53.022031  431169 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0819 19:02:53.058501  431169 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 19:02:53.088317  431169 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:02:53.088473  431169 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0819 19:02:53.090779  431169 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0819 19:02:53.142107  431169 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0819 19:02:53.142209  431169 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0819 19:02:53.148012  431169 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:02:53.294954  431169 cache_images.go:92] duration metric: took 1.153778154s to LoadCachedImages
	W0819 19:02:53.295052  431169 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
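The LoadCachedImages pass above inspects each pinned image with podman image inspect --format {{.Id}}, removes mismatches with crictl rmi, and then tries to load per-image tarballs from .minikube/cache/images; the kube-proxy tarball is missing here, so the load is abandoned and the images end up being pulled during kubeadm preflight instead. A minimal sketch of the per-image presence check, with the image name and expected ID taken from the kube-proxy entry above and the cache handling omitted:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // needsTransfer reports whether the runtime is missing the image at the
    // expected ID, mirroring the "needs transfer" decision in the log.
    func needsTransfer(image, wantID string) bool {
        out, err := exec.Command("sudo", "podman", "image", "inspect",
            "--format", "{{.Id}}", image).Output()
        if err != nil {
            return true // image not present in the runtime at all
        }
        return strings.TrimSpace(string(out)) != wantID
    }

    func main() {
        // Values from the kube-proxy entry above.
        if needsTransfer("registry.k8s.io/kube-proxy:v1.20.0",
            "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc") {
            fmt.Println("registry.k8s.io/kube-proxy:v1.20.0 needs transfer")
        }
    }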
	I0819 19:02:53.295069  431169 kubeadm.go:934] updating node { 192.168.50.32 8443 v1.20.0 crio true true} ...
	I0819 19:02:53.295242  431169 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-104669 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-104669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 19:02:53.295332  431169 ssh_runner.go:195] Run: crio config
	I0819 19:02:53.349097  431169 cni.go:84] Creating CNI manager for ""
	I0819 19:02:53.349119  431169 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:02:53.349131  431169 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 19:02:53.349156  431169 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.32 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-104669 NodeName:old-k8s-version-104669 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0819 19:02:53.349337  431169 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.32
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-104669"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 19:02:53.349419  431169 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0819 19:02:53.361149  431169 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 19:02:53.361221  431169 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 19:02:53.372055  431169 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0819 19:02:53.391590  431169 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 19:02:53.410780  431169 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0819 19:02:53.430445  431169 ssh_runner.go:195] Run: grep 192.168.50.32	control-plane.minikube.internal$ /etc/hosts
	I0819 19:02:53.435255  431169 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:02:53.448794  431169 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:02:53.580549  431169 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:02:53.601645  431169 certs.go:68] Setting up /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669 for IP: 192.168.50.32
	I0819 19:02:53.601676  431169 certs.go:194] generating shared ca certs ...
	I0819 19:02:53.601696  431169 certs.go:226] acquiring lock for ca certs: {Name:mk639e03f593e0bccac045f6e9f5ba3b96cc81e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:02:53.601835  431169 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key
	I0819 19:02:53.601889  431169 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key
	I0819 19:02:53.601901  431169 certs.go:256] generating profile certs ...
	I0819 19:02:53.601976  431169 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/client.key
	I0819 19:02:53.602003  431169 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/client.crt with IP's: []
	I0819 19:02:53.884380  431169 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/client.crt ...
	I0819 19:02:53.884416  431169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/client.crt: {Name:mk6c401e3df0f25e49c8bc10318125ab025dd14f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:02:53.884627  431169 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/client.key ...
	I0819 19:02:53.884662  431169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/client.key: {Name:mkd0871925f8071f48385dc02a43159f4179be67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:02:53.884819  431169 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/apiserver.key.7101f8a0
	I0819 19:02:53.884848  431169 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/apiserver.crt.7101f8a0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.32]
	I0819 19:02:54.032819  431169 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/apiserver.crt.7101f8a0 ...
	I0819 19:02:54.032851  431169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/apiserver.crt.7101f8a0: {Name:mke20ce9bcad8da10234e3ce5139e86a4d709987 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:02:54.033067  431169 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/apiserver.key.7101f8a0 ...
	I0819 19:02:54.033092  431169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/apiserver.key.7101f8a0: {Name:mk48686063231d55072873df76da7aa6953afd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:02:54.033225  431169 certs.go:381] copying /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/apiserver.crt.7101f8a0 -> /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/apiserver.crt
	I0819 19:02:54.033371  431169 certs.go:385] copying /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/apiserver.key.7101f8a0 -> /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/apiserver.key
	I0819 19:02:54.033468  431169 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/proxy-client.key
	I0819 19:02:54.033491  431169 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/proxy-client.crt with IP's: []
	I0819 19:02:54.095565  431169 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/proxy-client.crt ...
	I0819 19:02:54.095599  431169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/proxy-client.crt: {Name:mk4b100f295ec93e5e66f7a92eb632b4b2734d35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:02:54.095808  431169 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/proxy-client.key ...
	I0819 19:02:54.095830  431169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/proxy-client.key: {Name:mkff36998df1a9e70aaf9a7db7b3d6a7ba3cb607 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:02:54.096044  431169 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem (1338 bytes)
	W0819 19:02:54.096084  431169 certs.go:480] ignoring /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009_empty.pem, impossibly tiny 0 bytes
	I0819 19:02:54.096094  431169 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 19:02:54.096122  431169 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem (1082 bytes)
	I0819 19:02:54.096178  431169 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem (1123 bytes)
	I0819 19:02:54.096204  431169 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem (1675 bytes)
	I0819 19:02:54.096241  431169 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:02:54.096931  431169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 19:02:54.131345  431169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 19:02:54.160478  431169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 19:02:54.188356  431169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 19:02:54.233929  431169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0819 19:02:54.260838  431169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 19:02:54.289301  431169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 19:02:54.321042  431169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 19:02:54.352358  431169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /usr/share/ca-certificates/3800092.pem (1708 bytes)
	I0819 19:02:54.380895  431169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 19:02:54.409999  431169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem --> /usr/share/ca-certificates/380009.pem (1338 bytes)
	I0819 19:02:54.435688  431169 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 19:02:54.453219  431169 ssh_runner.go:195] Run: openssl version
	I0819 19:02:54.459710  431169 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3800092.pem && ln -fs /usr/share/ca-certificates/3800092.pem /etc/ssl/certs/3800092.pem"
	I0819 19:02:54.470275  431169 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3800092.pem
	I0819 19:02:54.474933  431169 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:56 /usr/share/ca-certificates/3800092.pem
	I0819 19:02:54.474988  431169 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3800092.pem
	I0819 19:02:54.482051  431169 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3800092.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 19:02:54.493058  431169 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 19:02:54.503924  431169 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:02:54.509854  431169 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:02:54.509916  431169 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:02:54.516330  431169 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 19:02:54.531224  431169 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/380009.pem && ln -fs /usr/share/ca-certificates/380009.pem /etc/ssl/certs/380009.pem"
	I0819 19:02:54.550397  431169 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/380009.pem
	I0819 19:02:54.556067  431169 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:56 /usr/share/ca-certificates/380009.pem
	I0819 19:02:54.556132  431169 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/380009.pem
	I0819 19:02:54.564932  431169 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/380009.pem /etc/ssl/certs/51391683.0"
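The openssl steps above install each CA certificate under /usr/share/ca-certificates and link it into /etc/ssl/certs under its OpenSSL subject hash, which is why minikubeCA.pem ends up reachable as b5213941.0. A minimal sketch of computing the hash and creating the link; it assumes openssl is on PATH and the process can write to /etc/ssl/certs:

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // subjectHash returns the OpenSSL subject hash used to name trust-store
    // symlinks such as /etc/ssl/certs/b5213941.0.
    func subjectHash(certPath string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem"
        hash, err := subjectHash(cert)
        if err != nil {
            log.Fatal(err)
        }
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        // Equivalent to the "test -L ... || ln -fs ..." commands in the log.
        if _, err := os.Lstat(link); os.IsNotExist(err) {
            if err := os.Symlink(cert, link); err != nil {
                log.Fatal(err)
            }
        }
        fmt.Println("trusted via", link)
    }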
	I0819 19:02:54.584799  431169 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 19:02:54.595215  431169 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 19:02:54.595284  431169 kubeadm.go:392] StartCluster: {Name:old-k8s-version-104669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-104669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:02:54.595392  431169 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 19:02:54.595474  431169 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:02:54.658279  431169 cri.go:89] found id: ""
	I0819 19:02:54.658360  431169 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 19:02:54.669789  431169 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 19:02:54.680258  431169 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:02:54.691882  431169 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:02:54.691906  431169 kubeadm.go:157] found existing configuration files:
	
	I0819 19:02:54.691965  431169 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 19:02:54.701857  431169 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:02:54.701938  431169 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:02:54.712914  431169 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 19:02:54.724237  431169 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:02:54.724306  431169 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:02:54.736840  431169 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 19:02:54.746786  431169 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:02:54.746861  431169 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:02:54.758589  431169 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 19:02:54.769764  431169 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:02:54.769831  431169 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 19:02:54.783509  431169 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 19:02:55.096863  431169 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 19:04:53.429854  431169 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 19:04:53.429996  431169 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0819 19:04:53.431462  431169 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 19:04:53.431524  431169 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 19:04:53.431702  431169 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 19:04:53.431831  431169 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 19:04:53.431938  431169 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 19:04:53.431996  431169 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 19:04:53.433732  431169 out.go:235]   - Generating certificates and keys ...
	I0819 19:04:53.433813  431169 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 19:04:53.433870  431169 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 19:04:53.433973  431169 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 19:04:53.434022  431169 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 19:04:53.434073  431169 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 19:04:53.434116  431169 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 19:04:53.434164  431169 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 19:04:53.434359  431169 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-104669] and IPs [192.168.50.32 127.0.0.1 ::1]
	I0819 19:04:53.434441  431169 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 19:04:53.434588  431169 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-104669] and IPs [192.168.50.32 127.0.0.1 ::1]
	I0819 19:04:53.434687  431169 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 19:04:53.434784  431169 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 19:04:53.434839  431169 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 19:04:53.434953  431169 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 19:04:53.435035  431169 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 19:04:53.435115  431169 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 19:04:53.435206  431169 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 19:04:53.435292  431169 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 19:04:53.435420  431169 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 19:04:53.435534  431169 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 19:04:53.435600  431169 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 19:04:53.435702  431169 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 19:04:53.437265  431169 out.go:235]   - Booting up control plane ...
	I0819 19:04:53.437365  431169 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 19:04:53.437471  431169 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 19:04:53.437568  431169 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 19:04:53.437672  431169 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 19:04:53.437823  431169 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 19:04:53.437891  431169 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 19:04:53.437973  431169 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:04:53.438124  431169 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:04:53.438177  431169 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:04:53.438368  431169 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:04:53.438464  431169 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:04:53.438658  431169 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:04:53.438751  431169 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:04:53.438936  431169 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:04:53.439028  431169 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:04:53.439223  431169 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:04:53.439232  431169 kubeadm.go:310] 
	I0819 19:04:53.439281  431169 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 19:04:53.439325  431169 kubeadm.go:310] 		timed out waiting for the condition
	I0819 19:04:53.439331  431169 kubeadm.go:310] 
	I0819 19:04:53.439359  431169 kubeadm.go:310] 	This error is likely caused by:
	I0819 19:04:53.439413  431169 kubeadm.go:310] 		- The kubelet is not running
	I0819 19:04:53.439538  431169 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 19:04:53.439550  431169 kubeadm.go:310] 
	I0819 19:04:53.439637  431169 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 19:04:53.439666  431169 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 19:04:53.439716  431169 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 19:04:53.439724  431169 kubeadm.go:310] 
	I0819 19:04:53.439827  431169 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 19:04:53.439894  431169 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 19:04:53.439902  431169 kubeadm.go:310] 
	I0819 19:04:53.439983  431169 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 19:04:53.440064  431169 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 19:04:53.440183  431169 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 19:04:53.440266  431169 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 19:04:53.440274  431169 kubeadm.go:310] 
	W0819 19:04:53.440414  431169 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-104669] and IPs [192.168.50.32 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-104669] and IPs [192.168.50.32 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-104669] and IPs [192.168.50.32 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-104669] and IPs [192.168.50.32 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0819 19:04:53.440450  431169 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 19:04:54.930456  431169 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.489969772s)
	I0819 19:04:54.930552  431169 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:04:54.944941  431169 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:04:54.956420  431169 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:04:54.956444  431169 kubeadm.go:157] found existing configuration files:
	
	I0819 19:04:54.956500  431169 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 19:04:54.968075  431169 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:04:54.968139  431169 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:04:54.978183  431169 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 19:04:54.988804  431169 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:04:54.988861  431169 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:04:54.999591  431169 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 19:04:55.010599  431169 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:04:55.010663  431169 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:04:55.022296  431169 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 19:04:55.032082  431169 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:04:55.032147  431169 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 19:04:55.042603  431169 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 19:04:55.126852  431169 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 19:04:55.126915  431169 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 19:04:55.274428  431169 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 19:04:55.274616  431169 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 19:04:55.274783  431169 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 19:04:55.485605  431169 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 19:04:55.487030  431169 out.go:235]   - Generating certificates and keys ...
	I0819 19:04:55.487135  431169 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 19:04:55.487219  431169 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 19:04:55.487323  431169 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 19:04:55.487393  431169 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 19:04:55.487475  431169 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 19:04:55.487547  431169 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 19:04:55.487625  431169 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 19:04:55.487722  431169 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 19:04:55.487822  431169 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 19:04:55.487944  431169 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 19:04:55.488016  431169 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 19:04:55.488103  431169 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 19:04:55.852075  431169 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 19:04:56.049347  431169 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 19:04:56.145301  431169 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 19:04:56.262193  431169 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 19:04:56.277457  431169 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 19:04:56.279428  431169 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 19:04:56.279636  431169 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 19:04:56.414592  431169 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 19:04:56.416389  431169 out.go:235]   - Booting up control plane ...
	I0819 19:04:56.416534  431169 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 19:04:56.431030  431169 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 19:04:56.432570  431169 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 19:04:56.433453  431169 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 19:04:56.435916  431169 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 19:05:36.439617  431169 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 19:05:36.439874  431169 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:05:36.440107  431169 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:05:41.440647  431169 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:05:41.440857  431169 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:05:51.440795  431169 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:05:51.441054  431169 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:06:11.440220  431169 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:06:11.440437  431169 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:06:51.439744  431169 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:06:51.440004  431169 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:06:51.440018  431169 kubeadm.go:310] 
	I0819 19:06:51.440092  431169 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 19:06:51.440169  431169 kubeadm.go:310] 		timed out waiting for the condition
	I0819 19:06:51.440180  431169 kubeadm.go:310] 
	I0819 19:06:51.440235  431169 kubeadm.go:310] 	This error is likely caused by:
	I0819 19:06:51.440296  431169 kubeadm.go:310] 		- The kubelet is not running
	I0819 19:06:51.440438  431169 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 19:06:51.440452  431169 kubeadm.go:310] 
	I0819 19:06:51.440570  431169 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 19:06:51.440650  431169 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 19:06:51.440714  431169 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 19:06:51.440734  431169 kubeadm.go:310] 
	I0819 19:06:51.440878  431169 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 19:06:51.441003  431169 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 19:06:51.441020  431169 kubeadm.go:310] 
	I0819 19:06:51.441142  431169 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 19:06:51.441258  431169 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 19:06:51.441364  431169 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 19:06:51.441462  431169 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 19:06:51.441478  431169 kubeadm.go:310] 
	I0819 19:06:51.442105  431169 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 19:06:51.442222  431169 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 19:06:51.442311  431169 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0819 19:06:51.442406  431169 kubeadm.go:394] duration metric: took 3m56.847128616s to StartCluster
	I0819 19:06:51.442449  431169 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:06:51.442503  431169 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:06:51.484671  431169 cri.go:89] found id: ""
	I0819 19:06:51.484717  431169 logs.go:276] 0 containers: []
	W0819 19:06:51.484729  431169 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:06:51.484743  431169 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:06:51.484817  431169 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:06:51.518455  431169 cri.go:89] found id: ""
	I0819 19:06:51.518486  431169 logs.go:276] 0 containers: []
	W0819 19:06:51.518497  431169 logs.go:278] No container was found matching "etcd"
	I0819 19:06:51.518505  431169 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:06:51.518572  431169 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:06:51.553221  431169 cri.go:89] found id: ""
	I0819 19:06:51.553256  431169 logs.go:276] 0 containers: []
	W0819 19:06:51.553264  431169 logs.go:278] No container was found matching "coredns"
	I0819 19:06:51.553272  431169 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:06:51.553333  431169 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:06:51.593191  431169 cri.go:89] found id: ""
	I0819 19:06:51.593231  431169 logs.go:276] 0 containers: []
	W0819 19:06:51.593240  431169 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:06:51.593248  431169 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:06:51.593303  431169 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:06:51.628452  431169 cri.go:89] found id: ""
	I0819 19:06:51.628486  431169 logs.go:276] 0 containers: []
	W0819 19:06:51.628495  431169 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:06:51.628502  431169 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:06:51.628557  431169 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:06:51.661826  431169 cri.go:89] found id: ""
	I0819 19:06:51.661856  431169 logs.go:276] 0 containers: []
	W0819 19:06:51.661866  431169 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:06:51.661874  431169 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:06:51.661931  431169 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:06:51.694801  431169 cri.go:89] found id: ""
	I0819 19:06:51.694835  431169 logs.go:276] 0 containers: []
	W0819 19:06:51.694848  431169 logs.go:278] No container was found matching "kindnet"
	I0819 19:06:51.694863  431169 logs.go:123] Gathering logs for dmesg ...
	I0819 19:06:51.694879  431169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:06:51.708353  431169 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:06:51.708383  431169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:06:51.839485  431169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:06:51.839515  431169 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:06:51.839537  431169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:06:51.952344  431169 logs.go:123] Gathering logs for container status ...
	I0819 19:06:51.952385  431169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:06:51.991568  431169 logs.go:123] Gathering logs for kubelet ...
	I0819 19:06:51.991611  431169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 19:06:52.039358  431169 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0819 19:06:52.039440  431169 out.go:270] * 
	* 
	W0819 19:06:52.039498  431169 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 19:06:52.039514  431169 out.go:270] * 
	* 
	W0819 19:06:52.040302  431169 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 19:06:52.043425  431169 out.go:201] 
	W0819 19:06:52.045080  431169 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 19:06:52.045132  431169 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0819 19:06:52.045162  431169 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0819 19:06:52.046699  431169 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-104669 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-104669 -n old-k8s-version-104669
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-104669 -n old-k8s-version-104669: exit status 6 (233.555608ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 19:06:52.322526  437669 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-104669" does not appear in /home/jenkins/minikube-integration/19468-372744/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-104669" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (294.68s)
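
Following the suggestion printed in the stderr block above, a minimal sketch of a manual retry that passes the kubelet cgroup-driver override; the profile name, driver, runtime and Kubernetes version are copied from the failing start command, the ssh checks mirror the kubeadm troubleshooting hints, and whether this actually clears the kubelet health-check timeout on this runner is not verified here:

    # Sketch: retry the start with the cgroup-driver override minikube suggests (unverified)
    out/minikube-linux-amd64 start -p old-k8s-version-104669 \
        --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
        --extra-config=kubelet.cgroup-driver=systemd

    # Same checks kubeadm recommends, run inside the VM over ssh
    out/minikube-linux-amd64 ssh -p old-k8s-version-104669 -- sudo systemctl status kubelet
    out/minikube-linux-amd64 ssh -p old-k8s-version-104669 -- sudo journalctl -xeu kubelet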

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-278232 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-278232 --alsologtostderr -v=3: exit status 82 (2m0.52672512s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-278232"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 19:04:40.090247  436942 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:04:40.090369  436942 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:04:40.090377  436942 out.go:358] Setting ErrFile to fd 2...
	I0819 19:04:40.090380  436942 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:04:40.090839  436942 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-372744/.minikube/bin
	I0819 19:04:40.091220  436942 out.go:352] Setting JSON to false
	I0819 19:04:40.091328  436942 mustload.go:65] Loading cluster: no-preload-278232
	I0819 19:04:40.092038  436942 config.go:182] Loaded profile config "no-preload-278232": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:04:40.092115  436942 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232/config.json ...
	I0819 19:04:40.092309  436942 mustload.go:65] Loading cluster: no-preload-278232
	I0819 19:04:40.092414  436942 config.go:182] Loaded profile config "no-preload-278232": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:04:40.092439  436942 stop.go:39] StopHost: no-preload-278232
	I0819 19:04:40.092797  436942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:04:40.092848  436942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:04:40.108752  436942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43815
	I0819 19:04:40.109250  436942 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:04:40.109826  436942 main.go:141] libmachine: Using API Version  1
	I0819 19:04:40.109849  436942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:04:40.110236  436942 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:04:40.112719  436942 out.go:177] * Stopping node "no-preload-278232"  ...
	I0819 19:04:40.114424  436942 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0819 19:04:40.114455  436942 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:04:40.114722  436942 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0819 19:04:40.114750  436942 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:04:40.117827  436942 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:04:40.118241  436942 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:03:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:04:40.118273  436942 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:04:40.118609  436942 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:04:40.118789  436942 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:04:40.118949  436942 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:04:40.119187  436942 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa Username:docker}
	I0819 19:04:40.221543  436942 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0819 19:04:40.286175  436942 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0819 19:04:40.358700  436942 main.go:141] libmachine: Stopping "no-preload-278232"...
	I0819 19:04:40.358728  436942 main.go:141] libmachine: (no-preload-278232) Calling .GetState
	I0819 19:04:40.360621  436942 main.go:141] libmachine: (no-preload-278232) Calling .Stop
	I0819 19:04:40.364660  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 0/120
	I0819 19:04:41.366503  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 1/120
	I0819 19:04:42.368304  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 2/120
	I0819 19:04:43.370875  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 3/120
	I0819 19:04:44.372497  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 4/120
	I0819 19:04:45.374282  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 5/120
	I0819 19:04:46.376022  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 6/120
	I0819 19:04:47.377566  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 7/120
	I0819 19:04:48.379063  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 8/120
	I0819 19:04:49.380465  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 9/120
	I0819 19:04:50.382033  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 10/120
	I0819 19:04:51.383563  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 11/120
	I0819 19:04:52.384996  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 12/120
	I0819 19:04:53.386494  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 13/120
	I0819 19:04:54.388101  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 14/120
	I0819 19:04:55.389922  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 15/120
	I0819 19:04:56.391806  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 16/120
	I0819 19:04:57.393258  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 17/120
	I0819 19:04:58.394672  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 18/120
	I0819 19:04:59.396246  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 19/120
	I0819 19:05:00.398643  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 20/120
	I0819 19:05:01.400248  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 21/120
	I0819 19:05:02.401777  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 22/120
	I0819 19:05:03.403428  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 23/120
	I0819 19:05:04.404971  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 24/120
	I0819 19:05:05.407040  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 25/120
	I0819 19:05:06.408652  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 26/120
	I0819 19:05:07.410115  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 27/120
	I0819 19:05:08.411732  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 28/120
	I0819 19:05:09.413222  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 29/120
	I0819 19:05:10.415433  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 30/120
	I0819 19:05:11.416826  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 31/120
	I0819 19:05:12.418521  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 32/120
	I0819 19:05:13.419864  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 33/120
	I0819 19:05:14.421294  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 34/120
	I0819 19:05:15.423256  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 35/120
	I0819 19:05:16.424550  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 36/120
	I0819 19:05:17.426443  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 37/120
	I0819 19:05:18.427987  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 38/120
	I0819 19:05:19.430653  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 39/120
	I0819 19:05:20.432982  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 40/120
	I0819 19:05:21.434435  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 41/120
	I0819 19:05:22.436057  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 42/120
	I0819 19:05:23.438241  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 43/120
	I0819 19:05:24.439920  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 44/120
	I0819 19:05:25.442364  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 45/120
	I0819 19:05:26.443962  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 46/120
	I0819 19:05:27.445558  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 47/120
	I0819 19:05:28.446998  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 48/120
	I0819 19:05:29.448563  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 49/120
	I0819 19:05:30.450887  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 50/120
	I0819 19:05:31.452573  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 51/120
	I0819 19:05:32.454118  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 52/120
	I0819 19:05:33.455563  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 53/120
	I0819 19:05:34.457080  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 54/120
	I0819 19:05:35.459527  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 55/120
	I0819 19:05:36.461015  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 56/120
	I0819 19:05:37.462275  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 57/120
	I0819 19:05:38.463561  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 58/120
	I0819 19:05:39.465197  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 59/120
	I0819 19:05:40.467528  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 60/120
	I0819 19:05:41.469015  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 61/120
	I0819 19:05:42.470443  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 62/120
	I0819 19:05:43.471892  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 63/120
	I0819 19:05:44.473705  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 64/120
	I0819 19:05:45.476340  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 65/120
	I0819 19:05:46.477806  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 66/120
	I0819 19:05:47.479269  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 67/120
	I0819 19:05:48.480878  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 68/120
	I0819 19:05:49.482339  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 69/120
	I0819 19:05:50.484683  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 70/120
	I0819 19:05:51.486042  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 71/120
	I0819 19:05:52.487278  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 72/120
	I0819 19:05:53.488830  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 73/120
	I0819 19:05:54.490780  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 74/120
	I0819 19:05:55.492723  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 75/120
	I0819 19:05:56.494149  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 76/120
	I0819 19:05:57.495423  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 77/120
	I0819 19:05:58.496905  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 78/120
	I0819 19:05:59.498336  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 79/120
	I0819 19:06:00.500945  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 80/120
	I0819 19:06:01.502441  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 81/120
	I0819 19:06:02.503996  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 82/120
	I0819 19:06:03.505243  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 83/120
	I0819 19:06:04.506740  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 84/120
	I0819 19:06:05.508844  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 85/120
	I0819 19:06:06.510255  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 86/120
	I0819 19:06:07.511489  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 87/120
	I0819 19:06:08.513322  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 88/120
	I0819 19:06:09.514689  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 89/120
	I0819 19:06:10.516856  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 90/120
	I0819 19:06:11.518251  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 91/120
	I0819 19:06:12.519570  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 92/120
	I0819 19:06:13.520908  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 93/120
	I0819 19:06:14.522288  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 94/120
	I0819 19:06:15.524500  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 95/120
	I0819 19:06:16.526022  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 96/120
	I0819 19:06:17.527222  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 97/120
	I0819 19:06:18.528646  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 98/120
	I0819 19:06:19.530111  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 99/120
	I0819 19:06:20.531691  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 100/120
	I0819 19:06:21.532921  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 101/120
	I0819 19:06:22.534405  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 102/120
	I0819 19:06:23.535859  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 103/120
	I0819 19:06:24.537315  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 104/120
	I0819 19:06:25.539621  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 105/120
	I0819 19:06:26.541195  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 106/120
	I0819 19:06:27.542780  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 107/120
	I0819 19:06:28.544389  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 108/120
	I0819 19:06:29.545626  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 109/120
	I0819 19:06:30.548037  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 110/120
	I0819 19:06:31.549421  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 111/120
	I0819 19:06:32.550887  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 112/120
	I0819 19:06:33.552250  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 113/120
	I0819 19:06:34.554343  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 114/120
	I0819 19:06:35.556587  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 115/120
	I0819 19:06:36.557899  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 116/120
	I0819 19:06:37.559329  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 117/120
	I0819 19:06:38.560734  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 118/120
	I0819 19:06:39.562067  436942 main.go:141] libmachine: (no-preload-278232) Waiting for machine to stop 119/120
	I0819 19:06:40.563185  436942 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0819 19:06:40.563275  436942 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0819 19:06:40.564996  436942 out.go:201] 
	W0819 19:06:40.566262  436942 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0819 19:06:40.566281  436942 out.go:270] * 
	* 
	W0819 19:06:40.570069  436942 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 19:06:40.571427  436942 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-278232 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-278232 -n no-preload-278232
E0819 19:06:48.784802  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/calico-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:06:49.660999  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/custom-flannel-571803/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-278232 -n no-preload-278232: exit status 3 (18.631737871s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 19:06:59.204187  437605 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.106:22: connect: no route to host
	E0819 19:06:59.204207  437605 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.106:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-278232" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.16s)
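
The stop timed out after 120 one-second polls with the libvirt domain still reported as "Running". As a sketch of a manual follow-up on the host (assuming virsh is available alongside the kvm2 driver; the domain name is taken from the libmachine lines above, and none of this is part of the test itself):

    virsh list --all                  # is domain no-preload-278232 still listed as running?
    virsh dominfo no-preload-278232   # current state and resources of the guest
    virsh shutdown no-preload-278232  # request a graceful ACPI shutdown
    virsh destroy no-preload-278232   # hard power-off, last resort

    # Log bundle the minikube error box asks for
    out/minikube-linux-amd64 logs --file=logs.txt -p no-preload-278232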

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-982795 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-982795 --alsologtostderr -v=3: exit status 82 (2m0.522104779s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-982795"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 19:04:55.769174  437100 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:04:55.769505  437100 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:04:55.769518  437100 out.go:358] Setting ErrFile to fd 2...
	I0819 19:04:55.769525  437100 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:04:55.769817  437100 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-372744/.minikube/bin
	I0819 19:04:55.770054  437100 out.go:352] Setting JSON to false
	I0819 19:04:55.770130  437100 mustload.go:65] Loading cluster: default-k8s-diff-port-982795
	I0819 19:04:55.770464  437100 config.go:182] Loaded profile config "default-k8s-diff-port-982795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:04:55.770537  437100 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795/config.json ...
	I0819 19:04:55.770703  437100 mustload.go:65] Loading cluster: default-k8s-diff-port-982795
	I0819 19:04:55.770822  437100 config.go:182] Loaded profile config "default-k8s-diff-port-982795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:04:55.770871  437100 stop.go:39] StopHost: default-k8s-diff-port-982795
	I0819 19:04:55.771269  437100 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:04:55.771322  437100 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:04:55.787065  437100 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36727
	I0819 19:04:55.787504  437100 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:04:55.788115  437100 main.go:141] libmachine: Using API Version  1
	I0819 19:04:55.788142  437100 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:04:55.788541  437100 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:04:55.790873  437100 out.go:177] * Stopping node "default-k8s-diff-port-982795"  ...
	I0819 19:04:55.792138  437100 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0819 19:04:55.792167  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:04:55.792431  437100 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0819 19:04:55.792463  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:04:55.795593  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:04:55.796071  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:04:06 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:04:55.796100  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:04:55.796250  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:04:55.796444  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:04:55.796590  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:04:55.796785  437100 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa Username:docker}
	I0819 19:04:55.914618  437100 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0819 19:04:55.971749  437100 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0819 19:04:56.032405  437100 main.go:141] libmachine: Stopping "default-k8s-diff-port-982795"...
	I0819 19:04:56.032464  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetState
	I0819 19:04:56.034159  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .Stop
	I0819 19:04:56.038491  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 0/120
	I0819 19:04:57.039890  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 1/120
	I0819 19:04:58.042692  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 2/120
	I0819 19:04:59.044075  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 3/120
	I0819 19:05:00.045616  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 4/120
	I0819 19:05:01.047861  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 5/120
	I0819 19:05:02.049359  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 6/120
	I0819 19:05:03.050867  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 7/120
	I0819 19:05:04.052311  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 8/120
	I0819 19:05:05.054431  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 9/120
	I0819 19:05:06.055939  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 10/120
	I0819 19:05:07.057337  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 11/120
	I0819 19:05:08.058803  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 12/120
	I0819 19:05:09.060211  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 13/120
	I0819 19:05:10.062163  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 14/120
	I0819 19:05:11.064029  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 15/120
	I0819 19:05:12.065425  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 16/120
	I0819 19:05:13.067528  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 17/120
	I0819 19:05:14.069355  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 18/120
	I0819 19:05:15.070964  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 19/120
	I0819 19:05:16.073188  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 20/120
	I0819 19:05:17.074727  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 21/120
	I0819 19:05:18.076111  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 22/120
	I0819 19:05:19.077602  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 23/120
	I0819 19:05:20.079082  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 24/120
	I0819 19:05:21.081047  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 25/120
	I0819 19:05:22.082567  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 26/120
	I0819 19:05:23.083929  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 27/120
	I0819 19:05:24.085701  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 28/120
	I0819 19:05:25.087326  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 29/120
	I0819 19:05:26.090215  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 30/120
	I0819 19:05:27.091789  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 31/120
	I0819 19:05:28.093251  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 32/120
	I0819 19:05:29.094596  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 33/120
	I0819 19:05:30.096027  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 34/120
	I0819 19:05:31.098113  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 35/120
	I0819 19:05:32.099695  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 36/120
	I0819 19:05:33.101007  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 37/120
	I0819 19:05:34.102486  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 38/120
	I0819 19:05:35.103876  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 39/120
	I0819 19:05:36.106097  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 40/120
	I0819 19:05:37.107436  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 41/120
	I0819 19:05:38.108971  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 42/120
	I0819 19:05:39.110327  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 43/120
	I0819 19:05:40.111871  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 44/120
	I0819 19:05:41.113878  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 45/120
	I0819 19:05:42.115285  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 46/120
	I0819 19:05:43.116732  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 47/120
	I0819 19:05:44.118169  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 48/120
	I0819 19:05:45.119799  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 49/120
	I0819 19:05:46.122167  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 50/120
	I0819 19:05:47.123642  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 51/120
	I0819 19:05:48.125068  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 52/120
	I0819 19:05:49.126229  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 53/120
	I0819 19:05:50.127512  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 54/120
	I0819 19:05:51.129627  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 55/120
	I0819 19:05:52.131032  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 56/120
	I0819 19:05:53.132248  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 57/120
	I0819 19:05:54.133724  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 58/120
	I0819 19:05:55.135174  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 59/120
	I0819 19:05:56.137557  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 60/120
	I0819 19:05:57.138838  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 61/120
	I0819 19:05:58.140458  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 62/120
	I0819 19:05:59.142153  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 63/120
	I0819 19:06:00.143653  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 64/120
	I0819 19:06:01.145791  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 65/120
	I0819 19:06:02.147142  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 66/120
	I0819 19:06:03.148619  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 67/120
	I0819 19:06:04.150046  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 68/120
	I0819 19:06:05.152656  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 69/120
	I0819 19:06:06.155030  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 70/120
	I0819 19:06:07.156518  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 71/120
	I0819 19:06:08.157934  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 72/120
	I0819 19:06:09.159728  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 73/120
	I0819 19:06:10.161315  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 74/120
	I0819 19:06:11.163694  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 75/120
	I0819 19:06:12.165246  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 76/120
	I0819 19:06:13.166500  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 77/120
	I0819 19:06:14.167978  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 78/120
	I0819 19:06:15.169233  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 79/120
	I0819 19:06:16.171524  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 80/120
	I0819 19:06:17.173010  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 81/120
	I0819 19:06:18.174458  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 82/120
	I0819 19:06:19.175781  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 83/120
	I0819 19:06:20.177169  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 84/120
	I0819 19:06:21.179343  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 85/120
	I0819 19:06:22.180635  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 86/120
	I0819 19:06:23.182173  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 87/120
	I0819 19:06:24.183520  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 88/120
	I0819 19:06:25.185044  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 89/120
	I0819 19:06:26.186579  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 90/120
	I0819 19:06:27.188144  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 91/120
	I0819 19:06:28.190060  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 92/120
	I0819 19:06:29.191376  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 93/120
	I0819 19:06:30.192693  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 94/120
	I0819 19:06:31.194585  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 95/120
	I0819 19:06:32.196036  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 96/120
	I0819 19:06:33.197375  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 97/120
	I0819 19:06:34.198796  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 98/120
	I0819 19:06:35.200263  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 99/120
	I0819 19:06:36.201447  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 100/120
	I0819 19:06:37.202841  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 101/120
	I0819 19:06:38.204079  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 102/120
	I0819 19:06:39.205469  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 103/120
	I0819 19:06:40.207107  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 104/120
	I0819 19:06:41.209272  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 105/120
	I0819 19:06:42.210829  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 106/120
	I0819 19:06:43.212203  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 107/120
	I0819 19:06:44.213750  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 108/120
	I0819 19:06:45.215165  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 109/120
	I0819 19:06:46.216709  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 110/120
	I0819 19:06:47.218225  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 111/120
	I0819 19:06:48.219705  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 112/120
	I0819 19:06:49.221206  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 113/120
	I0819 19:06:50.222781  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 114/120
	I0819 19:06:51.224874  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 115/120
	I0819 19:06:52.226401  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 116/120
	I0819 19:06:53.228073  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 117/120
	I0819 19:06:54.230141  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 118/120
	I0819 19:06:55.231603  437100 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for machine to stop 119/120
	I0819 19:06:56.232364  437100 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0819 19:06:56.232429  437100 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0819 19:06:56.234435  437100 out.go:201] 
	W0819 19:06:56.235652  437100 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0819 19:06:56.235690  437100 out.go:270] * 
	* 
	W0819 19:06:56.238987  437100 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 19:06:56.240236  437100 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-982795 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-982795 -n default-k8s-diff-port-982795
E0819 19:06:57.573772  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/flannel-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:06:57.580210  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/flannel-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:06:57.591607  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/flannel-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:06:57.613113  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/flannel-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:06:57.654555  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/flannel-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:06:57.736050  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/flannel-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:06:57.888786  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/auto-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:06:57.898224  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/flannel-571803/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-982795 -n default-k8s-diff-port-982795: exit status 3 (18.577967438s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 19:07:14.820034  437815 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.48:22: connect: no route to host
	E0819 19:07:14.820069  437815 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.48:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-982795" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.10s)
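
Same GUEST_STOP_TIMEOUT pattern as no-preload above; the post-mortem status then fails with "no route to host" on 192.168.61.48:22. A quick reachability sketch from the host (the IP is copied from the status error, ping and nc are assumed to be installed, and this says nothing about why the guest ignored the stop request):

    ping -c 3 192.168.61.48         # is the guest answering at all?
    nc -vz -w 5 192.168.61.48 22    # can anything still reach sshd on port 22?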

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-024748 --alsologtostderr -v=3
E0819 19:05:05.159826  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kindnet-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:05:24.365400  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/functional-499773/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:05:25.641898  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kindnet-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:05:35.966782  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/auto-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:06:06.603728  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kindnet-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:06:07.808275  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/calico-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:06:07.814690  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/calico-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:06:07.826052  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/calico-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:06:07.847464  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/calico-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:06:07.888921  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/calico-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:06:07.970418  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/calico-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:06:08.132586  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/calico-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:06:08.454723  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/calico-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:06:09.096327  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/calico-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:06:10.377789  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/calico-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:06:12.939584  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/calico-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:06:18.061516  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/calico-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:06:28.303449  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/calico-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:06:29.165310  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/custom-flannel-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:06:29.171691  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/custom-flannel-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:06:29.183025  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/custom-flannel-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:06:29.204639  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/custom-flannel-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:06:29.246086  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/custom-flannel-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:06:29.327572  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/custom-flannel-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:06:29.489151  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/custom-flannel-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:06:29.811191  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/custom-flannel-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:06:30.453285  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/custom-flannel-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:06:31.735236  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/custom-flannel-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:06:34.297325  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/custom-flannel-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:06:39.419019  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/custom-flannel-571803/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-024748 --alsologtostderr -v=3: exit status 82 (2m0.486473749s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-024748"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 19:04:57.628231  437168 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:04:57.628588  437168 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:04:57.628603  437168 out.go:358] Setting ErrFile to fd 2...
	I0819 19:04:57.628612  437168 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:04:57.628791  437168 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-372744/.minikube/bin
	I0819 19:04:57.629051  437168 out.go:352] Setting JSON to false
	I0819 19:04:57.629131  437168 mustload.go:65] Loading cluster: embed-certs-024748
	I0819 19:04:57.629438  437168 config.go:182] Loaded profile config "embed-certs-024748": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:04:57.629511  437168 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748/config.json ...
	I0819 19:04:57.629684  437168 mustload.go:65] Loading cluster: embed-certs-024748
	I0819 19:04:57.629782  437168 config.go:182] Loaded profile config "embed-certs-024748": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:04:57.629805  437168 stop.go:39] StopHost: embed-certs-024748
	I0819 19:04:57.630178  437168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:04:57.630218  437168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:04:57.645196  437168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38339
	I0819 19:04:57.645688  437168 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:04:57.646302  437168 main.go:141] libmachine: Using API Version  1
	I0819 19:04:57.646334  437168 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:04:57.646687  437168 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:04:57.649087  437168 out.go:177] * Stopping node "embed-certs-024748"  ...
	I0819 19:04:57.650142  437168 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0819 19:04:57.650178  437168 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:04:57.650419  437168 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0819 19:04:57.650440  437168 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:04:57.653180  437168 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:04:57.653607  437168 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:04:57.653633  437168 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:04:57.653737  437168 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:04:57.653906  437168 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:04:57.654063  437168 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:04:57.654198  437168 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa Username:docker}
	I0819 19:04:57.743620  437168 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0819 19:04:57.799152  437168 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0819 19:04:57.862088  437168 main.go:141] libmachine: Stopping "embed-certs-024748"...
	I0819 19:04:57.862119  437168 main.go:141] libmachine: (embed-certs-024748) Calling .GetState
	I0819 19:04:57.863659  437168 main.go:141] libmachine: (embed-certs-024748) Calling .Stop
	I0819 19:04:57.867602  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 0/120
	I0819 19:04:58.869375  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 1/120
	I0819 19:04:59.870991  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 2/120
	I0819 19:05:00.872311  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 3/120
	I0819 19:05:01.874193  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 4/120
	I0819 19:05:02.876240  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 5/120
	I0819 19:05:03.878115  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 6/120
	I0819 19:05:04.879545  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 7/120
	I0819 19:05:05.880998  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 8/120
	I0819 19:05:06.882608  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 9/120
	I0819 19:05:07.884110  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 10/120
	I0819 19:05:08.885746  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 11/120
	I0819 19:05:09.887200  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 12/120
	I0819 19:05:10.888827  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 13/120
	I0819 19:05:11.890140  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 14/120
	I0819 19:05:12.892208  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 15/120
	I0819 19:05:13.893578  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 16/120
	I0819 19:05:14.894999  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 17/120
	I0819 19:05:15.896483  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 18/120
	I0819 19:05:16.898013  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 19/120
	I0819 19:05:17.900347  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 20/120
	I0819 19:05:18.902055  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 21/120
	I0819 19:05:19.903490  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 22/120
	I0819 19:05:20.904879  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 23/120
	I0819 19:05:21.906142  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 24/120
	I0819 19:05:22.907697  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 25/120
	I0819 19:05:23.909225  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 26/120
	I0819 19:05:24.910768  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 27/120
	I0819 19:05:25.912265  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 28/120
	I0819 19:05:26.913670  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 29/120
	I0819 19:05:27.915952  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 30/120
	I0819 19:05:28.917658  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 31/120
	I0819 19:05:29.919148  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 32/120
	I0819 19:05:30.920593  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 33/120
	I0819 19:05:31.922082  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 34/120
	I0819 19:05:32.924105  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 35/120
	I0819 19:05:33.925500  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 36/120
	I0819 19:05:34.926961  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 37/120
	I0819 19:05:35.928408  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 38/120
	I0819 19:05:36.929939  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 39/120
	I0819 19:05:37.932440  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 40/120
	I0819 19:05:38.934099  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 41/120
	I0819 19:05:39.935551  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 42/120
	I0819 19:05:40.937201  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 43/120
	I0819 19:05:41.938605  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 44/120
	I0819 19:05:42.940957  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 45/120
	I0819 19:05:43.942383  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 46/120
	I0819 19:05:44.944240  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 47/120
	I0819 19:05:45.945663  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 48/120
	I0819 19:05:46.947026  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 49/120
	I0819 19:05:47.949315  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 50/120
	I0819 19:05:48.950905  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 51/120
	I0819 19:05:49.952394  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 52/120
	I0819 19:05:50.954327  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 53/120
	I0819 19:05:51.955576  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 54/120
	I0819 19:05:52.957792  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 55/120
	I0819 19:05:53.959090  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 56/120
	I0819 19:05:54.960561  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 57/120
	I0819 19:05:55.961878  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 58/120
	I0819 19:05:56.963423  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 59/120
	I0819 19:05:57.965068  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 60/120
	I0819 19:05:58.966467  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 61/120
	I0819 19:05:59.968074  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 62/120
	I0819 19:06:00.969534  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 63/120
	I0819 19:06:01.971487  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 64/120
	I0819 19:06:02.973767  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 65/120
	I0819 19:06:03.975318  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 66/120
	I0819 19:06:04.976879  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 67/120
	I0819 19:06:05.978274  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 68/120
	I0819 19:06:06.979860  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 69/120
	I0819 19:06:07.982133  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 70/120
	I0819 19:06:08.983848  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 71/120
	I0819 19:06:09.985340  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 72/120
	I0819 19:06:10.987128  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 73/120
	I0819 19:06:11.988643  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 74/120
	I0819 19:06:12.990759  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 75/120
	I0819 19:06:13.992480  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 76/120
	I0819 19:06:14.993799  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 77/120
	I0819 19:06:15.995643  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 78/120
	I0819 19:06:16.996863  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 79/120
	I0819 19:06:17.998146  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 80/120
	I0819 19:06:18.999532  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 81/120
	I0819 19:06:20.000980  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 82/120
	I0819 19:06:21.002503  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 83/120
	I0819 19:06:22.003858  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 84/120
	I0819 19:06:23.005986  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 85/120
	I0819 19:06:24.007255  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 86/120
	I0819 19:06:25.008754  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 87/120
	I0819 19:06:26.010004  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 88/120
	I0819 19:06:27.011820  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 89/120
	I0819 19:06:28.014218  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 90/120
	I0819 19:06:29.015721  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 91/120
	I0819 19:06:30.017155  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 92/120
	I0819 19:06:31.018752  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 93/120
	I0819 19:06:32.020229  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 94/120
	I0819 19:06:33.022334  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 95/120
	I0819 19:06:34.023708  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 96/120
	I0819 19:06:35.025150  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 97/120
	I0819 19:06:36.026666  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 98/120
	I0819 19:06:37.028176  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 99/120
	I0819 19:06:38.030574  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 100/120
	I0819 19:06:39.031964  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 101/120
	I0819 19:06:40.033269  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 102/120
	I0819 19:06:41.034806  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 103/120
	I0819 19:06:42.036423  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 104/120
	I0819 19:06:43.038747  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 105/120
	I0819 19:06:44.040206  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 106/120
	I0819 19:06:45.041555  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 107/120
	I0819 19:06:46.043454  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 108/120
	I0819 19:06:47.044825  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 109/120
	I0819 19:06:48.047280  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 110/120
	I0819 19:06:49.048699  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 111/120
	I0819 19:06:50.050010  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 112/120
	I0819 19:06:51.051479  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 113/120
	I0819 19:06:52.053003  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 114/120
	I0819 19:06:53.054906  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 115/120
	I0819 19:06:54.056437  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 116/120
	I0819 19:06:55.057874  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 117/120
	I0819 19:06:56.059407  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 118/120
	I0819 19:06:57.060807  437168 main.go:141] libmachine: (embed-certs-024748) Waiting for machine to stop 119/120
	I0819 19:06:58.061257  437168 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0819 19:06:58.061320  437168 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0819 19:06:58.063227  437168 out.go:201] 
	W0819 19:06:58.064678  437168 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0819 19:06:58.064701  437168 out.go:270] * 
	* 
	W0819 19:06:58.068158  437168 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 19:06:58.069465  437168 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-024748 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-024748 -n embed-certs-024748
E0819 19:06:58.220225  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/flannel-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:06:58.862539  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/flannel-571803/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-024748 -n embed-certs-024748: exit status 3 (18.540338679s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 19:07:16.611990  437845 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.96:22: connect: no route to host
	E0819 19:07:16.612018  437845 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.96:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-024748" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.03s)
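The stop attempt above asked the kvm2 driver to stop the domain and then polled 120 times (about two minutes) without the guest ever leaving the "Running" state, which is what GUEST_STOP_TIMEOUT means. When this reproduces on the Jenkins host, one hedged way to inspect the domain directly with libvirt tooling (assuming, as the log suggests, that the kvm2 driver names the libvirt domain after the profile):

    # inspect the domain state that minikube was polling
    sudo virsh list --all
    sudo virsh domstate embed-certs-024748
    # force the guest off if it keeps ignoring the shutdown request (destructive; sketch only)
    sudo virsh destroy embed-certs-024748

This does not explain why the guest ignored the shutdown request, but it distinguishes a hung guest OS from a libvirt-level problem.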

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.49s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-104669 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-104669 create -f testdata/busybox.yaml: exit status 1 (44.888784ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-104669" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-104669 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-104669 -n old-k8s-version-104669
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-104669 -n old-k8s-version-104669: exit status 6 (221.479922ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 19:06:52.592192  437709 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-104669" does not appear in /home/jenkins/minikube-integration/19468-372744/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-104669" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-104669 -n old-k8s-version-104669
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-104669 -n old-k8s-version-104669: exit status 6 (226.822915ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 19:06:52.818067  437739 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-104669" does not appear in /home/jenkins/minikube-integration/19468-372744/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-104669" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.49s)
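Both status probes report the same thing: the VM is Running but the "old-k8s-version-104669" context is missing from the kubeconfig, so every kubectl --context call fails before it ever reaches the cluster. The warning in the output names the fix itself; a sketch of re-syncing the kubeconfig and retrying the deploy (this assumes the apiserver inside the VM is actually serving, which the EnableAddonWhileActive failure below indicates it was not):

    # rewrite the kubeconfig entry for this profile, then confirm the context exists
    out/minikube-linux-amd64 update-context -p old-k8s-version-104669
    kubectl config get-contexts
    kubectl --context old-k8s-version-104669 create -f testdata/busybox.yaml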

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (96.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-104669 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-104669 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m35.793593094s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-104669 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-104669 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-104669 describe deploy/metrics-server -n kube-system: exit status 1 (47.354622ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-104669" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-104669 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-104669 -n old-k8s-version-104669
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-104669 -n old-k8s-version-104669: exit status 6 (260.656388ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 19:08:28.919899  438598 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-104669" does not appear in /home/jenkins/minikube-integration/19468-372744/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-104669" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (96.10s)
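The enable step applies the metrics-server manifests with the in-guest kubectl against localhost:8443 and gets "connection refused", i.e. the apiserver was not listening when the addon was enabled. A hedged way to confirm that from the host, assuming the guest is SSH-reachable and runs the usual crio/kubelet layout (the --name filter value is an assumption about the container name):

    # look for an apiserver container and recent kubelet activity inside the guest
    out/minikube-linux-amd64 ssh -p old-k8s-version-104669 -- sudo crictl ps -a --name kube-apiserver
    out/minikube-linux-amd64 ssh -p old-k8s-version-104669 -- sudo journalctl -u kubelet --no-pager | tail -n 50
    # or collect the full log bundle the error box asks for
    out/minikube-linux-amd64 logs -p old-k8s-version-104669 --file=logs.txt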

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-278232 -n no-preload-278232
E0819 19:07:00.144174  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/flannel-571803/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-278232 -n no-preload-278232: exit status 3 (3.167366265s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 19:07:02.372084  437892 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.106:22: connect: no route to host
	E0819 19:07:02.372107  437892 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.106:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-278232 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0819 19:07:02.705797  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/flannel-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:07:07.827807  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/flannel-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:07:08.513918  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/bridge-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:07:08.520249  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/bridge-571803/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-278232 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153820571s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.106:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-278232 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-278232 -n no-preload-278232
E0819 19:07:08.532472  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/bridge-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:07:08.553906  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/bridge-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:07:08.595406  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/bridge-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:07:08.676890  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/bridge-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:07:08.838496  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/bridge-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:07:09.160413  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/bridge-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:07:09.802660  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/bridge-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:07:10.114555  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:07:10.143087  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/custom-flannel-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:07:11.084411  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/bridge-571803/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-278232 -n no-preload-278232: exit status 3 (3.061988795s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 19:07:11.588191  437971 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.106:22: connect: no route to host
	E0819 19:07:11.588228  437971 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.106:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-278232" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
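The status probe and the dashboard enable fail the same way because both open an SSH session to the node first (the enable path lists paused containers via crictl over that session), and SSH to 192.168.39.106 returns "no route to host" after the earlier stop timeout. A quick reachability check before any retry, as a sketch using standard host tools rather than minikube itself (IP taken from the log above):

    # is the guest answering at all, and is sshd reachable on port 22?
    ping -c 3 192.168.39.106
    nc -vz 192.168.39.106 22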

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-982795 -n default-k8s-diff-port-982795
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-982795 -n default-k8s-diff-port-982795: exit status 3 (3.167700889s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 19:07:17.988071  438058 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.48:22: connect: no route to host
	E0819 19:07:17.988093  438058 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.48:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-982795 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0819 19:07:18.069635  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/flannel-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:07:18.768160  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/bridge-571803/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-982795 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154970294s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.48:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-982795 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-982795 -n default-k8s-diff-port-982795
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-982795 -n default-k8s-diff-port-982795: exit status 3 (3.060824388s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 19:07:27.204077  438186 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.48:22: connect: no route to host
	E0819 19:07:27.204103  438186 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.48:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-982795" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-024748 -n embed-certs-024748
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-024748 -n embed-certs-024748: exit status 3 (3.168186492s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 19:07:19.780036  438088 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.96:22: connect: no route to host
	E0819 19:07:19.780066  438088 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.96:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-024748 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-024748 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153090848s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.96:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-024748 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-024748 -n embed-certs-024748
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-024748 -n embed-certs-024748: exit status 3 (3.062510222s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 19:07:28.996099  438215 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.96:22: connect: no route to host
	E0819 19:07:28.996123  438215 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.96:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-024748" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)
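Note: the repeated "dial tcp 192.168.72.96:22: connect: no route to host" errors above mean the embed-certs-024748 VM never became reachable over SSH after the stop, so every status and addon call fails at NewSession. A minimal diagnostic sketch for a libvirt/KVM host, assuming the libvirt domain carries the profile name (as it does for the other profiles later in this log):

    virsh domstate embed-certs-024748      # is the domain actually running?
    virsh domifaddr embed-certs-024748     # which address did the guest lease?
    nc -vz -w 5 192.168.72.96 22           # can the host reach the guest's SSH port?

If the domain reports running but the port probe still fails, the guest has most likely lost its DHCP lease or the route to the libvirt bridge, which would explain the exit status 3 and exit status 11 results seen in this block.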

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (749.65s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-104669 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0819 19:08:31.909485  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/enable-default-cni-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:08:42.151874  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/enable-default-cni-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:08:51.668115  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/calico-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:09:02.633750  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/enable-default-cni-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:09:13.026982  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/custom-flannel-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:09:14.028477  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/auto-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:09:41.435754  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/flannel-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:09:41.730712  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/auto-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:09:43.595241  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/enable-default-cni-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:09:44.664421  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kindnet-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:09:52.375789  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/bridge-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:10:12.367244  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kindnet-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:10:24.365120  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/functional-499773/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:11:05.516638  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/enable-default-cni-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:11:07.807888  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/calico-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:11:29.165224  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/custom-flannel-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:11:35.509708  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/calico-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:11:47.438529  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/functional-499773/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:11:56.868859  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/custom-flannel-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:11:57.573063  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/flannel-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:12:08.514310  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/bridge-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:12:10.114196  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:12:25.277294  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/flannel-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:12:36.218094  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/bridge-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:13:21.653742  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/enable-default-cni-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:13:49.358533  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/enable-default-cni-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:14:14.028189  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/auto-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:14:44.664532  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kindnet-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:15:24.365098  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/functional-499773/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:16:07.808012  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/calico-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:16:29.165303  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/custom-flannel-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:16:57.573866  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/flannel-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:17:08.514055  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/bridge-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:17:10.114473  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.crt: no such file or directory" logger="UnhandledError"
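Note: the E0819 cert_rotation.go:171 lines above appear to be noise from the test process's client-certificate reload loop: the kubeconfig still references client certificates of profiles (enable-default-cni-571803, calico-571803, and others) that earlier tests already deleted, so each reload fails with "no such file or directory". They are unrelated to the old-k8s-version-104669 start below. A quick check under the same paths the log uses:

    ls /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/enable-default-cni-571803/client.crt
    # expected: "No such file or directory", matching the UnhandledError lines above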
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-104669 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m25.936246203s)

                                                
                                                
-- stdout --
	* [old-k8s-version-104669] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19468
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19468-372744/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-372744/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-104669" primary control-plane node in "old-k8s-version-104669" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-104669" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 19:08:30.532545  438716 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:08:30.532649  438716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:08:30.532657  438716 out.go:358] Setting ErrFile to fd 2...
	I0819 19:08:30.532661  438716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:08:30.532811  438716 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-372744/.minikube/bin
	I0819 19:08:30.533379  438716 out.go:352] Setting JSON to false
	I0819 19:08:30.534373  438716 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10253,"bootTime":1724084257,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 19:08:30.534451  438716 start.go:139] virtualization: kvm guest
	I0819 19:08:30.536658  438716 out.go:177] * [old-k8s-version-104669] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 19:08:30.537921  438716 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 19:08:30.537959  438716 notify.go:220] Checking for updates...
	I0819 19:08:30.540501  438716 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 19:08:30.541864  438716 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 19:08:30.543170  438716 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 19:08:30.544395  438716 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 19:08:30.545614  438716 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 19:08:30.547072  438716 config.go:182] Loaded profile config "old-k8s-version-104669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0819 19:08:30.547468  438716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:08:30.547570  438716 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:08:30.563059  438716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34139
	I0819 19:08:30.563506  438716 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:08:30.564068  438716 main.go:141] libmachine: Using API Version  1
	I0819 19:08:30.564091  438716 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:08:30.564474  438716 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:08:30.564719  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:08:30.566599  438716 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0819 19:08:30.568124  438716 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 19:08:30.568503  438716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:08:30.568541  438716 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:08:30.583805  438716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35313
	I0819 19:08:30.584314  438716 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:08:30.584805  438716 main.go:141] libmachine: Using API Version  1
	I0819 19:08:30.584827  438716 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:08:30.585131  438716 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:08:30.585320  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:08:30.621020  438716 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 19:08:30.622137  438716 start.go:297] selected driver: kvm2
	I0819 19:08:30.622158  438716 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-104669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-104669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:08:30.622252  438716 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 19:08:30.622998  438716 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 19:08:30.623082  438716 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19468-372744/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 19:08:30.638616  438716 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 19:08:30.638998  438716 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 19:08:30.639047  438716 cni.go:84] Creating CNI manager for ""
	I0819 19:08:30.639059  438716 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:08:30.639097  438716 start.go:340] cluster config:
	{Name:old-k8s-version-104669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-104669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:08:30.639243  438716 iso.go:125] acquiring lock: {Name:mk4c0ac1c3202b1a296739df622960e7a0bd8566 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 19:08:30.641823  438716 out.go:177] * Starting "old-k8s-version-104669" primary control-plane node in "old-k8s-version-104669" cluster
	I0819 19:08:30.643167  438716 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 19:08:30.643197  438716 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0819 19:08:30.643205  438716 cache.go:56] Caching tarball of preloaded images
	I0819 19:08:30.643300  438716 preload.go:172] Found /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 19:08:30.643311  438716 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0819 19:08:30.643409  438716 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/config.json ...
	I0819 19:08:30.643583  438716 start.go:360] acquireMachinesLock for old-k8s-version-104669: {Name:mk24ba67a747357e9ce40f1e460d2bb0bc59cc75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 19:12:29.132976  438716 start.go:364] duration metric: took 3m58.489348567s to acquireMachinesLock for "old-k8s-version-104669"
	I0819 19:12:29.133047  438716 start.go:96] Skipping create...Using existing machine configuration
	I0819 19:12:29.133055  438716 fix.go:54] fixHost starting: 
	I0819 19:12:29.133485  438716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:29.133524  438716 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:29.151330  438716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39213
	I0819 19:12:29.151778  438716 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:29.152271  438716 main.go:141] libmachine: Using API Version  1
	I0819 19:12:29.152301  438716 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:29.152682  438716 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:29.152883  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:12:29.153065  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetState
	I0819 19:12:29.154399  438716 fix.go:112] recreateIfNeeded on old-k8s-version-104669: state=Stopped err=<nil>
	I0819 19:12:29.154444  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	W0819 19:12:29.154684  438716 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 19:12:29.156349  438716 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-104669" ...
	I0819 19:12:29.157631  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .Start
	I0819 19:12:29.157825  438716 main.go:141] libmachine: (old-k8s-version-104669) Ensuring networks are active...
	I0819 19:12:29.158635  438716 main.go:141] libmachine: (old-k8s-version-104669) Ensuring network default is active
	I0819 19:12:29.159041  438716 main.go:141] libmachine: (old-k8s-version-104669) Ensuring network mk-old-k8s-version-104669 is active
	I0819 19:12:29.159509  438716 main.go:141] libmachine: (old-k8s-version-104669) Getting domain xml...
	I0819 19:12:29.160383  438716 main.go:141] libmachine: (old-k8s-version-104669) Creating domain...
	I0819 19:12:30.452488  438716 main.go:141] libmachine: (old-k8s-version-104669) Waiting to get IP...
	I0819 19:12:30.453743  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:30.454237  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:30.454323  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:30.454193  439728 retry.go:31] will retry after 197.440033ms: waiting for machine to come up
	I0819 19:12:30.653917  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:30.654521  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:30.654566  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:30.654436  439728 retry.go:31] will retry after 317.038756ms: waiting for machine to come up
	I0819 19:12:30.973003  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:30.973530  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:30.973560  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:30.973487  439728 retry.go:31] will retry after 486.945032ms: waiting for machine to come up
	I0819 19:12:31.461937  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:31.462438  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:31.462470  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:31.462389  439728 retry.go:31] will retry after 441.288745ms: waiting for machine to come up
	I0819 19:12:31.904947  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:31.905564  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:31.905617  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:31.905472  439728 retry.go:31] will retry after 752.583403ms: waiting for machine to come up
	I0819 19:12:32.659642  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:32.660175  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:32.660207  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:32.660128  439728 retry.go:31] will retry after 932.705928ms: waiting for machine to come up
	I0819 19:12:33.594983  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:33.595529  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:33.595556  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:33.595466  439728 retry.go:31] will retry after 936.558157ms: waiting for machine to come up
	I0819 19:12:34.533158  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:34.533717  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:34.533743  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:34.533656  439728 retry.go:31] will retry after 1.435945188s: waiting for machine to come up
	I0819 19:12:35.971138  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:35.971576  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:35.971607  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:35.971514  439728 retry.go:31] will retry after 1.521077744s: waiting for machine to come up
	I0819 19:12:37.493931  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:37.494389  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:37.494415  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:37.494361  439728 retry.go:31] will retry after 1.632508579s: waiting for machine to come up
	I0819 19:12:39.128939  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:39.129429  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:39.129456  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:39.129392  439728 retry.go:31] will retry after 2.634061376s: waiting for machine to come up
	I0819 19:12:41.765706  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:41.766129  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:41.766182  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:41.766093  439728 retry.go:31] will retry after 3.464758587s: waiting for machine to come up
	I0819 19:12:45.232640  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:45.233118  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:45.233151  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:45.233066  439728 retry.go:31] will retry after 3.551527195s: waiting for machine to come up
	I0819 19:12:48.787197  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.787586  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has current primary IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.787611  438716 main.go:141] libmachine: (old-k8s-version-104669) Found IP for machine: 192.168.50.32
	I0819 19:12:48.787625  438716 main.go:141] libmachine: (old-k8s-version-104669) Reserving static IP address...
	I0819 19:12:48.788104  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "old-k8s-version-104669", mac: "52:54:00:8c:ff:a3", ip: "192.168.50.32"} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:48.788140  438716 main.go:141] libmachine: (old-k8s-version-104669) Reserved static IP address: 192.168.50.32
	I0819 19:12:48.788164  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | skip adding static IP to network mk-old-k8s-version-104669 - found existing host DHCP lease matching {name: "old-k8s-version-104669", mac: "52:54:00:8c:ff:a3", ip: "192.168.50.32"}
	I0819 19:12:48.788186  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | Getting to WaitForSSH function...
	I0819 19:12:48.788202  438716 main.go:141] libmachine: (old-k8s-version-104669) Waiting for SSH to be available...
	I0819 19:12:48.790365  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.790765  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:48.790793  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.790994  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | Using SSH client type: external
	I0819 19:12:48.791034  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | Using SSH private key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa (-rw-------)
	I0819 19:12:48.791073  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 19:12:48.791087  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | About to run SSH command:
	I0819 19:12:48.791103  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | exit 0
	I0819 19:12:48.920087  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | SSH cmd err, output: <nil>: 
	I0819 19:12:48.920464  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetConfigRaw
	I0819 19:12:48.921105  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetIP
	I0819 19:12:48.923637  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.924022  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:48.924053  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.924242  438716 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/config.json ...
	I0819 19:12:48.924429  438716 machine.go:93] provisionDockerMachine start ...
	I0819 19:12:48.924447  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:12:48.924655  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:48.926885  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.927345  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:48.927376  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.927527  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:48.927723  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:48.927846  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:48.927968  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:48.928241  438716 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:48.928453  438716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0819 19:12:48.928475  438716 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 19:12:49.039908  438716 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 19:12:49.039944  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetMachineName
	I0819 19:12:49.040200  438716 buildroot.go:166] provisioning hostname "old-k8s-version-104669"
	I0819 19:12:49.040236  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetMachineName
	I0819 19:12:49.040454  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:49.043462  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.043860  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.043892  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.044061  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:49.044256  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.044472  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.044613  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:49.044837  438716 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:49.045014  438716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0819 19:12:49.045027  438716 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-104669 && echo "old-k8s-version-104669" | sudo tee /etc/hostname
	I0819 19:12:49.170660  438716 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-104669
	
	I0819 19:12:49.170695  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:49.173564  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.173855  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.173882  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.174059  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:49.174239  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.174432  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.174564  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:49.174732  438716 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:49.174923  438716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0819 19:12:49.174941  438716 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-104669' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-104669/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-104669' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 19:12:49.298689  438716 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:12:49.298731  438716 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19468-372744/.minikube CaCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19468-372744/.minikube}
	I0819 19:12:49.298764  438716 buildroot.go:174] setting up certificates
	I0819 19:12:49.298778  438716 provision.go:84] configureAuth start
	I0819 19:12:49.298793  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetMachineName
	I0819 19:12:49.299157  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetIP
	I0819 19:12:49.301897  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.302290  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.302326  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.302462  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:49.304592  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.304960  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.304987  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.305150  438716 provision.go:143] copyHostCerts
	I0819 19:12:49.305219  438716 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem, removing ...
	I0819 19:12:49.305243  438716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 19:12:49.305310  438716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem (1082 bytes)
	I0819 19:12:49.305437  438716 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem, removing ...
	I0819 19:12:49.305449  438716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 19:12:49.305477  438716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem (1123 bytes)
	I0819 19:12:49.305571  438716 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem, removing ...
	I0819 19:12:49.305583  438716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 19:12:49.305612  438716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem (1675 bytes)
	I0819 19:12:49.305699  438716 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-104669 san=[127.0.0.1 192.168.50.32 localhost minikube old-k8s-version-104669]
	I0819 19:12:49.394004  438716 provision.go:177] copyRemoteCerts
	I0819 19:12:49.394074  438716 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 19:12:49.394112  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:49.396645  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.396906  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.396951  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.397108  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:49.397321  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.397504  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:49.397709  438716 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa Username:docker}
	I0819 19:12:49.483061  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 19:12:49.508297  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 19:12:49.533821  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0819 19:12:49.560064  438716 provision.go:87] duration metric: took 261.270909ms to configureAuth
	I0819 19:12:49.560093  438716 buildroot.go:189] setting minikube options for container-runtime
	I0819 19:12:49.560310  438716 config.go:182] Loaded profile config "old-k8s-version-104669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0819 19:12:49.560409  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:49.563173  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.563604  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.563633  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.563882  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:49.564075  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.564274  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.564479  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:49.564707  438716 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:49.564925  438716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0819 19:12:49.564948  438716 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 19:12:49.837237  438716 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 19:12:49.837267  438716 machine.go:96] duration metric: took 912.825625ms to provisionDockerMachine
	I0819 19:12:49.837281  438716 start.go:293] postStartSetup for "old-k8s-version-104669" (driver="kvm2")
	I0819 19:12:49.837297  438716 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 19:12:49.837341  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:12:49.837716  438716 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 19:12:49.837757  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:49.840409  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.840759  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.840789  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.840988  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:49.841183  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.841345  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:49.841473  438716 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa Username:docker}
	I0819 19:12:49.931067  438716 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 19:12:49.935562  438716 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 19:12:49.935590  438716 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/addons for local assets ...
	I0819 19:12:49.935694  438716 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/files for local assets ...
	I0819 19:12:49.935815  438716 filesync.go:149] local asset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> 3800092.pem in /etc/ssl/certs
	I0819 19:12:49.935941  438716 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 19:12:49.945418  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:12:49.969454  438716 start.go:296] duration metric: took 132.15677ms for postStartSetup
	I0819 19:12:49.969494  438716 fix.go:56] duration metric: took 20.836438665s for fixHost
	I0819 19:12:49.969517  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:49.972127  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.972502  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.972542  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.972758  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:49.973000  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.973190  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.973355  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:49.973548  438716 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:49.973753  438716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0819 19:12:49.973766  438716 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 19:12:50.084645  438716 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724094770.056929881
	
	I0819 19:12:50.084672  438716 fix.go:216] guest clock: 1724094770.056929881
	I0819 19:12:50.084681  438716 fix.go:229] Guest: 2024-08-19 19:12:50.056929881 +0000 UTC Remote: 2024-08-19 19:12:49.969497734 +0000 UTC m=+259.472837552 (delta=87.432147ms)
	I0819 19:12:50.084711  438716 fix.go:200] guest clock delta is within tolerance: 87.432147ms
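The clock-skew check above subtracts the host-side reading from the guest clock and accepts the result if it stays under a tolerance. A minimal Go sketch of the same arithmetic, using the two timestamps from the log lines above (the 2-second tolerance is an assumed illustrative value, not necessarily minikube's actual constant):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Timestamps taken verbatim from the log lines above.
    	guest := time.Unix(0, 1724094770056929881)                        // guest clock: 1724094770.056929881
    	remote := time.Date(2024, 8, 19, 19, 12, 49, 969497734, time.UTC) // host-side reading
    	delta := guest.Sub(remote)
    	if delta < 0 {
    		delta = -delta
    	}
    	const tolerance = 2 * time.Second // assumed tolerance, for illustration only
    	fmt.Printf("delta=%v, within tolerance: %v\n", delta, delta <= tolerance)
    	// Prints: delta=87.432147ms, within tolerance: true
    }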
	I0819 19:12:50.084718  438716 start.go:83] releasing machines lock for "old-k8s-version-104669", held for 20.951701853s
	I0819 19:12:50.084752  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:12:50.085050  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetIP
	I0819 19:12:50.087976  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:50.088363  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:50.088391  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:50.088572  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:12:50.089141  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:12:50.089360  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:12:50.089460  438716 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 19:12:50.089526  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:50.089572  438716 ssh_runner.go:195] Run: cat /version.json
	I0819 19:12:50.089599  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:50.092427  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:50.092591  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:50.092772  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:50.092797  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:50.092933  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:50.092965  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:50.092965  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:50.093147  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:50.093248  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:50.093328  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:50.093409  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:50.093503  438716 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa Username:docker}
	I0819 19:12:50.093532  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:50.093650  438716 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa Username:docker}
	I0819 19:12:50.177322  438716 ssh_runner.go:195] Run: systemctl --version
	I0819 19:12:50.200999  438716 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 19:12:50.349276  438716 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 19:12:50.357011  438716 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 19:12:50.357090  438716 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 19:12:50.377691  438716 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 19:12:50.377721  438716 start.go:495] detecting cgroup driver to use...
	I0819 19:12:50.377790  438716 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 19:12:50.394502  438716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 19:12:50.408481  438716 docker.go:217] disabling cri-docker service (if available) ...
	I0819 19:12:50.408556  438716 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 19:12:50.421818  438716 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 19:12:50.434899  438716 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 19:12:50.559399  438716 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 19:12:50.708621  438716 docker.go:233] disabling docker service ...
	I0819 19:12:50.708695  438716 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 19:12:50.726699  438716 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 19:12:50.740605  438716 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 19:12:50.896815  438716 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 19:12:51.037560  438716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 19:12:51.052554  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 19:12:51.072292  438716 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0819 19:12:51.072360  438716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:51.083248  438716 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 19:12:51.083334  438716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:51.093721  438716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:51.105212  438716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
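The three sed edits above pin the pause image, switch the cgroup manager to cgroupfs, and re-insert a conmon_cgroup line after it. Assuming a stock CRI-O drop-in where pause_image sits under [crio.image] and the cgroup settings under [crio.runtime] (the section headers themselves are not shown in the log), the relevant part of /etc/crio/crio.conf.d/02-crio.conf should end up looking roughly like:

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.2"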
	I0819 19:12:51.119349  438716 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 19:12:51.134647  438716 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 19:12:51.144553  438716 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 19:12:51.144598  438716 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 19:12:51.159151  438716 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 19:12:51.171260  438716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:12:51.328931  438716 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 19:12:51.500761  438716 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 19:12:51.500831  438716 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 19:12:51.505982  438716 start.go:563] Will wait 60s for crictl version
	I0819 19:12:51.506057  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:51.510447  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 19:12:51.552892  438716 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 19:12:51.552982  438716 ssh_runner.go:195] Run: crio --version
	I0819 19:12:51.581931  438716 ssh_runner.go:195] Run: crio --version
	I0819 19:12:51.614565  438716 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0819 19:12:51.615865  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetIP
	I0819 19:12:51.618782  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:51.619238  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:51.619268  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:51.619508  438716 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0819 19:12:51.624020  438716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
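The one-liner above updates /etc/hosts idempotently: drop any existing host.minikube.internal line, append the current mapping, and copy the temp file back with sudo. A minimal Go sketch of the same filter-and-append approach (an illustration of the technique, not minikube's own code; the final privileged copy is only noted in a comment):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		// Drop any previous host.minikube.internal entry, mirroring the grep -v above.
    		if strings.HasSuffix(line, "\thost.minikube.internal") {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, "192.168.50.1\thost.minikube.internal")
    	// Write to a temp file first, as the shell one-liner does with /tmp/h.$$.
    	tmp := fmt.Sprintf("/tmp/h.%d", os.Getpid())
    	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
    		panic(err)
    	}
    	// A privileged copy back to /etc/hosts would follow ("sudo cp" in the log).
    }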
	I0819 19:12:51.640765  438716 kubeadm.go:883] updating cluster {Name:old-k8s-version-104669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-104669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 19:12:51.640905  438716 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 19:12:51.640982  438716 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:12:51.696872  438716 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 19:12:51.696931  438716 ssh_runner.go:195] Run: which lz4
	I0819 19:12:51.702194  438716 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 19:12:51.707228  438716 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 19:12:51.707265  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0819 19:12:53.435062  438716 crio.go:462] duration metric: took 1.732918912s to copy over tarball
	I0819 19:12:53.435149  438716 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 19:12:56.399941  438716 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.96472478s)
	I0819 19:12:56.399971  438716 crio.go:469] duration metric: took 2.964877539s to extract the tarball
	I0819 19:12:56.399986  438716 ssh_runner.go:146] rm: /preloaded.tar.lz4
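For scale, the preload tarball is 473237281 bytes (about 451 MiB): copying it over in 1.732918912s works out to roughly 273 MB/s, and extracting it in about 2.96s to roughly 160 MB/s. A tiny Go check of that arithmetic:

    package main

    import "fmt"

    func main() {
    	const size = 473237281.0 // bytes, preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
    	copySec, extractSec := 1.732918912, 2.96472478
    	fmt.Printf("copy: %.0f MB/s, extract: %.0f MB/s\n", size/copySec/1e6, size/extractSec/1e6)
    	// Prints approximately: copy: 273 MB/s, extract: 160 MB/s
    }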
	I0819 19:12:56.447075  438716 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:12:56.491773  438716 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 19:12:56.491800  438716 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 19:12:56.491876  438716 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:12:56.491876  438716 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:12:56.491956  438716 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:12:56.491961  438716 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:12:56.492041  438716 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:12:56.492059  438716 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0819 19:12:56.492280  438716 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0819 19:12:56.492494  438716 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0819 19:12:56.493750  438716 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:12:56.493762  438716 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0819 19:12:56.493756  438716 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:12:56.493762  438716 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:12:56.493765  438716 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:12:56.493831  438716 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:12:56.493806  438716 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0819 19:12:56.494099  438716 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0819 19:12:56.694872  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0819 19:12:56.711504  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0819 19:12:56.754045  438716 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0819 19:12:56.754096  438716 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0819 19:12:56.754136  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:56.770451  438716 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0819 19:12:56.770510  438716 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0819 19:12:56.770574  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:56.770573  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 19:12:56.804839  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 19:12:56.804872  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 19:12:56.825837  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:12:56.832063  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0819 19:12:56.834072  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:12:56.837029  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:12:56.837697  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:12:56.902843  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 19:12:56.902930  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 19:12:57.020902  438716 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0819 19:12:57.020962  438716 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:12:57.020988  438716 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0819 19:12:57.021017  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:57.021025  438716 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0819 19:12:57.021098  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:57.023363  438716 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0819 19:12:57.023411  438716 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:12:57.023457  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:57.023541  438716 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0819 19:12:57.023569  438716 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:12:57.023605  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:57.034648  438716 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0819 19:12:57.034698  438716 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:12:57.034719  438716 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0819 19:12:57.034748  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:57.039577  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 19:12:57.039648  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 19:12:57.039715  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:12:57.041644  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:12:57.041983  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:12:57.045383  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:12:57.149677  438716 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0819 19:12:57.164701  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:12:57.164821  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 19:12:57.202353  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:12:57.202434  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:12:57.202465  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:12:57.258824  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 19:12:57.258858  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:12:57.285756  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:12:57.326148  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:12:57.326237  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:12:57.378322  438716 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0819 19:12:57.378369  438716 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0819 19:12:57.390369  438716 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0819 19:12:57.419554  438716 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0819 19:12:57.419627  438716 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0819 19:12:57.438485  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:12:57.583634  438716 cache_images.go:92] duration metric: took 1.091812972s to LoadCachedImages
	W0819 19:12:57.583757  438716 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0819 19:12:57.583777  438716 kubeadm.go:934] updating node { 192.168.50.32 8443 v1.20.0 crio true true} ...
	I0819 19:12:57.583915  438716 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-104669 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-104669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 19:12:57.584007  438716 ssh_runner.go:195] Run: crio config
	I0819 19:12:57.636714  438716 cni.go:84] Creating CNI manager for ""
	I0819 19:12:57.636738  438716 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:12:57.636752  438716 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 19:12:57.636776  438716 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.32 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-104669 NodeName:old-k8s-version-104669 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0819 19:12:57.636951  438716 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.32
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-104669"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 19:12:57.637028  438716 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0819 19:12:57.648002  438716 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 19:12:57.648093  438716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 19:12:57.658889  438716 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0819 19:12:57.677316  438716 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 19:12:57.695825  438716 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0819 19:12:57.715396  438716 ssh_runner.go:195] Run: grep 192.168.50.32	control-plane.minikube.internal$ /etc/hosts
	I0819 19:12:57.719886  438716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:12:57.733179  438716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:12:57.854139  438716 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:12:57.871590  438716 certs.go:68] Setting up /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669 for IP: 192.168.50.32
	I0819 19:12:57.871619  438716 certs.go:194] generating shared ca certs ...
	I0819 19:12:57.871642  438716 certs.go:226] acquiring lock for ca certs: {Name:mk639e03f593e0bccac045f6e9f5ba3b96cc81e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:57.871850  438716 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key
	I0819 19:12:57.871916  438716 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key
	I0819 19:12:57.871930  438716 certs.go:256] generating profile certs ...
	I0819 19:12:57.872060  438716 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/client.key
	I0819 19:12:57.872131  438716 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/apiserver.key.7101f8a0
	I0819 19:12:57.872197  438716 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/proxy-client.key
	I0819 19:12:57.872336  438716 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem (1338 bytes)
	W0819 19:12:57.872365  438716 certs.go:480] ignoring /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009_empty.pem, impossibly tiny 0 bytes
	I0819 19:12:57.872371  438716 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 19:12:57.872390  438716 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem (1082 bytes)
	I0819 19:12:57.872419  438716 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem (1123 bytes)
	I0819 19:12:57.872441  438716 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem (1675 bytes)
	I0819 19:12:57.872488  438716 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:12:57.873259  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 19:12:57.907576  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 19:12:57.943535  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 19:12:57.977770  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 19:12:58.021213  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0819 19:12:58.051043  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 19:12:58.080442  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 19:12:58.110888  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 19:12:58.158635  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 19:12:58.184168  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem --> /usr/share/ca-certificates/380009.pem (1338 bytes)
	I0819 19:12:58.210064  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /usr/share/ca-certificates/3800092.pem (1708 bytes)
	I0819 19:12:58.235366  438716 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 19:12:58.254667  438716 ssh_runner.go:195] Run: openssl version
	I0819 19:12:58.260977  438716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3800092.pem && ln -fs /usr/share/ca-certificates/3800092.pem /etc/ssl/certs/3800092.pem"
	I0819 19:12:58.272995  438716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3800092.pem
	I0819 19:12:58.278056  438716 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:56 /usr/share/ca-certificates/3800092.pem
	I0819 19:12:58.278154  438716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3800092.pem
	I0819 19:12:58.284420  438716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3800092.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 19:12:58.296945  438716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 19:12:58.309288  438716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:58.314695  438716 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:58.314774  438716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:58.321016  438716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 19:12:58.332728  438716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/380009.pem && ln -fs /usr/share/ca-certificates/380009.pem /etc/ssl/certs/380009.pem"
	I0819 19:12:58.344766  438716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/380009.pem
	I0819 19:12:58.349610  438716 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:56 /usr/share/ca-certificates/380009.pem
	I0819 19:12:58.349681  438716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/380009.pem
	I0819 19:12:58.355942  438716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/380009.pem /etc/ssl/certs/51391683.0"
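Each certificate above is installed twice: once under its own name and once under its OpenSSL subject-hash name (3ec20f2e.0, b5213941.0, 51391683.0), which is how OpenSSL locates CAs in /etc/ssl/certs. A small Go sketch of the same hash-then-symlink step, shelling out to the exact openssl invocation shown in the log (illustrative only, not minikube's code):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // hashLink creates /etc/ssl/certs/<subject-hash>.0 pointing at certPath,
    // mirroring the "openssl x509 -hash -noout" + "ln -fs" steps in the log.
    func hashLink(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "3ec20f2e"
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	// Skip if the link already exists; the shell version guards with "test -L".
    	if _, err := os.Lstat(link); err == nil {
    		return nil
    	}
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := hashLink("/etc/ssl/certs/3800092.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }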
	I0819 19:12:58.368869  438716 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 19:12:58.373681  438716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 19:12:58.380415  438716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 19:12:58.386741  438716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 19:12:58.393362  438716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 19:12:58.399665  438716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 19:12:58.406108  438716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
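The "-checkend 86400" runs above ask openssl whether each control-plane certificate will still be valid 86400 seconds (24 hours) from now; a failing check is what would prompt regenerating that certificate. The same check expressed in Go with crypto/x509 (a sketch, not minikube's code; the path is one of the files checked above):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// Equivalent of: openssl x509 -noout -checkend 86400
    	deadline := time.Now().Add(86400 * time.Second)
    	fmt.Println("still valid in 24h:", deadline.Before(cert.NotAfter))
    }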
	I0819 19:12:58.412486  438716 kubeadm.go:392] StartCluster: {Name:old-k8s-version-104669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-104669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:12:58.412606  438716 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 19:12:58.412655  438716 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:12:58.462379  438716 cri.go:89] found id: ""
	I0819 19:12:58.462463  438716 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 19:12:58.474029  438716 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 19:12:58.474054  438716 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 19:12:58.474112  438716 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 19:12:58.485755  438716 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:12:58.486762  438716 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-104669" does not appear in /home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 19:12:58.487464  438716 kubeconfig.go:62] /home/jenkins/minikube-integration/19468-372744/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-104669" cluster setting kubeconfig missing "old-k8s-version-104669" context setting]
	I0819 19:12:58.489361  438716 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/kubeconfig: {Name:mk8e7b4e1bb7da665111d2acd83eb48882c66853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:58.508865  438716 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 19:12:58.520577  438716 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.32
	I0819 19:12:58.520622  438716 kubeadm.go:1160] stopping kube-system containers ...
	I0819 19:12:58.520637  438716 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 19:12:58.520728  438716 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:12:58.561900  438716 cri.go:89] found id: ""
	I0819 19:12:58.561984  438716 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 19:12:58.580483  438716 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:12:58.591734  438716 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:12:58.591754  438716 kubeadm.go:157] found existing configuration files:
	
	I0819 19:12:58.591804  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 19:12:58.601694  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:12:58.601771  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:12:58.612132  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 19:12:58.621911  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:12:58.621984  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:12:58.631525  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 19:12:58.640802  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:12:58.640872  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:12:58.650216  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 19:12:58.660647  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:12:58.660720  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 19:12:58.669992  438716 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 19:12:58.679709  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:58.809302  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:59.757994  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:00.006386  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:00.136752  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:00.222424  438716 api_server.go:52] waiting for apiserver process to appear ...
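From here the runner simply polls for the kube-apiserver process, re-running the same pgrep roughly every half second (compare the timestamps that follow) until it appears or the wait gives up. A condensed Go sketch of that poll loop, using the same pgrep pattern as the log; the 500ms interval is read off the timestamps and the overall timeout is an assumption:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	const pattern = "kube-apiserver.*minikube.*"
    	deadline := time.Now().Add(4 * time.Minute) // assumed overall timeout
    	for time.Now().Before(deadline) {
    		// pgrep exits 0 as soon as a matching kube-apiserver process exists.
    		if err := exec.Command("sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
    			fmt.Println("apiserver process found")
    			return
    		}
    		time.Sleep(500 * time.Millisecond) // interval inferred from the log timestamps
    	}
    	fmt.Println("timed out waiting for the apiserver process")
    }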
	I0819 19:13:00.222542  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:00.723213  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:01.222908  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:01.723081  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:02.223465  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:02.722589  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:03.222706  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:03.722930  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:04.222826  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:04.722638  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:05.222666  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:05.723627  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:06.222663  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:06.723230  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:07.222666  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:07.722653  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:08.222861  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:08.723248  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:09.222831  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:09.722738  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:10.223069  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:10.722882  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:11.223650  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:11.722917  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:12.223146  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:12.723410  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:13.222692  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:13.722636  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:14.223152  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:14.722661  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:15.223297  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:15.723053  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:16.223486  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:16.722740  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:17.223337  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:17.723160  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:18.222651  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:18.723509  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:19.223686  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:19.723376  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:20.222953  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:20.723620  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:21.223286  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:21.723663  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:22.223594  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:22.723415  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:23.223643  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:23.723395  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:24.223476  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:24.723236  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:25.223620  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:25.722593  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:26.223582  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:26.722927  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:27.223364  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:27.723223  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:28.223458  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:28.723262  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:29.222823  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:29.722837  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:30.223196  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:30.723537  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:31.223437  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:31.723289  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:32.222714  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:32.723037  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:33.223138  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:33.723303  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:34.223334  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:34.722692  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:35.223021  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:35.722784  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:36.223168  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:36.723041  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:37.222801  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:37.722855  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:38.223296  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:38.722936  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:39.223326  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:39.722883  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:40.223284  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:40.722612  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:41.222700  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:41.723144  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:42.223369  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:42.723209  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:43.222849  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:43.723518  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:44.223585  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:44.722772  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:45.223078  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:45.723287  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:46.223666  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:46.722754  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:47.223414  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:47.723567  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:48.222938  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:48.723011  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:49.223076  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:49.723443  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:50.223627  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:50.723259  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:51.222697  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:51.723284  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:52.222757  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:52.723414  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:53.223202  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:53.722721  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:54.223578  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:54.723400  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:55.222730  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:55.723644  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:56.223212  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:56.722729  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:57.223226  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:57.723045  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:58.222901  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:58.722710  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:59.223149  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:59.723186  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:00.222763  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:00.222844  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:00.271266  438716 cri.go:89] found id: ""
	I0819 19:14:00.271296  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.271305  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:00.271312  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:00.271373  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:00.311870  438716 cri.go:89] found id: ""
	I0819 19:14:00.311900  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.311936  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:00.311946  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:00.312011  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:00.350476  438716 cri.go:89] found id: ""
	I0819 19:14:00.350505  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.350514  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:00.350520  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:00.350586  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:00.387404  438716 cri.go:89] found id: ""
	I0819 19:14:00.387438  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.387447  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:00.387457  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:00.387516  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:00.423493  438716 cri.go:89] found id: ""
	I0819 19:14:00.423521  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.423529  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:00.423535  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:00.423596  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:00.458593  438716 cri.go:89] found id: ""
	I0819 19:14:00.458630  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.458642  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:00.458651  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:00.458722  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:00.495645  438716 cri.go:89] found id: ""
	I0819 19:14:00.495695  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.495709  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:00.495717  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:00.495782  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:00.531464  438716 cri.go:89] found id: ""
	I0819 19:14:00.531498  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.531508  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:00.531529  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:00.531543  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:00.584029  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:00.584078  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:00.597870  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:00.597908  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:00.746061  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:00.746085  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:00.746098  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:00.818001  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:00.818042  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:03.358509  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:03.371262  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:03.371345  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:03.408201  438716 cri.go:89] found id: ""
	I0819 19:14:03.408231  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.408241  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:03.408248  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:03.408306  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:03.445354  438716 cri.go:89] found id: ""
	I0819 19:14:03.445386  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.445396  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:03.445408  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:03.445470  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:03.481144  438716 cri.go:89] found id: ""
	I0819 19:14:03.481178  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.481188  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:03.481195  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:03.481260  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:03.529069  438716 cri.go:89] found id: ""
	I0819 19:14:03.529109  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.529141  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:03.529148  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:03.529216  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:03.590325  438716 cri.go:89] found id: ""
	I0819 19:14:03.590364  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.590377  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:03.590386  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:03.590456  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:03.634924  438716 cri.go:89] found id: ""
	I0819 19:14:03.634969  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.634981  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:03.634990  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:03.635062  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:03.684133  438716 cri.go:89] found id: ""
	I0819 19:14:03.684164  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.684176  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:03.684184  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:03.684253  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:03.722285  438716 cri.go:89] found id: ""
	I0819 19:14:03.722312  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.722321  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:03.722330  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:03.722372  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:03.735937  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:03.735965  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:03.814906  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:03.814931  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:03.814948  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:03.896323  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:03.896363  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:03.943002  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:03.943037  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:06.496886  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:06.510719  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:06.510790  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:06.544692  438716 cri.go:89] found id: ""
	I0819 19:14:06.544724  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.544737  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:06.544747  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:06.544818  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:06.578935  438716 cri.go:89] found id: ""
	I0819 19:14:06.578962  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.578971  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:06.578979  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:06.579033  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:06.614488  438716 cri.go:89] found id: ""
	I0819 19:14:06.614516  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.614525  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:06.614532  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:06.614583  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:06.648579  438716 cri.go:89] found id: ""
	I0819 19:14:06.648612  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.648623  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:06.648630  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:06.648685  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:06.685168  438716 cri.go:89] found id: ""
	I0819 19:14:06.685198  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.685208  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:06.685217  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:06.685280  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:06.720391  438716 cri.go:89] found id: ""
	I0819 19:14:06.720424  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.720433  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:06.720440  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:06.720491  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:06.758183  438716 cri.go:89] found id: ""
	I0819 19:14:06.758217  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.758228  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:06.758237  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:06.758307  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:06.800182  438716 cri.go:89] found id: ""
	I0819 19:14:06.800215  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.800224  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:06.800234  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:06.800247  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:06.852735  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:06.852777  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:06.867214  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:06.867249  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:06.938942  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:06.938967  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:06.938980  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:07.023950  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:07.023992  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:09.568889  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:09.588481  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:09.588545  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:09.630790  438716 cri.go:89] found id: ""
	I0819 19:14:09.630825  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.630839  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:09.630848  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:09.630926  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:09.673258  438716 cri.go:89] found id: ""
	I0819 19:14:09.673291  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.673302  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:09.673311  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:09.673374  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:09.709500  438716 cri.go:89] found id: ""
	I0819 19:14:09.709530  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.709541  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:09.709549  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:09.709617  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:09.743110  438716 cri.go:89] found id: ""
	I0819 19:14:09.743139  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.743150  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:09.743164  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:09.743238  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:09.776717  438716 cri.go:89] found id: ""
	I0819 19:14:09.776746  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.776754  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:09.776761  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:09.776820  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:09.811381  438716 cri.go:89] found id: ""
	I0819 19:14:09.811409  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.811417  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:09.811423  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:09.811474  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:09.843699  438716 cri.go:89] found id: ""
	I0819 19:14:09.843730  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.843741  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:09.843750  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:09.843822  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:09.882972  438716 cri.go:89] found id: ""
	I0819 19:14:09.883005  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.883018  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:09.883033  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:09.883050  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:09.973077  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:09.973114  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:10.014505  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:10.014556  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:10.069779  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:10.069819  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:10.084337  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:10.084367  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:10.164870  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:12.665929  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:12.679881  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:12.679960  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:12.718305  438716 cri.go:89] found id: ""
	I0819 19:14:12.718332  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.718341  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:12.718348  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:12.718398  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:12.759084  438716 cri.go:89] found id: ""
	I0819 19:14:12.759112  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.759127  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:12.759135  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:12.759205  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:12.793193  438716 cri.go:89] found id: ""
	I0819 19:14:12.793228  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.793238  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:12.793245  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:12.793299  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:12.828283  438716 cri.go:89] found id: ""
	I0819 19:14:12.828310  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.828322  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:12.828329  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:12.828379  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:12.861971  438716 cri.go:89] found id: ""
	I0819 19:14:12.862004  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.862016  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:12.862025  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:12.862092  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:12.898173  438716 cri.go:89] found id: ""
	I0819 19:14:12.898203  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.898214  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:12.898223  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:12.898287  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:12.940203  438716 cri.go:89] found id: ""
	I0819 19:14:12.940234  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.940246  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:12.940254  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:12.940309  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:12.978092  438716 cri.go:89] found id: ""
	I0819 19:14:12.978123  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.978134  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:12.978147  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:12.978172  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:12.992082  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:12.992117  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:13.073609  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:13.073636  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:13.073649  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:13.153060  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:13.153105  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:13.196535  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:13.196581  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:15.750298  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:15.763913  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:15.763996  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:15.804515  438716 cri.go:89] found id: ""
	I0819 19:14:15.804542  438716 logs.go:276] 0 containers: []
	W0819 19:14:15.804551  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:15.804558  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:15.804624  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:15.847077  438716 cri.go:89] found id: ""
	I0819 19:14:15.847112  438716 logs.go:276] 0 containers: []
	W0819 19:14:15.847125  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:15.847133  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:15.847200  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:15.882316  438716 cri.go:89] found id: ""
	I0819 19:14:15.882348  438716 logs.go:276] 0 containers: []
	W0819 19:14:15.882358  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:15.882365  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:15.882417  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:15.919084  438716 cri.go:89] found id: ""
	I0819 19:14:15.919114  438716 logs.go:276] 0 containers: []
	W0819 19:14:15.919125  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:15.919132  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:15.919202  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:15.953139  438716 cri.go:89] found id: ""
	I0819 19:14:15.953175  438716 logs.go:276] 0 containers: []
	W0819 19:14:15.953188  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:15.953209  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:15.953276  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:15.993231  438716 cri.go:89] found id: ""
	I0819 19:14:15.993259  438716 logs.go:276] 0 containers: []
	W0819 19:14:15.993268  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:15.993286  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:15.993337  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:16.030382  438716 cri.go:89] found id: ""
	I0819 19:14:16.030412  438716 logs.go:276] 0 containers: []
	W0819 19:14:16.030422  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:16.030428  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:16.030482  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:16.065834  438716 cri.go:89] found id: ""
	I0819 19:14:16.065861  438716 logs.go:276] 0 containers: []
	W0819 19:14:16.065872  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:16.065885  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:16.065901  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:16.117943  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:16.117983  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:16.132010  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:16.132041  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:16.202398  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:16.202416  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:16.202429  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:16.286609  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:16.286653  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:18.830502  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:18.844022  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:18.844107  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:18.880539  438716 cri.go:89] found id: ""
	I0819 19:14:18.880576  438716 logs.go:276] 0 containers: []
	W0819 19:14:18.880588  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:18.880595  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:18.880657  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:18.918426  438716 cri.go:89] found id: ""
	I0819 19:14:18.918454  438716 logs.go:276] 0 containers: []
	W0819 19:14:18.918463  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:18.918470  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:18.918531  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:18.954534  438716 cri.go:89] found id: ""
	I0819 19:14:18.954566  438716 logs.go:276] 0 containers: []
	W0819 19:14:18.954578  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:18.954587  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:18.954651  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:18.993820  438716 cri.go:89] found id: ""
	I0819 19:14:18.993852  438716 logs.go:276] 0 containers: []
	W0819 19:14:18.993864  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:18.993885  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:18.993967  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:19.026947  438716 cri.go:89] found id: ""
	I0819 19:14:19.026982  438716 logs.go:276] 0 containers: []
	W0819 19:14:19.026995  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:19.027005  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:19.027072  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:19.062097  438716 cri.go:89] found id: ""
	I0819 19:14:19.062130  438716 logs.go:276] 0 containers: []
	W0819 19:14:19.062142  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:19.062150  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:19.062207  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:19.099522  438716 cri.go:89] found id: ""
	I0819 19:14:19.099549  438716 logs.go:276] 0 containers: []
	W0819 19:14:19.099559  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:19.099567  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:19.099630  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:19.134766  438716 cri.go:89] found id: ""
	I0819 19:14:19.134803  438716 logs.go:276] 0 containers: []
	W0819 19:14:19.134815  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:19.134850  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:19.134867  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:19.176428  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:19.176458  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:19.231448  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:19.231484  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:19.245631  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:19.245687  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:19.318679  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:19.318703  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:19.318717  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:21.898430  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:21.913840  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:21.913911  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:21.955682  438716 cri.go:89] found id: ""
	I0819 19:14:21.955720  438716 logs.go:276] 0 containers: []
	W0819 19:14:21.955732  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:21.955743  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:21.955820  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:21.994798  438716 cri.go:89] found id: ""
	I0819 19:14:21.994836  438716 logs.go:276] 0 containers: []
	W0819 19:14:21.994845  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:21.994852  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:21.994904  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:22.029155  438716 cri.go:89] found id: ""
	I0819 19:14:22.029191  438716 logs.go:276] 0 containers: []
	W0819 19:14:22.029202  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:22.029210  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:22.029281  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:22.072489  438716 cri.go:89] found id: ""
	I0819 19:14:22.072534  438716 logs.go:276] 0 containers: []
	W0819 19:14:22.072546  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:22.072559  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:22.072621  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:22.109160  438716 cri.go:89] found id: ""
	I0819 19:14:22.109192  438716 logs.go:276] 0 containers: []
	W0819 19:14:22.109203  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:22.109211  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:22.109281  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:22.146161  438716 cri.go:89] found id: ""
	I0819 19:14:22.146194  438716 logs.go:276] 0 containers: []
	W0819 19:14:22.146206  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:22.146215  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:22.146276  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:22.183005  438716 cri.go:89] found id: ""
	I0819 19:14:22.183033  438716 logs.go:276] 0 containers: []
	W0819 19:14:22.183046  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:22.183054  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:22.183108  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:22.220745  438716 cri.go:89] found id: ""
	I0819 19:14:22.220772  438716 logs.go:276] 0 containers: []
	W0819 19:14:22.220784  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:22.220798  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:22.220817  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:22.297377  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:22.297403  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:22.297416  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:22.373503  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:22.373542  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:22.414922  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:22.414956  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:22.477902  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:22.477944  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:24.993405  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:25.007305  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:25.007379  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:25.041157  438716 cri.go:89] found id: ""
	I0819 19:14:25.041191  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.041203  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:25.041211  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:25.041278  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:25.078572  438716 cri.go:89] found id: ""
	I0819 19:14:25.078605  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.078617  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:25.078625  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:25.078695  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:25.114571  438716 cri.go:89] found id: ""
	I0819 19:14:25.114603  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.114615  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:25.114624  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:25.114690  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:25.154341  438716 cri.go:89] found id: ""
	I0819 19:14:25.154366  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.154375  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:25.154381  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:25.154434  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:25.192592  438716 cri.go:89] found id: ""
	I0819 19:14:25.192620  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.192631  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:25.192640  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:25.192705  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:25.227813  438716 cri.go:89] found id: ""
	I0819 19:14:25.227847  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.227860  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:25.227869  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:25.227933  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:25.264321  438716 cri.go:89] found id: ""
	I0819 19:14:25.264349  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.264357  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:25.264364  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:25.264427  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:25.298562  438716 cri.go:89] found id: ""
	I0819 19:14:25.298596  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.298608  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:25.298621  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:25.298638  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:25.352659  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:25.352695  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:25.366638  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:25.366665  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:25.432964  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:25.432992  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:25.433010  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:25.511487  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:25.511549  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:28.057003  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:28.070849  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:28.070914  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:28.107817  438716 cri.go:89] found id: ""
	I0819 19:14:28.107852  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.107865  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:28.107875  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:28.107948  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:28.141816  438716 cri.go:89] found id: ""
	I0819 19:14:28.141862  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.141874  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:28.141887  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:28.141958  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:28.179854  438716 cri.go:89] found id: ""
	I0819 19:14:28.179885  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.179893  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:28.179905  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:28.179972  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:28.217335  438716 cri.go:89] found id: ""
	I0819 19:14:28.217364  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.217372  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:28.217380  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:28.217438  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:28.254161  438716 cri.go:89] found id: ""
	I0819 19:14:28.254193  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.254204  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:28.254213  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:28.254276  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:28.288658  438716 cri.go:89] found id: ""
	I0819 19:14:28.288682  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.288691  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:28.288698  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:28.288749  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:28.321957  438716 cri.go:89] found id: ""
	I0819 19:14:28.321987  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.321996  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:28.322004  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:28.322057  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:28.355032  438716 cri.go:89] found id: ""
	I0819 19:14:28.355068  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.355080  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:28.355094  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:28.355111  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:28.406220  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:28.406253  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:28.420877  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:28.420907  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:28.502576  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:28.502598  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:28.502614  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:28.582717  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:28.582769  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:31.121960  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:31.135502  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:31.135568  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:31.170423  438716 cri.go:89] found id: ""
	I0819 19:14:31.170451  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.170461  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:31.170467  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:31.170532  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:31.207328  438716 cri.go:89] found id: ""
	I0819 19:14:31.207356  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.207364  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:31.207370  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:31.207430  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:31.245655  438716 cri.go:89] found id: ""
	I0819 19:14:31.245687  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.245698  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:31.245707  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:31.245773  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:31.282174  438716 cri.go:89] found id: ""
	I0819 19:14:31.282208  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.282221  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:31.282230  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:31.282303  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:31.316779  438716 cri.go:89] found id: ""
	I0819 19:14:31.316810  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.316818  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:31.316826  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:31.316879  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:31.356849  438716 cri.go:89] found id: ""
	I0819 19:14:31.356884  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.356894  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:31.356900  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:31.356963  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:31.395102  438716 cri.go:89] found id: ""
	I0819 19:14:31.395135  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.395143  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:31.395150  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:31.395205  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:31.433018  438716 cri.go:89] found id: ""
	I0819 19:14:31.433045  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.433076  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:31.433091  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:31.433108  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:31.446294  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:31.446319  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:31.518158  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:31.518180  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:31.518196  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:31.600568  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:31.600611  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:31.642356  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:31.642386  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:34.195665  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:34.210300  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:34.210370  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:34.248715  438716 cri.go:89] found id: ""
	I0819 19:14:34.248753  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.248767  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:34.248775  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:34.248849  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:34.285305  438716 cri.go:89] found id: ""
	I0819 19:14:34.285334  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.285347  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:34.285355  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:34.285438  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:34.326114  438716 cri.go:89] found id: ""
	I0819 19:14:34.326148  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.326160  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:34.326168  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:34.326235  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:34.360587  438716 cri.go:89] found id: ""
	I0819 19:14:34.360616  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.360628  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:34.360638  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:34.360715  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:34.397452  438716 cri.go:89] found id: ""
	I0819 19:14:34.397483  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.397491  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:34.397498  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:34.397556  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:34.433651  438716 cri.go:89] found id: ""
	I0819 19:14:34.433683  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.433694  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:34.433702  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:34.433771  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:34.468758  438716 cri.go:89] found id: ""
	I0819 19:14:34.468787  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.468796  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:34.468802  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:34.468856  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:34.505787  438716 cri.go:89] found id: ""
	I0819 19:14:34.505816  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.505828  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:34.505842  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:34.505859  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:34.519430  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:34.519463  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:34.592785  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:34.592810  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:34.592827  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:34.671215  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:34.671254  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:34.711248  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:34.711277  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:37.265131  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:37.279035  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:37.279127  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:37.325556  438716 cri.go:89] found id: ""
	I0819 19:14:37.325589  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.325601  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:37.325610  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:37.325676  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:37.360514  438716 cri.go:89] found id: ""
	I0819 19:14:37.360541  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.360553  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:37.360561  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:37.360616  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:37.394428  438716 cri.go:89] found id: ""
	I0819 19:14:37.394456  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.394465  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:37.394472  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:37.394531  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:37.430221  438716 cri.go:89] found id: ""
	I0819 19:14:37.430249  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.430257  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:37.430264  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:37.430324  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:37.466598  438716 cri.go:89] found id: ""
	I0819 19:14:37.466630  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.466641  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:37.466649  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:37.466719  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:37.510455  438716 cri.go:89] found id: ""
	I0819 19:14:37.510484  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.510492  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:37.510499  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:37.510563  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:37.546122  438716 cri.go:89] found id: ""
	I0819 19:14:37.546157  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.546169  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:37.546178  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:37.546247  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:37.579425  438716 cri.go:89] found id: ""
	I0819 19:14:37.579452  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.579463  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:37.579475  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:37.579491  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:37.592673  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:37.592704  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:37.674026  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:37.674048  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:37.674065  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:37.752206  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:37.752244  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:37.791281  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:37.791321  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:40.345520  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:40.358771  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:40.358835  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:40.394515  438716 cri.go:89] found id: ""
	I0819 19:14:40.394549  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.394565  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:40.394575  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:40.394637  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:40.430971  438716 cri.go:89] found id: ""
	I0819 19:14:40.431007  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.431018  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:40.431027  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:40.431094  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:40.471417  438716 cri.go:89] found id: ""
	I0819 19:14:40.471443  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.471452  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:40.471458  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:40.471511  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:40.508641  438716 cri.go:89] found id: ""
	I0819 19:14:40.508670  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.508678  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:40.508684  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:40.508749  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:40.542418  438716 cri.go:89] found id: ""
	I0819 19:14:40.542456  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.542465  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:40.542472  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:40.542533  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:40.577367  438716 cri.go:89] found id: ""
	I0819 19:14:40.577399  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.577408  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:40.577414  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:40.577476  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:40.611111  438716 cri.go:89] found id: ""
	I0819 19:14:40.611138  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.611147  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:40.611155  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:40.611222  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:40.650769  438716 cri.go:89] found id: ""
	I0819 19:14:40.650797  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.650805  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:40.650814  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:40.650827  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:40.688085  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:40.688111  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:40.740187  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:40.740225  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:40.754774  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:40.754803  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:40.828689  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:40.828712  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:40.828728  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:43.419171  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:43.432127  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:43.432201  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:43.468751  438716 cri.go:89] found id: ""
	I0819 19:14:43.468778  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.468787  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:43.468803  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:43.468870  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:43.503290  438716 cri.go:89] found id: ""
	I0819 19:14:43.503319  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.503328  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:43.503334  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:43.503390  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:43.536382  438716 cri.go:89] found id: ""
	I0819 19:14:43.536416  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.536435  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:43.536443  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:43.536494  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:43.571570  438716 cri.go:89] found id: ""
	I0819 19:14:43.571602  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.571611  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:43.571617  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:43.571682  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:43.610421  438716 cri.go:89] found id: ""
	I0819 19:14:43.610455  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.610465  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:43.610473  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:43.610524  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:43.647173  438716 cri.go:89] found id: ""
	I0819 19:14:43.647200  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.647209  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:43.647215  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:43.647266  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:43.684493  438716 cri.go:89] found id: ""
	I0819 19:14:43.684525  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.684535  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:43.684541  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:43.684609  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:43.718781  438716 cri.go:89] found id: ""
	I0819 19:14:43.718811  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.718822  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:43.718834  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:43.718858  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:43.732546  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:43.732578  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:43.819640  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:43.819665  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:43.819700  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:43.900246  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:43.900286  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:43.941751  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:43.941783  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:46.498232  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:46.511167  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:46.511237  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:46.545493  438716 cri.go:89] found id: ""
	I0819 19:14:46.545528  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.545541  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:46.545549  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:46.545607  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:46.580599  438716 cri.go:89] found id: ""
	I0819 19:14:46.580626  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.580634  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:46.580640  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:46.580760  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:46.614515  438716 cri.go:89] found id: ""
	I0819 19:14:46.614551  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.614561  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:46.614570  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:46.614637  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:46.647767  438716 cri.go:89] found id: ""
	I0819 19:14:46.647803  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.647816  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:46.647825  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:46.647893  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:46.681660  438716 cri.go:89] found id: ""
	I0819 19:14:46.681695  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.681707  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:46.681717  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:46.681788  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:46.718828  438716 cri.go:89] found id: ""
	I0819 19:14:46.718858  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.718868  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:46.718875  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:46.718929  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:46.760524  438716 cri.go:89] found id: ""
	I0819 19:14:46.760553  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.760561  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:46.760569  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:46.760634  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:46.799014  438716 cri.go:89] found id: ""
	I0819 19:14:46.799042  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.799054  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:46.799067  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:46.799135  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:46.850769  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:46.850812  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:46.865647  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:46.865698  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:46.942197  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:46.942228  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:46.942244  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:47.019295  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:47.019337  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:49.562713  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:49.575406  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:49.575484  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:49.610067  438716 cri.go:89] found id: ""
	I0819 19:14:49.610105  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.610115  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:49.610121  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:49.610182  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:49.646164  438716 cri.go:89] found id: ""
	I0819 19:14:49.646205  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.646230  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:49.646238  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:49.646317  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:49.680268  438716 cri.go:89] found id: ""
	I0819 19:14:49.680303  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.680314  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:49.680322  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:49.680387  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:49.714952  438716 cri.go:89] found id: ""
	I0819 19:14:49.714981  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.714992  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:49.715001  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:49.715067  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:49.749483  438716 cri.go:89] found id: ""
	I0819 19:14:49.749516  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.749528  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:49.749537  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:49.749616  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:49.794506  438716 cri.go:89] found id: ""
	I0819 19:14:49.794538  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.794550  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:49.794558  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:49.794628  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:49.847284  438716 cri.go:89] found id: ""
	I0819 19:14:49.847313  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.847324  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:49.847334  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:49.847398  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:49.903800  438716 cri.go:89] found id: ""
	I0819 19:14:49.903829  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.903839  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:49.903850  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:49.903867  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:49.972836  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:49.972866  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:49.972885  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:50.049939  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:50.049976  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:50.086514  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:50.086550  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:50.140681  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:50.140718  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:52.656573  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:52.670043  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:52.670124  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:52.704514  438716 cri.go:89] found id: ""
	I0819 19:14:52.704541  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.704551  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:52.704558  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:52.704621  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:52.738329  438716 cri.go:89] found id: ""
	I0819 19:14:52.738357  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.738365  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:52.738371  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:52.738423  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:52.774886  438716 cri.go:89] found id: ""
	I0819 19:14:52.774917  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.774926  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:52.774933  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:52.774986  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:52.810262  438716 cri.go:89] found id: ""
	I0819 19:14:52.810288  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.810296  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:52.810303  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:52.810363  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:52.848429  438716 cri.go:89] found id: ""
	I0819 19:14:52.848455  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.848463  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:52.848474  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:52.848539  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:52.886135  438716 cri.go:89] found id: ""
	I0819 19:14:52.886163  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.886179  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:52.886185  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:52.886241  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:52.923288  438716 cri.go:89] found id: ""
	I0819 19:14:52.923314  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.923325  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:52.923333  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:52.923397  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:52.957273  438716 cri.go:89] found id: ""
	I0819 19:14:52.957303  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.957315  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:52.957328  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:52.957345  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:52.970687  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:52.970714  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:53.045081  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:53.045108  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:53.045125  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:53.122233  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:53.122279  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:53.161525  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:53.161554  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:55.714177  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:55.733726  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:55.733809  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:55.781435  438716 cri.go:89] found id: ""
	I0819 19:14:55.781472  438716 logs.go:276] 0 containers: []
	W0819 19:14:55.781485  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:55.781493  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:55.781560  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:55.846316  438716 cri.go:89] found id: ""
	I0819 19:14:55.846351  438716 logs.go:276] 0 containers: []
	W0819 19:14:55.846362  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:55.846370  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:55.846439  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:55.881587  438716 cri.go:89] found id: ""
	I0819 19:14:55.881623  438716 logs.go:276] 0 containers: []
	W0819 19:14:55.881635  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:55.881644  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:55.881719  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:55.919332  438716 cri.go:89] found id: ""
	I0819 19:14:55.919374  438716 logs.go:276] 0 containers: []
	W0819 19:14:55.919382  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:55.919389  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:55.919441  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:55.954704  438716 cri.go:89] found id: ""
	I0819 19:14:55.954739  438716 logs.go:276] 0 containers: []
	W0819 19:14:55.954752  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:55.954761  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:55.954836  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:55.989289  438716 cri.go:89] found id: ""
	I0819 19:14:55.989321  438716 logs.go:276] 0 containers: []
	W0819 19:14:55.989332  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:55.989340  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:55.989406  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:56.025771  438716 cri.go:89] found id: ""
	I0819 19:14:56.025800  438716 logs.go:276] 0 containers: []
	W0819 19:14:56.025809  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:56.025816  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:56.025883  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:56.065631  438716 cri.go:89] found id: ""
	I0819 19:14:56.065673  438716 logs.go:276] 0 containers: []
	W0819 19:14:56.065686  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:56.065699  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:56.065722  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:56.119482  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:56.119523  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:56.133885  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:56.133915  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:56.207012  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:56.207033  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:56.207045  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:56.288158  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:56.288195  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:58.829677  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:58.844085  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:58.844158  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:58.880900  438716 cri.go:89] found id: ""
	I0819 19:14:58.880934  438716 logs.go:276] 0 containers: []
	W0819 19:14:58.880945  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:58.880951  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:58.881016  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:58.918833  438716 cri.go:89] found id: ""
	I0819 19:14:58.918862  438716 logs.go:276] 0 containers: []
	W0819 19:14:58.918872  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:58.918881  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:58.918939  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:58.956577  438716 cri.go:89] found id: ""
	I0819 19:14:58.956612  438716 logs.go:276] 0 containers: []
	W0819 19:14:58.956623  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:58.956634  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:58.956705  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:58.993884  438716 cri.go:89] found id: ""
	I0819 19:14:58.993914  438716 logs.go:276] 0 containers: []
	W0819 19:14:58.993923  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:58.993930  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:58.993988  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:59.031366  438716 cri.go:89] found id: ""
	I0819 19:14:59.031389  438716 logs.go:276] 0 containers: []
	W0819 19:14:59.031398  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:59.031405  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:59.031464  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:59.072014  438716 cri.go:89] found id: ""
	I0819 19:14:59.072047  438716 logs.go:276] 0 containers: []
	W0819 19:14:59.072058  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:59.072065  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:59.072129  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:59.108713  438716 cri.go:89] found id: ""
	I0819 19:14:59.108744  438716 logs.go:276] 0 containers: []
	W0819 19:14:59.108756  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:59.108765  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:59.108866  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:59.147599  438716 cri.go:89] found id: ""
	I0819 19:14:59.147634  438716 logs.go:276] 0 containers: []
	W0819 19:14:59.147647  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:59.147659  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:59.147695  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:59.224745  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:59.224781  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:59.264586  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:59.264616  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:59.317065  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:59.317104  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:59.331230  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:59.331264  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:59.398370  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:01.899123  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:01.912743  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:01.912824  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:01.949717  438716 cri.go:89] found id: ""
	I0819 19:15:01.949748  438716 logs.go:276] 0 containers: []
	W0819 19:15:01.949756  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:01.949763  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:01.949819  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:01.992776  438716 cri.go:89] found id: ""
	I0819 19:15:01.992802  438716 logs.go:276] 0 containers: []
	W0819 19:15:01.992812  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:01.992819  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:01.992884  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:02.030551  438716 cri.go:89] found id: ""
	I0819 19:15:02.030579  438716 logs.go:276] 0 containers: []
	W0819 19:15:02.030592  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:02.030600  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:02.030672  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:02.069927  438716 cri.go:89] found id: ""
	I0819 19:15:02.069955  438716 logs.go:276] 0 containers: []
	W0819 19:15:02.069964  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:02.069971  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:02.070031  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:02.106584  438716 cri.go:89] found id: ""
	I0819 19:15:02.106609  438716 logs.go:276] 0 containers: []
	W0819 19:15:02.106619  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:02.106629  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:02.106695  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:02.145007  438716 cri.go:89] found id: ""
	I0819 19:15:02.145035  438716 logs.go:276] 0 containers: []
	W0819 19:15:02.145044  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:02.145051  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:02.145113  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:02.180693  438716 cri.go:89] found id: ""
	I0819 19:15:02.180730  438716 logs.go:276] 0 containers: []
	W0819 19:15:02.180741  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:02.180748  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:02.180800  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:02.215563  438716 cri.go:89] found id: ""
	I0819 19:15:02.215597  438716 logs.go:276] 0 containers: []
	W0819 19:15:02.215609  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:02.215623  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:02.215641  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:02.285658  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:02.285692  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:02.285711  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:02.363620  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:02.363660  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:02.414240  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:02.414274  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:02.467336  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:02.467380  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:04.981935  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:04.995537  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:04.995611  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:05.032700  438716 cri.go:89] found id: ""
	I0819 19:15:05.032735  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.032748  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:05.032756  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:05.032827  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:05.069132  438716 cri.go:89] found id: ""
	I0819 19:15:05.069162  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.069173  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:05.069181  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:05.069247  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:05.105320  438716 cri.go:89] found id: ""
	I0819 19:15:05.105346  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.105355  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:05.105361  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:05.105421  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:05.142311  438716 cri.go:89] found id: ""
	I0819 19:15:05.142343  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.142354  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:05.142362  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:05.142412  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:05.177398  438716 cri.go:89] found id: ""
	I0819 19:15:05.177426  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.177437  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:05.177450  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:05.177506  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:05.212749  438716 cri.go:89] found id: ""
	I0819 19:15:05.212780  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.212789  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:05.212796  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:05.212854  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:05.246325  438716 cri.go:89] found id: ""
	I0819 19:15:05.246356  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.246364  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:05.246371  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:05.246420  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:05.287429  438716 cri.go:89] found id: ""
	I0819 19:15:05.287456  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.287466  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:05.287476  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:05.287489  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:05.338742  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:05.338787  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:05.352948  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:05.352978  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:05.421478  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:05.421502  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:05.421529  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:05.497772  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:05.497809  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:08.040403  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:08.053761  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:08.053827  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:08.087047  438716 cri.go:89] found id: ""
	I0819 19:15:08.087073  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.087082  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:08.087089  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:08.087140  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:08.122012  438716 cri.go:89] found id: ""
	I0819 19:15:08.122048  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.122059  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:08.122068  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:08.122134  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:08.155319  438716 cri.go:89] found id: ""
	I0819 19:15:08.155349  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.155360  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:08.155368  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:08.155447  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:08.196003  438716 cri.go:89] found id: ""
	I0819 19:15:08.196027  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.196035  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:08.196041  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:08.196091  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:08.230798  438716 cri.go:89] found id: ""
	I0819 19:15:08.230826  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.230836  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:08.230845  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:08.230910  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:08.267522  438716 cri.go:89] found id: ""
	I0819 19:15:08.267554  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.267562  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:08.267569  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:08.267621  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:08.304775  438716 cri.go:89] found id: ""
	I0819 19:15:08.304801  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.304809  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:08.304815  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:08.304866  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:08.344694  438716 cri.go:89] found id: ""
	I0819 19:15:08.344720  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.344734  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:08.344744  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:08.344757  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:08.383581  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:08.383619  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:08.433868  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:08.433905  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:08.447627  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:08.447657  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:08.518846  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:08.518869  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:08.518887  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:11.104449  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:11.118149  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:11.118228  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:11.157917  438716 cri.go:89] found id: ""
	I0819 19:15:11.157951  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.157963  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:11.157971  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:11.158040  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:11.196685  438716 cri.go:89] found id: ""
	I0819 19:15:11.196711  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.196721  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:11.196729  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:11.196788  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:11.231089  438716 cri.go:89] found id: ""
	I0819 19:15:11.231124  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.231135  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:11.231144  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:11.231223  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:11.267001  438716 cri.go:89] found id: ""
	I0819 19:15:11.267032  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.267041  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:11.267048  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:11.267113  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:11.302178  438716 cri.go:89] found id: ""
	I0819 19:15:11.302210  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.302223  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:11.302232  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:11.302292  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:11.336335  438716 cri.go:89] found id: ""
	I0819 19:15:11.336368  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.336442  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:11.336458  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:11.336525  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:11.370891  438716 cri.go:89] found id: ""
	I0819 19:15:11.370926  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.370937  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:11.370945  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:11.371007  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:11.407439  438716 cri.go:89] found id: ""
	I0819 19:15:11.407466  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.407473  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:11.407482  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:11.407497  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:11.458692  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:11.458735  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:11.473104  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:11.473133  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:11.542004  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:11.542031  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:11.542050  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:11.619972  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:11.620014  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:14.159220  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:14.173135  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:14.173204  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:14.210347  438716 cri.go:89] found id: ""
	I0819 19:15:14.210377  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.210389  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:14.210398  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:14.210468  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:14.247143  438716 cri.go:89] found id: ""
	I0819 19:15:14.247169  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.247180  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:14.247187  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:14.247260  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:14.284949  438716 cri.go:89] found id: ""
	I0819 19:15:14.284981  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.284995  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:14.285003  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:14.285071  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:14.326801  438716 cri.go:89] found id: ""
	I0819 19:15:14.326826  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.326834  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:14.326842  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:14.326903  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:14.362730  438716 cri.go:89] found id: ""
	I0819 19:15:14.362764  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.362775  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:14.362783  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:14.362852  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:14.403406  438716 cri.go:89] found id: ""
	I0819 19:15:14.403437  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.403448  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:14.403456  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:14.403514  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:14.440641  438716 cri.go:89] found id: ""
	I0819 19:15:14.440670  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.440678  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:14.440685  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:14.440737  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:14.479477  438716 cri.go:89] found id: ""
	I0819 19:15:14.479511  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.479521  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:14.479530  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:14.479544  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:14.530573  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:14.530620  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:14.545329  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:14.545368  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:14.619632  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:14.619652  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:14.619680  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:14.694923  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:14.694956  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:17.237830  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:17.250579  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:17.250645  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:17.284706  438716 cri.go:89] found id: ""
	I0819 19:15:17.284738  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.284750  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:17.284759  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:17.284832  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:17.320313  438716 cri.go:89] found id: ""
	I0819 19:15:17.320342  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.320350  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:17.320356  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:17.320419  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:17.355974  438716 cri.go:89] found id: ""
	I0819 19:15:17.356008  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.356018  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:17.356027  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:17.356093  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:17.390759  438716 cri.go:89] found id: ""
	I0819 19:15:17.390786  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.390795  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:17.390803  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:17.390861  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:17.431951  438716 cri.go:89] found id: ""
	I0819 19:15:17.431982  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.431993  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:17.432001  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:17.432068  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:17.467183  438716 cri.go:89] found id: ""
	I0819 19:15:17.467215  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.467227  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:17.467236  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:17.467306  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:17.502678  438716 cri.go:89] found id: ""
	I0819 19:15:17.502709  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.502721  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:17.502730  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:17.502801  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:17.537597  438716 cri.go:89] found id: ""
	I0819 19:15:17.537629  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.537643  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:17.537656  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:17.537672  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:17.620076  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:17.620117  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:17.659979  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:17.660009  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:17.710963  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:17.711006  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:17.725556  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:17.725590  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:17.796176  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:20.297246  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:20.311395  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:20.311476  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:20.352279  438716 cri.go:89] found id: ""
	I0819 19:15:20.352317  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.352328  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:20.352338  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:20.352401  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:20.390335  438716 cri.go:89] found id: ""
	I0819 19:15:20.390368  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.390377  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:20.390384  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:20.390450  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:20.430264  438716 cri.go:89] found id: ""
	I0819 19:15:20.430300  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.430312  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:20.430320  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:20.430386  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:20.469670  438716 cri.go:89] found id: ""
	I0819 19:15:20.469703  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.469715  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:20.469723  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:20.469790  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:20.503233  438716 cri.go:89] found id: ""
	I0819 19:15:20.503263  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.503274  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:20.503283  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:20.503371  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:20.538180  438716 cri.go:89] found id: ""
	I0819 19:15:20.538211  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.538223  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:20.538231  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:20.538302  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:20.573301  438716 cri.go:89] found id: ""
	I0819 19:15:20.573329  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.573337  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:20.573352  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:20.573411  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:20.606962  438716 cri.go:89] found id: ""
	I0819 19:15:20.606995  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.607007  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:20.607019  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:20.607035  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:20.658392  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:20.658428  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:20.672063  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:20.672092  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:20.747987  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:20.748010  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:20.748035  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:20.829367  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:20.829415  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:23.378885  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:23.393711  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:23.393778  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:23.430629  438716 cri.go:89] found id: ""
	I0819 19:15:23.430655  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.430665  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:23.430675  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:23.430727  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:23.467509  438716 cri.go:89] found id: ""
	I0819 19:15:23.467541  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.467552  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:23.467560  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:23.467634  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:23.505313  438716 cri.go:89] found id: ""
	I0819 19:15:23.505351  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.505359  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:23.505366  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:23.505416  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:23.543393  438716 cri.go:89] found id: ""
	I0819 19:15:23.543428  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.543441  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:23.543450  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:23.543514  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:23.578265  438716 cri.go:89] found id: ""
	I0819 19:15:23.578293  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.578301  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:23.578308  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:23.578376  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:23.613951  438716 cri.go:89] found id: ""
	I0819 19:15:23.613981  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.613989  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:23.613996  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:23.614061  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:23.647387  438716 cri.go:89] found id: ""
	I0819 19:15:23.647418  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.647426  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:23.647433  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:23.647501  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:23.682482  438716 cri.go:89] found id: ""
	I0819 19:15:23.682510  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.682519  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:23.682530  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:23.682547  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:23.696601  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:23.696629  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:23.766762  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:23.766788  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:23.766804  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:23.850947  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:23.850988  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:23.891113  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:23.891146  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:26.444086  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:26.457774  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:26.457844  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:26.494525  438716 cri.go:89] found id: ""
	I0819 19:15:26.494552  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.494560  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:26.494567  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:26.494618  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:26.535317  438716 cri.go:89] found id: ""
	I0819 19:15:26.535348  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.535359  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:26.535368  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:26.535437  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:26.570853  438716 cri.go:89] found id: ""
	I0819 19:15:26.570886  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.570896  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:26.570920  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:26.570987  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:26.610739  438716 cri.go:89] found id: ""
	I0819 19:15:26.610773  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.610785  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:26.610794  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:26.610885  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:26.651274  438716 cri.go:89] found id: ""
	I0819 19:15:26.651303  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.651311  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:26.651318  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:26.651367  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:26.689963  438716 cri.go:89] found id: ""
	I0819 19:15:26.689993  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.690005  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:26.690013  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:26.690083  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:26.729433  438716 cri.go:89] found id: ""
	I0819 19:15:26.729465  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.729475  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:26.729483  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:26.729548  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:26.768386  438716 cri.go:89] found id: ""
	I0819 19:15:26.768418  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.768427  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:26.768436  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:26.768449  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:26.821526  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:26.821564  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:26.835714  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:26.835763  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:26.907981  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:26.908007  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:26.908023  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:26.991969  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:26.992008  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:29.529743  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:29.544812  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:29.544883  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:29.581455  438716 cri.go:89] found id: ""
	I0819 19:15:29.581486  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.581496  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:29.581503  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:29.581559  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:29.634542  438716 cri.go:89] found id: ""
	I0819 19:15:29.634576  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.634587  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:29.634596  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:29.634663  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:29.670388  438716 cri.go:89] found id: ""
	I0819 19:15:29.670422  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.670439  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:29.670449  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:29.670511  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:29.712267  438716 cri.go:89] found id: ""
	I0819 19:15:29.712293  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.712304  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:29.712313  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:29.712376  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:29.752392  438716 cri.go:89] found id: ""
	I0819 19:15:29.752423  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.752432  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:29.752438  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:29.752500  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:29.791734  438716 cri.go:89] found id: ""
	I0819 19:15:29.791763  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.791772  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:29.791778  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:29.791830  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:29.832882  438716 cri.go:89] found id: ""
	I0819 19:15:29.832910  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.832921  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:29.832929  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:29.832986  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:29.872035  438716 cri.go:89] found id: ""
	I0819 19:15:29.872068  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.872076  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:29.872086  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:29.872098  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:29.926551  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:29.926588  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:29.940500  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:29.940537  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:30.010327  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:30.010348  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:30.010368  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:30.090864  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:30.090910  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:32.636291  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:32.649264  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:32.649334  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:32.683746  438716 cri.go:89] found id: ""
	I0819 19:15:32.683774  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.683785  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:32.683794  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:32.683867  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:32.723805  438716 cri.go:89] found id: ""
	I0819 19:15:32.723838  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.723850  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:32.723858  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:32.723917  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:32.758119  438716 cri.go:89] found id: ""
	I0819 19:15:32.758148  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.758157  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:32.758164  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:32.758215  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:32.792726  438716 cri.go:89] found id: ""
	I0819 19:15:32.792754  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.792768  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:32.792775  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:32.792823  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:32.829180  438716 cri.go:89] found id: ""
	I0819 19:15:32.829208  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.829217  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:32.829224  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:32.829274  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:32.869045  438716 cri.go:89] found id: ""
	I0819 19:15:32.869081  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.869093  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:32.869102  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:32.869172  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:32.904780  438716 cri.go:89] found id: ""
	I0819 19:15:32.904803  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.904811  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:32.904818  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:32.904870  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:32.940846  438716 cri.go:89] found id: ""
	I0819 19:15:32.940876  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.940886  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:32.940900  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:32.940924  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:33.008569  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:33.008592  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:33.008606  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:33.092605  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:33.092657  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:33.133016  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:33.133045  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:33.188335  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:33.188376  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:35.704043  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:35.717647  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:35.717708  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:35.752337  438716 cri.go:89] found id: ""
	I0819 19:15:35.752364  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.752372  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:35.752378  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:35.752431  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:35.787233  438716 cri.go:89] found id: ""
	I0819 19:15:35.787261  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.787269  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:35.787275  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:35.787334  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:35.819641  438716 cri.go:89] found id: ""
	I0819 19:15:35.819667  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.819697  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:35.819705  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:35.819775  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:35.856133  438716 cri.go:89] found id: ""
	I0819 19:15:35.856160  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.856169  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:35.856176  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:35.856240  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:35.889390  438716 cri.go:89] found id: ""
	I0819 19:15:35.889422  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.889432  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:35.889438  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:35.889501  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:35.927477  438716 cri.go:89] found id: ""
	I0819 19:15:35.927519  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.927531  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:35.927539  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:35.927600  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:35.961787  438716 cri.go:89] found id: ""
	I0819 19:15:35.961825  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.961837  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:35.961845  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:35.961912  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:35.998350  438716 cri.go:89] found id: ""
	I0819 19:15:35.998384  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.998396  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:35.998407  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:35.998419  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:36.054352  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:36.054394  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:36.078278  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:36.078311  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:36.166388  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:36.166416  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:36.166433  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:36.247222  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:36.247269  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:38.786510  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:38.800306  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:38.800364  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:38.834555  438716 cri.go:89] found id: ""
	I0819 19:15:38.834583  438716 logs.go:276] 0 containers: []
	W0819 19:15:38.834591  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:38.834598  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:38.834648  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:38.869078  438716 cri.go:89] found id: ""
	I0819 19:15:38.869105  438716 logs.go:276] 0 containers: []
	W0819 19:15:38.869114  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:38.869120  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:38.869174  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:38.903702  438716 cri.go:89] found id: ""
	I0819 19:15:38.903728  438716 logs.go:276] 0 containers: []
	W0819 19:15:38.903736  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:38.903743  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:38.903795  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:38.938326  438716 cri.go:89] found id: ""
	I0819 19:15:38.938352  438716 logs.go:276] 0 containers: []
	W0819 19:15:38.938360  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:38.938367  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:38.938422  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:38.976032  438716 cri.go:89] found id: ""
	I0819 19:15:38.976063  438716 logs.go:276] 0 containers: []
	W0819 19:15:38.976075  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:38.976084  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:38.976149  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:39.009957  438716 cri.go:89] found id: ""
	I0819 19:15:39.009991  438716 logs.go:276] 0 containers: []
	W0819 19:15:39.010002  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:39.010011  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:39.010077  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:39.046381  438716 cri.go:89] found id: ""
	I0819 19:15:39.046408  438716 logs.go:276] 0 containers: []
	W0819 19:15:39.046416  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:39.046422  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:39.046474  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:39.083022  438716 cri.go:89] found id: ""
	I0819 19:15:39.083050  438716 logs.go:276] 0 containers: []
	W0819 19:15:39.083058  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:39.083067  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:39.083079  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:39.160731  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:39.160768  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:39.204846  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:39.204879  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:39.259248  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:39.259287  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:39.273764  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:39.273796  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:39.344477  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:41.845258  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:41.861691  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:41.861754  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:41.908235  438716 cri.go:89] found id: ""
	I0819 19:15:41.908269  438716 logs.go:276] 0 containers: []
	W0819 19:15:41.908281  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:41.908289  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:41.908357  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:41.965631  438716 cri.go:89] found id: ""
	I0819 19:15:41.965657  438716 logs.go:276] 0 containers: []
	W0819 19:15:41.965667  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:41.965673  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:41.965732  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:42.004540  438716 cri.go:89] found id: ""
	I0819 19:15:42.004569  438716 logs.go:276] 0 containers: []
	W0819 19:15:42.004578  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:42.004585  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:42.004650  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:42.042189  438716 cri.go:89] found id: ""
	I0819 19:15:42.042215  438716 logs.go:276] 0 containers: []
	W0819 19:15:42.042224  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:42.042231  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:42.042299  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:42.079313  438716 cri.go:89] found id: ""
	I0819 19:15:42.079349  438716 logs.go:276] 0 containers: []
	W0819 19:15:42.079361  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:42.079370  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:42.079450  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:42.116130  438716 cri.go:89] found id: ""
	I0819 19:15:42.116164  438716 logs.go:276] 0 containers: []
	W0819 19:15:42.116176  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:42.116184  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:42.116253  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:42.154886  438716 cri.go:89] found id: ""
	I0819 19:15:42.154919  438716 logs.go:276] 0 containers: []
	W0819 19:15:42.154928  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:42.154935  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:42.154987  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:42.191204  438716 cri.go:89] found id: ""
	I0819 19:15:42.191237  438716 logs.go:276] 0 containers: []
	W0819 19:15:42.191248  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:42.191258  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:42.191275  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:42.244395  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:42.244434  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:42.258029  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:42.258066  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:42.323461  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:42.323481  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:42.323498  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:42.401932  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:42.401969  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:44.943615  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:44.958243  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:44.958315  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:44.995181  438716 cri.go:89] found id: ""
	I0819 19:15:44.995217  438716 logs.go:276] 0 containers: []
	W0819 19:15:44.995236  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:44.995244  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:44.995309  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:45.030705  438716 cri.go:89] found id: ""
	I0819 19:15:45.030743  438716 logs.go:276] 0 containers: []
	W0819 19:15:45.030752  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:45.030759  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:45.030814  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:45.068186  438716 cri.go:89] found id: ""
	I0819 19:15:45.068215  438716 logs.go:276] 0 containers: []
	W0819 19:15:45.068224  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:45.068231  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:45.068314  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:45.105415  438716 cri.go:89] found id: ""
	I0819 19:15:45.105443  438716 logs.go:276] 0 containers: []
	W0819 19:15:45.105452  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:45.105458  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:45.105517  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:45.143628  438716 cri.go:89] found id: ""
	I0819 19:15:45.143662  438716 logs.go:276] 0 containers: []
	W0819 19:15:45.143694  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:45.143704  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:45.143771  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:45.184896  438716 cri.go:89] found id: ""
	I0819 19:15:45.184922  438716 logs.go:276] 0 containers: []
	W0819 19:15:45.184930  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:45.184937  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:45.185000  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:45.222599  438716 cri.go:89] found id: ""
	I0819 19:15:45.222631  438716 logs.go:276] 0 containers: []
	W0819 19:15:45.222639  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:45.222645  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:45.222700  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:45.260310  438716 cri.go:89] found id: ""
	I0819 19:15:45.260341  438716 logs.go:276] 0 containers: []
	W0819 19:15:45.260352  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:45.260361  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:45.260379  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:45.273687  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:45.273718  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:45.351367  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:45.351390  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:45.351407  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:45.428751  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:45.428787  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:45.468830  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:45.468869  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:48.023654  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:48.037206  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:48.037294  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:48.071647  438716 cri.go:89] found id: ""
	I0819 19:15:48.071686  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.071695  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:48.071704  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:48.071765  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:48.106542  438716 cri.go:89] found id: ""
	I0819 19:15:48.106575  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.106586  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:48.106596  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:48.106662  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:48.151917  438716 cri.go:89] found id: ""
	I0819 19:15:48.151949  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.151959  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:48.151966  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:48.152022  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:48.190095  438716 cri.go:89] found id: ""
	I0819 19:15:48.190125  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.190137  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:48.190146  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:48.190211  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:48.227193  438716 cri.go:89] found id: ""
	I0819 19:15:48.227228  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.227240  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:48.227248  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:48.227317  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:48.261353  438716 cri.go:89] found id: ""
	I0819 19:15:48.261386  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.261396  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:48.261403  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:48.261455  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:48.295749  438716 cri.go:89] found id: ""
	I0819 19:15:48.295782  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.295794  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:48.295803  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:48.295874  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:48.338350  438716 cri.go:89] found id: ""
	I0819 19:15:48.338383  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.338394  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:48.338404  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:48.338420  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:48.420705  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:48.420749  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:48.464114  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:48.464153  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:48.519461  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:48.519505  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:48.534324  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:48.534357  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:48.603580  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:51.104343  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:51.117552  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:51.117629  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:51.150630  438716 cri.go:89] found id: ""
	I0819 19:15:51.150665  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.150677  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:51.150691  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:51.150765  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:51.184316  438716 cri.go:89] found id: ""
	I0819 19:15:51.184346  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.184356  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:51.184362  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:51.184410  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:51.221252  438716 cri.go:89] found id: ""
	I0819 19:15:51.221277  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.221286  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:51.221292  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:51.221349  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:51.255727  438716 cri.go:89] found id: ""
	I0819 19:15:51.255755  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.255763  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:51.255769  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:51.255823  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:51.290615  438716 cri.go:89] found id: ""
	I0819 19:15:51.290651  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.290660  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:51.290667  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:51.290721  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:51.326895  438716 cri.go:89] found id: ""
	I0819 19:15:51.326922  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.326930  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:51.326937  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:51.326987  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:51.365516  438716 cri.go:89] found id: ""
	I0819 19:15:51.365547  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.365558  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:51.365566  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:51.365632  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:51.399002  438716 cri.go:89] found id: ""
	I0819 19:15:51.399030  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.399038  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:51.399048  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:51.399059  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:51.453481  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:51.453524  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:51.467246  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:51.467277  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:51.548547  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:51.548578  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:51.548595  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:51.635627  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:51.635670  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:54.175003  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:54.190462  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:54.190537  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:54.232140  438716 cri.go:89] found id: ""
	I0819 19:15:54.232168  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.232178  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:54.232186  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:54.232254  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:54.267700  438716 cri.go:89] found id: ""
	I0819 19:15:54.267732  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.267742  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:54.267748  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:54.267807  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:54.306272  438716 cri.go:89] found id: ""
	I0819 19:15:54.306300  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.306308  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:54.306315  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:54.306368  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:54.341503  438716 cri.go:89] found id: ""
	I0819 19:15:54.341536  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.341549  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:54.341556  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:54.341609  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:54.375535  438716 cri.go:89] found id: ""
	I0819 19:15:54.375570  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.375582  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:54.375591  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:54.375661  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:54.409611  438716 cri.go:89] found id: ""
	I0819 19:15:54.409641  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.409653  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:54.409662  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:54.409731  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:54.444318  438716 cri.go:89] found id: ""
	I0819 19:15:54.444346  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.444358  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:54.444366  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:54.444425  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:54.480746  438716 cri.go:89] found id: ""
	I0819 19:15:54.480777  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.480789  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:54.480802  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:54.480817  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:54.534209  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:54.534245  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:54.549557  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:54.549598  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:54.625086  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:54.625111  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:54.625136  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:54.705549  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:54.705589  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:57.257440  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:57.276724  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:57.276812  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:57.319032  438716 cri.go:89] found id: ""
	I0819 19:15:57.319062  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.319073  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:57.319081  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:57.319163  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:57.357093  438716 cri.go:89] found id: ""
	I0819 19:15:57.357129  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.357140  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:57.357152  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:57.357222  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:57.393978  438716 cri.go:89] found id: ""
	I0819 19:15:57.394013  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.394025  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:57.394033  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:57.394102  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:57.428731  438716 cri.go:89] found id: ""
	I0819 19:15:57.428760  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.428768  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:57.428775  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:57.428824  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:57.467772  438716 cri.go:89] found id: ""
	I0819 19:15:57.467810  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.467822  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:57.467832  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:57.467904  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:57.502398  438716 cri.go:89] found id: ""
	I0819 19:15:57.502434  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.502444  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:57.502450  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:57.502503  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:57.536729  438716 cri.go:89] found id: ""
	I0819 19:15:57.536760  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.536771  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:57.536779  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:57.536845  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:57.574738  438716 cri.go:89] found id: ""
	I0819 19:15:57.574762  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.574770  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:57.574780  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:57.574793  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:57.630063  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:57.630113  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:57.643083  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:57.643111  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:57.725081  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:57.725104  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:57.725118  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:57.805065  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:57.805105  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:00.344557  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:00.357940  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:00.358005  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:00.399319  438716 cri.go:89] found id: ""
	I0819 19:16:00.399355  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.399368  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:00.399377  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:00.399446  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:00.444223  438716 cri.go:89] found id: ""
	I0819 19:16:00.444254  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.444264  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:00.444271  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:00.444323  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:00.479903  438716 cri.go:89] found id: ""
	I0819 19:16:00.479932  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.479942  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:00.479948  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:00.480003  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:00.515923  438716 cri.go:89] found id: ""
	I0819 19:16:00.515954  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.515966  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:00.515974  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:00.516043  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:00.551319  438716 cri.go:89] found id: ""
	I0819 19:16:00.551348  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.551360  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:00.551370  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:00.551434  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:00.587847  438716 cri.go:89] found id: ""
	I0819 19:16:00.587882  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.587892  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:00.587901  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:00.587976  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:00.624769  438716 cri.go:89] found id: ""
	I0819 19:16:00.624800  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.624812  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:00.624820  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:00.624894  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:00.659300  438716 cri.go:89] found id: ""
	I0819 19:16:00.659330  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.659342  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:00.659355  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:00.659371  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:00.739073  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:00.739113  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:00.779087  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:00.779116  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:00.831864  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:00.831914  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:00.845832  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:00.845863  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:00.920622  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:03.420751  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:03.434599  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:03.434664  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:03.469288  438716 cri.go:89] found id: ""
	I0819 19:16:03.469326  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.469349  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:03.469372  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:03.469445  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:03.507885  438716 cri.go:89] found id: ""
	I0819 19:16:03.507911  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.507927  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:03.507934  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:03.507987  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:03.543805  438716 cri.go:89] found id: ""
	I0819 19:16:03.543837  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.543847  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:03.543854  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:03.543928  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:03.584060  438716 cri.go:89] found id: ""
	I0819 19:16:03.584093  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.584105  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:03.584114  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:03.584202  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:03.619724  438716 cri.go:89] found id: ""
	I0819 19:16:03.619758  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.619769  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:03.619776  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:03.619854  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:03.657180  438716 cri.go:89] found id: ""
	I0819 19:16:03.657213  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.657225  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:03.657234  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:03.657303  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:03.695099  438716 cri.go:89] found id: ""
	I0819 19:16:03.695125  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.695134  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:03.695139  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:03.695193  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:03.730263  438716 cri.go:89] found id: ""
	I0819 19:16:03.730291  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.730302  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:03.730314  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:03.730331  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:03.780776  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:03.780816  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:03.795381  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:03.795419  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:03.869995  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:03.870016  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:03.870029  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:03.949654  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:03.949691  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:06.493589  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:06.506758  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:06.506834  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:06.545325  438716 cri.go:89] found id: ""
	I0819 19:16:06.545357  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.545370  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:06.545378  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:06.545443  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:06.581708  438716 cri.go:89] found id: ""
	I0819 19:16:06.581741  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.581753  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:06.581761  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:06.581828  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:06.626543  438716 cri.go:89] found id: ""
	I0819 19:16:06.626588  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.626600  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:06.626609  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:06.626676  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:06.662466  438716 cri.go:89] found id: ""
	I0819 19:16:06.662499  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.662509  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:06.662518  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:06.662585  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:06.701584  438716 cri.go:89] found id: ""
	I0819 19:16:06.701619  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.701628  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:06.701635  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:06.701688  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:06.736245  438716 cri.go:89] found id: ""
	I0819 19:16:06.736280  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.736292  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:06.736300  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:06.736392  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:06.774411  438716 cri.go:89] found id: ""
	I0819 19:16:06.774439  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.774447  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:06.774454  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:06.774510  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:06.809560  438716 cri.go:89] found id: ""
	I0819 19:16:06.809597  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.809609  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:06.809624  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:06.809648  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:06.884841  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:06.884862  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:06.884878  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:06.971467  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:06.971507  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:07.010737  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:07.010767  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:07.063807  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:07.063846  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:09.578451  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:09.591643  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:09.591737  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:09.625607  438716 cri.go:89] found id: ""
	I0819 19:16:09.625639  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.625650  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:09.625659  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:09.625727  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:09.669145  438716 cri.go:89] found id: ""
	I0819 19:16:09.669177  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.669185  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:09.669191  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:09.669254  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:09.707035  438716 cri.go:89] found id: ""
	I0819 19:16:09.707064  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.707073  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:09.707080  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:09.707142  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:09.742089  438716 cri.go:89] found id: ""
	I0819 19:16:09.742116  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.742125  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:09.742132  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:09.742193  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:09.782736  438716 cri.go:89] found id: ""
	I0819 19:16:09.782774  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.782785  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:09.782794  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:09.782860  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:09.818003  438716 cri.go:89] found id: ""
	I0819 19:16:09.818031  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.818040  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:09.818047  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:09.818110  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:09.852716  438716 cri.go:89] found id: ""
	I0819 19:16:09.852748  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.852757  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:09.852764  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:09.852828  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:09.887176  438716 cri.go:89] found id: ""
	I0819 19:16:09.887206  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.887218  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:09.887230  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:09.887247  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:09.901547  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:09.901573  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:09.969153  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:09.969190  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:09.969205  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:10.053777  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:10.053820  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:10.100888  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:10.100916  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:12.655112  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:12.667824  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:12.667897  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:12.702337  438716 cri.go:89] found id: ""
	I0819 19:16:12.702364  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.702373  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:12.702379  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:12.702432  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:12.736628  438716 cri.go:89] found id: ""
	I0819 19:16:12.736655  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.736663  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:12.736669  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:12.736720  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:12.773598  438716 cri.go:89] found id: ""
	I0819 19:16:12.773628  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.773636  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:12.773643  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:12.773695  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:12.806584  438716 cri.go:89] found id: ""
	I0819 19:16:12.806620  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.806632  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:12.806640  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:12.806723  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:12.840535  438716 cri.go:89] found id: ""
	I0819 19:16:12.840561  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.840569  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:12.840575  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:12.840639  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:12.877680  438716 cri.go:89] found id: ""
	I0819 19:16:12.877712  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.877721  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:12.877728  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:12.877779  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:12.912226  438716 cri.go:89] found id: ""
	I0819 19:16:12.912253  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.912264  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:12.912272  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:12.912342  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:12.953463  438716 cri.go:89] found id: ""
	I0819 19:16:12.953493  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.953504  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:12.953524  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:12.953542  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:13.007648  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:13.007691  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:13.022452  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:13.022494  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:13.092411  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:13.092439  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:13.092455  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:13.168711  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:13.168750  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:15.711501  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:15.724841  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:15.724921  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:15.760120  438716 cri.go:89] found id: ""
	I0819 19:16:15.760149  438716 logs.go:276] 0 containers: []
	W0819 19:16:15.760158  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:15.760166  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:15.760234  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:15.794959  438716 cri.go:89] found id: ""
	I0819 19:16:15.794988  438716 logs.go:276] 0 containers: []
	W0819 19:16:15.794996  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:15.795002  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:15.795054  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:15.842776  438716 cri.go:89] found id: ""
	I0819 19:16:15.842804  438716 logs.go:276] 0 containers: []
	W0819 19:16:15.842814  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:15.842820  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:15.842874  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:15.882134  438716 cri.go:89] found id: ""
	I0819 19:16:15.882167  438716 logs.go:276] 0 containers: []
	W0819 19:16:15.882178  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:15.882187  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:15.882251  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:15.919296  438716 cri.go:89] found id: ""
	I0819 19:16:15.919325  438716 logs.go:276] 0 containers: []
	W0819 19:16:15.919336  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:15.919345  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:15.919409  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:15.956401  438716 cri.go:89] found id: ""
	I0819 19:16:15.956429  438716 logs.go:276] 0 containers: []
	W0819 19:16:15.956437  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:15.956444  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:15.956507  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:15.994271  438716 cri.go:89] found id: ""
	I0819 19:16:15.994304  438716 logs.go:276] 0 containers: []
	W0819 19:16:15.994314  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:15.994320  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:15.994378  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:16.033685  438716 cri.go:89] found id: ""
	I0819 19:16:16.033714  438716 logs.go:276] 0 containers: []
	W0819 19:16:16.033724  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:16.033736  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:16.033754  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:16.083929  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:16.083964  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:16.107309  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:16.107342  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:16.193657  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:16.193681  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:16.193697  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:16.276974  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:16.277016  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:18.818532  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:18.831586  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:18.831655  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:18.866663  438716 cri.go:89] found id: ""
	I0819 19:16:18.866689  438716 logs.go:276] 0 containers: []
	W0819 19:16:18.866700  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:18.866709  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:18.866769  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:18.900711  438716 cri.go:89] found id: ""
	I0819 19:16:18.900746  438716 logs.go:276] 0 containers: []
	W0819 19:16:18.900757  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:18.900765  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:18.900849  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:18.935156  438716 cri.go:89] found id: ""
	I0819 19:16:18.935179  438716 logs.go:276] 0 containers: []
	W0819 19:16:18.935186  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:18.935193  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:18.935246  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:18.973853  438716 cri.go:89] found id: ""
	I0819 19:16:18.973889  438716 logs.go:276] 0 containers: []
	W0819 19:16:18.973902  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:18.973911  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:18.973978  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:19.014212  438716 cri.go:89] found id: ""
	I0819 19:16:19.014241  438716 logs.go:276] 0 containers: []
	W0819 19:16:19.014250  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:19.014255  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:19.014317  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:19.056089  438716 cri.go:89] found id: ""
	I0819 19:16:19.056125  438716 logs.go:276] 0 containers: []
	W0819 19:16:19.056137  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:19.056146  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:19.056211  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:19.091372  438716 cri.go:89] found id: ""
	I0819 19:16:19.091399  438716 logs.go:276] 0 containers: []
	W0819 19:16:19.091411  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:19.091420  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:19.091478  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:19.129737  438716 cri.go:89] found id: ""
	I0819 19:16:19.129767  438716 logs.go:276] 0 containers: []
	W0819 19:16:19.129777  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:19.129787  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:19.129800  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:19.207325  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:19.207360  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:19.247780  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:19.247816  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:19.302496  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:19.302543  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:19.317706  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:19.317739  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:19.395029  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:21.895538  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:21.910595  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:21.910658  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:21.948363  438716 cri.go:89] found id: ""
	I0819 19:16:21.948398  438716 logs.go:276] 0 containers: []
	W0819 19:16:21.948410  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:21.948419  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:21.948492  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:21.983391  438716 cri.go:89] found id: ""
	I0819 19:16:21.983428  438716 logs.go:276] 0 containers: []
	W0819 19:16:21.983440  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:21.983449  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:21.983520  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:22.022383  438716 cri.go:89] found id: ""
	I0819 19:16:22.022415  438716 logs.go:276] 0 containers: []
	W0819 19:16:22.022427  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:22.022436  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:22.022493  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:22.060676  438716 cri.go:89] found id: ""
	I0819 19:16:22.060707  438716 logs.go:276] 0 containers: []
	W0819 19:16:22.060716  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:22.060725  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:22.060778  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:22.095188  438716 cri.go:89] found id: ""
	I0819 19:16:22.095218  438716 logs.go:276] 0 containers: []
	W0819 19:16:22.095227  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:22.095234  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:22.095300  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:22.131164  438716 cri.go:89] found id: ""
	I0819 19:16:22.131192  438716 logs.go:276] 0 containers: []
	W0819 19:16:22.131200  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:22.131209  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:22.131275  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:22.166539  438716 cri.go:89] found id: ""
	I0819 19:16:22.166566  438716 logs.go:276] 0 containers: []
	W0819 19:16:22.166573  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:22.166580  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:22.166643  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:22.205604  438716 cri.go:89] found id: ""
	I0819 19:16:22.205631  438716 logs.go:276] 0 containers: []
	W0819 19:16:22.205640  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:22.205649  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:22.205662  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:22.265650  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:22.265689  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:22.280401  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:22.280443  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:22.356818  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:22.356851  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:22.356872  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:22.437678  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:22.437719  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:24.979655  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:24.993462  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:24.993526  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:25.029955  438716 cri.go:89] found id: ""
	I0819 19:16:25.029983  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.029992  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:25.029999  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:25.030049  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:25.068478  438716 cri.go:89] found id: ""
	I0819 19:16:25.068507  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.068518  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:25.068527  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:25.068594  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:25.105209  438716 cri.go:89] found id: ""
	I0819 19:16:25.105238  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.105247  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:25.105256  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:25.105327  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:25.143166  438716 cri.go:89] found id: ""
	I0819 19:16:25.143203  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.143218  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:25.143225  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:25.143279  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:25.177993  438716 cri.go:89] found id: ""
	I0819 19:16:25.178023  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.178035  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:25.178044  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:25.178129  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:25.216473  438716 cri.go:89] found id: ""
	I0819 19:16:25.216501  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.216523  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:25.216540  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:25.216603  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:25.251454  438716 cri.go:89] found id: ""
	I0819 19:16:25.251486  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.251495  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:25.251501  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:25.251555  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:25.287145  438716 cri.go:89] found id: ""
	I0819 19:16:25.287179  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.287188  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:25.287198  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:25.287210  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:25.371571  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:25.371619  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:25.418247  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:25.418277  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:25.472209  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:25.472248  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:25.486286  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:25.486315  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:25.554470  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:28.055382  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:28.068750  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:28.068827  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:28.101856  438716 cri.go:89] found id: ""
	I0819 19:16:28.101891  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.101903  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:28.101912  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:28.101977  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:28.136402  438716 cri.go:89] found id: ""
	I0819 19:16:28.136437  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.136449  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:28.136460  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:28.136528  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:28.171766  438716 cri.go:89] found id: ""
	I0819 19:16:28.171795  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.171803  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:28.171809  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:28.171864  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:28.206228  438716 cri.go:89] found id: ""
	I0819 19:16:28.206256  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.206264  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:28.206272  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:28.206337  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:28.248877  438716 cri.go:89] found id: ""
	I0819 19:16:28.248912  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.248923  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:28.248931  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:28.249002  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:28.290160  438716 cri.go:89] found id: ""
	I0819 19:16:28.290201  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.290212  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:28.290221  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:28.290287  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:28.340413  438716 cri.go:89] found id: ""
	I0819 19:16:28.340445  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.340454  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:28.340461  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:28.340513  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:28.385486  438716 cri.go:89] found id: ""
	I0819 19:16:28.385513  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.385521  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:28.385532  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:28.385544  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:28.441987  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:28.442029  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:28.456509  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:28.456538  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:28.527941  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:28.527976  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:28.527993  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:28.612696  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:28.612738  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:31.154773  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:31.168718  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:31.168789  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:31.205365  438716 cri.go:89] found id: ""
	I0819 19:16:31.205399  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.205411  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:31.205419  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:31.205496  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:31.238829  438716 cri.go:89] found id: ""
	I0819 19:16:31.238871  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.238879  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:31.238886  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:31.238936  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:31.273229  438716 cri.go:89] found id: ""
	I0819 19:16:31.273259  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.273304  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:31.273313  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:31.273377  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:31.309559  438716 cri.go:89] found id: ""
	I0819 19:16:31.309601  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.309613  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:31.309622  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:31.309689  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:31.344939  438716 cri.go:89] found id: ""
	I0819 19:16:31.344971  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.344981  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:31.344987  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:31.345043  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:31.382423  438716 cri.go:89] found id: ""
	I0819 19:16:31.382455  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.382468  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:31.382474  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:31.382525  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:31.420148  438716 cri.go:89] found id: ""
	I0819 19:16:31.420174  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.420184  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:31.420192  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:31.420262  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:31.455691  438716 cri.go:89] found id: ""
	I0819 19:16:31.455720  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.455730  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:31.455740  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:31.455753  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:31.509501  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:31.509549  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:31.523650  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:31.523693  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:31.591535  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:31.591557  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:31.591574  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:31.674038  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:31.674077  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:34.216506  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:34.232782  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:34.232875  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:34.286103  438716 cri.go:89] found id: ""
	I0819 19:16:34.286136  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.286147  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:34.286156  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:34.286221  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:34.324193  438716 cri.go:89] found id: ""
	I0819 19:16:34.324220  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.324229  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:34.324235  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:34.324292  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:34.382777  438716 cri.go:89] found id: ""
	I0819 19:16:34.382804  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.382814  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:34.382822  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:34.382887  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:34.420714  438716 cri.go:89] found id: ""
	I0819 19:16:34.420743  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.420753  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:34.420771  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:34.420840  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:34.455338  438716 cri.go:89] found id: ""
	I0819 19:16:34.455369  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.455381  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:34.455391  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:34.455467  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:34.489528  438716 cri.go:89] found id: ""
	I0819 19:16:34.489566  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.489575  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:34.489581  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:34.489634  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:34.523830  438716 cri.go:89] found id: ""
	I0819 19:16:34.523857  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.523866  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:34.523873  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:34.523940  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:34.559023  438716 cri.go:89] found id: ""
	I0819 19:16:34.559052  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.559063  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:34.559077  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:34.559092  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:34.639116  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:34.639159  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:34.675990  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:34.676017  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:34.730900  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:34.730935  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:34.744938  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:34.744964  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:34.816267  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:37.317314  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:37.331915  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:37.331982  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:37.370233  438716 cri.go:89] found id: ""
	I0819 19:16:37.370261  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.370269  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:37.370276  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:37.370343  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:37.409042  438716 cri.go:89] found id: ""
	I0819 19:16:37.409071  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.409082  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:37.409090  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:37.409161  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:37.445903  438716 cri.go:89] found id: ""
	I0819 19:16:37.445932  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.445941  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:37.445948  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:37.445999  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:37.484275  438716 cri.go:89] found id: ""
	I0819 19:16:37.484318  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.484328  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:37.484334  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:37.484393  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:37.528131  438716 cri.go:89] found id: ""
	I0819 19:16:37.528161  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.528174  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:37.528180  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:37.528243  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:37.563374  438716 cri.go:89] found id: ""
	I0819 19:16:37.563406  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.563414  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:37.563421  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:37.563473  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:37.597234  438716 cri.go:89] found id: ""
	I0819 19:16:37.597260  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.597267  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:37.597274  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:37.597329  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:37.634809  438716 cri.go:89] found id: ""
	I0819 19:16:37.634845  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.634854  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:37.634864  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:37.634879  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:37.704354  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:37.704380  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:37.704396  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:37.788606  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:37.788646  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:37.830486  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:37.830513  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:37.890642  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:37.890681  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:40.405473  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:40.420019  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:40.420094  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:40.458558  438716 cri.go:89] found id: ""
	I0819 19:16:40.458586  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.458598  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:40.458606  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:40.458671  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:40.500353  438716 cri.go:89] found id: ""
	I0819 19:16:40.500379  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.500388  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:40.500394  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:40.500445  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:40.534281  438716 cri.go:89] found id: ""
	I0819 19:16:40.534307  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.534316  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:40.534322  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:40.534379  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:40.569537  438716 cri.go:89] found id: ""
	I0819 19:16:40.569568  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.569578  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:40.569587  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:40.569654  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:40.603066  438716 cri.go:89] found id: ""
	I0819 19:16:40.603097  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.603110  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:40.603118  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:40.603171  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:40.637598  438716 cri.go:89] found id: ""
	I0819 19:16:40.637628  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.637637  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:40.637643  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:40.637704  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:40.673583  438716 cri.go:89] found id: ""
	I0819 19:16:40.673616  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.673629  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:40.673637  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:40.673692  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:40.708324  438716 cri.go:89] found id: ""
	I0819 19:16:40.708354  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.708363  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:40.708373  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:40.708387  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:40.789743  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:40.789782  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:40.830849  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:40.830884  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:40.882662  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:40.882700  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:40.896843  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:40.896869  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:40.969491  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:43.470579  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:43.483791  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:43.483876  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:43.523764  438716 cri.go:89] found id: ""
	I0819 19:16:43.523797  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.523809  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:43.523817  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:43.523882  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:43.557925  438716 cri.go:89] found id: ""
	I0819 19:16:43.557953  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.557960  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:43.557966  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:43.558017  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:43.591324  438716 cri.go:89] found id: ""
	I0819 19:16:43.591355  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.591364  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:43.591370  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:43.591421  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:43.625798  438716 cri.go:89] found id: ""
	I0819 19:16:43.625826  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.625834  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:43.625840  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:43.625898  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:43.659787  438716 cri.go:89] found id: ""
	I0819 19:16:43.659815  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.659823  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:43.659830  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:43.659882  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:43.692982  438716 cri.go:89] found id: ""
	I0819 19:16:43.693008  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.693017  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:43.693024  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:43.693075  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:43.726059  438716 cri.go:89] found id: ""
	I0819 19:16:43.726092  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.726104  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:43.726113  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:43.726187  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:43.760906  438716 cri.go:89] found id: ""
	I0819 19:16:43.760947  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.760958  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:43.760971  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:43.760994  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:43.812249  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:43.812285  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:43.826538  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:43.826566  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:43.894904  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:43.894926  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:43.894941  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:43.975746  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:43.975796  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:46.515329  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:46.529088  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:46.529170  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:46.564525  438716 cri.go:89] found id: ""
	I0819 19:16:46.564557  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.564570  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:46.564578  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:46.564647  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:46.598457  438716 cri.go:89] found id: ""
	I0819 19:16:46.598485  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.598494  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:46.598499  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:46.598549  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:46.631767  438716 cri.go:89] found id: ""
	I0819 19:16:46.631798  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.631807  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:46.631814  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:46.631867  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:46.664978  438716 cri.go:89] found id: ""
	I0819 19:16:46.665013  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.665026  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:46.665034  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:46.665094  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:46.701024  438716 cri.go:89] found id: ""
	I0819 19:16:46.701052  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.701061  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:46.701067  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:46.701132  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:46.735834  438716 cri.go:89] found id: ""
	I0819 19:16:46.735874  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.735886  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:46.735894  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:46.735978  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:46.773392  438716 cri.go:89] found id: ""
	I0819 19:16:46.773426  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.773437  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:46.773445  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:46.773498  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:46.819800  438716 cri.go:89] found id: ""
	I0819 19:16:46.819829  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.819841  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:46.819869  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:46.819889  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:46.860633  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:46.860669  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:46.911895  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:46.911936  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:46.927388  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:46.927422  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:46.998601  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:46.998628  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:46.998645  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:49.585303  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:49.598962  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:49.599032  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:49.631891  438716 cri.go:89] found id: ""
	I0819 19:16:49.631920  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.631931  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:49.631940  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:49.631998  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:49.671731  438716 cri.go:89] found id: ""
	I0819 19:16:49.671761  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.671777  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:49.671786  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:49.671846  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:49.707517  438716 cri.go:89] found id: ""
	I0819 19:16:49.707556  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.707568  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:49.707578  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:49.707651  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:49.744255  438716 cri.go:89] found id: ""
	I0819 19:16:49.744289  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.744299  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:49.744305  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:49.744357  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:49.779224  438716 cri.go:89] found id: ""
	I0819 19:16:49.779252  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.779259  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:49.779266  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:49.779322  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:49.815641  438716 cri.go:89] found id: ""
	I0819 19:16:49.815689  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.815701  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:49.815711  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:49.815769  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:49.851861  438716 cri.go:89] found id: ""
	I0819 19:16:49.851894  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.851906  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:49.851915  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:49.851984  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:49.888140  438716 cri.go:89] found id: ""
	I0819 19:16:49.888173  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.888186  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:49.888199  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:49.888215  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:49.940389  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:49.940430  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:49.954519  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:49.954553  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:50.028462  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:50.028486  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:50.028502  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:50.108319  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:50.108362  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:52.647146  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:52.660468  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:52.660558  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:52.697665  438716 cri.go:89] found id: ""
	I0819 19:16:52.697703  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.697719  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:52.697727  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:52.697786  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:52.739169  438716 cri.go:89] found id: ""
	I0819 19:16:52.739203  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.739214  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:52.739222  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:52.739289  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:52.776580  438716 cri.go:89] found id: ""
	I0819 19:16:52.776610  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.776619  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:52.776630  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:52.776683  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:52.813443  438716 cri.go:89] found id: ""
	I0819 19:16:52.813475  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.813488  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:52.813497  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:52.813557  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:52.848035  438716 cri.go:89] found id: ""
	I0819 19:16:52.848064  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.848075  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:52.848082  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:52.848150  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:52.881814  438716 cri.go:89] found id: ""
	I0819 19:16:52.881841  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.881858  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:52.881867  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:52.881930  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:52.922179  438716 cri.go:89] found id: ""
	I0819 19:16:52.922202  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.922210  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:52.922216  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:52.922277  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:52.958110  438716 cri.go:89] found id: ""
	I0819 19:16:52.958136  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.958144  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:52.958153  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:52.958167  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:53.008553  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:53.008592  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:53.022826  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:53.022860  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:53.094940  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:53.094967  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:53.094982  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:53.173877  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:53.173920  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:55.716096  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:55.734732  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:55.734817  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:55.780484  438716 cri.go:89] found id: ""
	I0819 19:16:55.780514  438716 logs.go:276] 0 containers: []
	W0819 19:16:55.780525  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:55.780534  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:55.780607  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:55.821755  438716 cri.go:89] found id: ""
	I0819 19:16:55.821778  438716 logs.go:276] 0 containers: []
	W0819 19:16:55.821786  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:55.821792  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:55.821855  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:55.861032  438716 cri.go:89] found id: ""
	I0819 19:16:55.861066  438716 logs.go:276] 0 containers: []
	W0819 19:16:55.861077  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:55.861086  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:55.861159  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:55.909978  438716 cri.go:89] found id: ""
	I0819 19:16:55.910004  438716 logs.go:276] 0 containers: []
	W0819 19:16:55.910015  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:55.910024  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:55.910087  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:55.956603  438716 cri.go:89] found id: ""
	I0819 19:16:55.956634  438716 logs.go:276] 0 containers: []
	W0819 19:16:55.956645  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:55.956653  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:55.956722  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:55.999176  438716 cri.go:89] found id: ""
	I0819 19:16:55.999203  438716 logs.go:276] 0 containers: []
	W0819 19:16:55.999216  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:55.999225  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:55.999286  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:56.035141  438716 cri.go:89] found id: ""
	I0819 19:16:56.035172  438716 logs.go:276] 0 containers: []
	W0819 19:16:56.035183  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:56.035192  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:56.035255  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:56.076152  438716 cri.go:89] found id: ""
	I0819 19:16:56.076185  438716 logs.go:276] 0 containers: []
	W0819 19:16:56.076197  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:56.076209  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:56.076226  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:56.136624  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:56.136671  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:56.151867  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:56.151902  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:56.231650  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:56.231696  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:56.231713  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:56.307203  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:56.307247  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:58.848295  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:58.861984  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:58.862172  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:58.900089  438716 cri.go:89] found id: ""
	I0819 19:16:58.900114  438716 logs.go:276] 0 containers: []
	W0819 19:16:58.900124  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:58.900132  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:58.900203  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:58.932528  438716 cri.go:89] found id: ""
	I0819 19:16:58.932551  438716 logs.go:276] 0 containers: []
	W0819 19:16:58.932559  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:58.932565  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:58.932618  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:58.967255  438716 cri.go:89] found id: ""
	I0819 19:16:58.967283  438716 logs.go:276] 0 containers: []
	W0819 19:16:58.967291  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:58.967298  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:58.967349  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:59.000887  438716 cri.go:89] found id: ""
	I0819 19:16:59.000923  438716 logs.go:276] 0 containers: []
	W0819 19:16:59.000934  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:59.000942  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:59.001009  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:59.041386  438716 cri.go:89] found id: ""
	I0819 19:16:59.041417  438716 logs.go:276] 0 containers: []
	W0819 19:16:59.041428  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:59.041436  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:59.041499  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:59.080036  438716 cri.go:89] found id: ""
	I0819 19:16:59.080078  438716 logs.go:276] 0 containers: []
	W0819 19:16:59.080090  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:59.080099  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:59.080168  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:59.113946  438716 cri.go:89] found id: ""
	I0819 19:16:59.113982  438716 logs.go:276] 0 containers: []
	W0819 19:16:59.113995  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:59.114004  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:59.114066  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:59.155413  438716 cri.go:89] found id: ""
	I0819 19:16:59.155437  438716 logs.go:276] 0 containers: []
	W0819 19:16:59.155446  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:59.155456  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:59.155477  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:59.223795  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:59.223815  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:59.223828  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:59.304516  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:59.304554  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:59.344975  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:59.345005  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:59.397751  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:59.397789  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:17:01.914433  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:17:01.927468  438716 kubeadm.go:597] duration metric: took 4m3.453401239s to restartPrimaryControlPlane
	W0819 19:17:01.927564  438716 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 19:17:01.927600  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 19:17:02.647971  438716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:17:02.665946  438716 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 19:17:02.676665  438716 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:17:02.686818  438716 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:17:02.686840  438716 kubeadm.go:157] found existing configuration files:
	
	I0819 19:17:02.686885  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 19:17:02.697160  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:17:02.697228  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:17:02.707774  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 19:17:02.717251  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:17:02.717310  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:17:02.727481  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 19:17:02.738085  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:17:02.738141  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:17:02.749286  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 19:17:02.759965  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:17:02.760025  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
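	The cleanup above checks each kubeconfig under /etc/kubernetes for the expected API-server URL and removes any file that does not reference it (here the files simply do not exist, so every grep fails and every file is "removed"). A rough Go equivalent of that step, assuming local execution and using 'grep -q' rather than the plain grep shown in the log:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			// grep exits non-zero when the pattern is not found or the file does not
			// exist, which is the "may not be in ... - will remove" case in the log above.
			if err := exec.Command("grep", "-q", endpoint, f).Run(); err != nil {
				fmt.Printf("%s does not reference %s, removing\n", f, endpoint)
				_ = os.Remove(f)
			}
		}
	}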
	I0819 19:17:02.770753  438716 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 19:17:02.835857  438716 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 19:17:02.835940  438716 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 19:17:02.983775  438716 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 19:17:02.983974  438716 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 19:17:02.984149  438716 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 19:17:03.173404  438716 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 19:17:03.175412  438716 out.go:235]   - Generating certificates and keys ...
	I0819 19:17:03.175520  438716 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 19:17:03.175659  438716 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 19:17:03.175805  438716 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 19:17:03.175913  438716 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 19:17:03.176021  438716 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 19:17:03.176125  438716 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 19:17:03.176626  438716 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 19:17:03.177624  438716 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 19:17:03.178399  438716 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 19:17:03.179325  438716 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 19:17:03.179599  438716 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 19:17:03.179702  438716 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 19:17:03.416467  438716 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 19:17:03.505378  438716 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 19:17:03.588959  438716 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 19:17:03.680602  438716 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 19:17:03.697717  438716 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 19:17:03.700436  438716 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 19:17:03.700579  438716 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 19:17:03.858804  438716 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 19:17:03.861395  438716 out.go:235]   - Booting up control plane ...
	I0819 19:17:03.861520  438716 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 19:17:03.877387  438716 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 19:17:03.878611  438716 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 19:17:03.882842  438716 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 19:17:03.887436  438716 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 19:17:43.889122  438716 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 19:17:43.889226  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:17:43.889441  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:17:48.889647  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:17:48.889896  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:17:58.890611  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:17:58.890832  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:18:18.891960  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:18:18.892243  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:18:58.894609  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:18:58.894854  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:18:58.894869  438716 kubeadm.go:310] 
	I0819 19:18:58.894912  438716 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 19:18:58.894967  438716 kubeadm.go:310] 		timed out waiting for the condition
	I0819 19:18:58.894981  438716 kubeadm.go:310] 
	I0819 19:18:58.895024  438716 kubeadm.go:310] 	This error is likely caused by:
	I0819 19:18:58.895072  438716 kubeadm.go:310] 		- The kubelet is not running
	I0819 19:18:58.895344  438716 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 19:18:58.895388  438716 kubeadm.go:310] 
	I0819 19:18:58.895518  438716 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 19:18:58.895613  438716 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 19:18:58.895668  438716 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 19:18:58.895695  438716 kubeadm.go:310] 
	I0819 19:18:58.895839  438716 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 19:18:58.895959  438716 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 19:18:58.895972  438716 kubeadm.go:310] 
	I0819 19:18:58.896072  438716 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 19:18:58.896154  438716 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 19:18:58.896220  438716 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 19:18:58.896284  438716 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 19:18:58.896314  438716 kubeadm.go:310] 
	I0819 19:18:58.896819  438716 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 19:18:58.896946  438716 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 19:18:58.897028  438716 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0819 19:18:58.897193  438716 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
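	The '[kubelet-check]' lines above come from kubeadm polling the kubelet's health endpoint at http://localhost:10248/healthz; every probe fails with 'connection refused' because the kubelet never comes up. A small Go sketch of that style of probe loop - the 5-second interval and 4-minute deadline are assumptions chosen to mirror the timestamps in the log, not kubeadm's exact schedule:

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// kubeletHealthy returns true only when the healthz endpoint answers 200 OK.
	func kubeletHealthy() bool {
		resp, err := http.Get("http://localhost:10248/healthz")
		if err != nil {
			return false // e.g. "connection refused" while the kubelet is not running
		}
		defer resp.Body.Close()
		return resp.StatusCode == http.StatusOK
	}

	func main() {
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			if kubeletHealthy() {
				fmt.Println("kubelet is healthy")
				return
			}
			time.Sleep(5 * time.Second)
		}
		fmt.Println("timed out waiting for the kubelet to report healthy")
	}

	Once the deadline passes, kubeadm aborts the wait-control-plane phase, which is the "timed out waiting for the condition" failure recorded above.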
	
	I0819 19:18:58.897249  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 19:18:59.361073  438716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:18:59.375791  438716 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:18:59.387650  438716 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:18:59.387697  438716 kubeadm.go:157] found existing configuration files:
	
	I0819 19:18:59.387756  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 19:18:59.397345  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:18:59.397409  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:18:59.408060  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 19:18:59.417658  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:18:59.417731  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:18:59.427765  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 19:18:59.437636  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:18:59.437712  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:18:59.447506  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 19:18:59.457100  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:18:59.457165  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 19:18:59.467185  438716 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 19:18:59.540706  438716 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 19:18:59.541005  438716 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 19:18:59.694109  438716 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 19:18:59.694238  438716 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 19:18:59.694350  438716 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 19:18:59.874268  438716 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 19:18:59.876259  438716 out.go:235]   - Generating certificates and keys ...
	I0819 19:18:59.876362  438716 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 19:18:59.876441  438716 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 19:18:59.876569  438716 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 19:18:59.876654  438716 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 19:18:59.876751  438716 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 19:18:59.876824  438716 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 19:18:59.876900  438716 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 19:18:59.877076  438716 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 19:18:59.877571  438716 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 19:18:59.877997  438716 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 19:18:59.878139  438716 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 19:18:59.878241  438716 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 19:19:00.153380  438716 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 19:19:00.359863  438716 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 19:19:00.470797  438716 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 19:19:00.590041  438716 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 19:19:00.614332  438716 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 19:19:00.615415  438716 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 19:19:00.615473  438716 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 19:19:00.756167  438716 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 19:19:00.757737  438716 out.go:235]   - Booting up control plane ...
	I0819 19:19:00.757873  438716 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 19:19:00.761484  438716 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 19:19:00.762431  438716 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 19:19:00.763241  438716 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 19:19:00.766155  438716 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 19:19:40.770166  438716 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 19:19:40.770378  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:19:40.770543  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:19:45.771352  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:19:45.771587  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:19:55.772027  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:19:55.772243  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:20:15.773008  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:20:15.773238  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:20:55.771311  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:20:55.771517  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:20:55.771530  438716 kubeadm.go:310] 
	I0819 19:20:55.771578  438716 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 19:20:55.771750  438716 kubeadm.go:310] 		timed out waiting for the condition
	I0819 19:20:55.771784  438716 kubeadm.go:310] 
	I0819 19:20:55.771845  438716 kubeadm.go:310] 	This error is likely caused by:
	I0819 19:20:55.771891  438716 kubeadm.go:310] 		- The kubelet is not running
	I0819 19:20:55.772014  438716 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 19:20:55.772027  438716 kubeadm.go:310] 
	I0819 19:20:55.772125  438716 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 19:20:55.772162  438716 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 19:20:55.772188  438716 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 19:20:55.772196  438716 kubeadm.go:310] 
	I0819 19:20:55.772272  438716 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 19:20:55.772336  438716 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 19:20:55.772343  438716 kubeadm.go:310] 
	I0819 19:20:55.772439  438716 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 19:20:55.772520  438716 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 19:20:55.772581  438716 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 19:20:55.772637  438716 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 19:20:55.772645  438716 kubeadm.go:310] 
	I0819 19:20:55.773758  438716 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 19:20:55.773880  438716 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 19:20:55.773971  438716 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0819 19:20:55.774067  438716 kubeadm.go:394] duration metric: took 7m57.361589371s to StartCluster
	I0819 19:20:55.774157  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:20:55.774243  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:20:55.818428  438716 cri.go:89] found id: ""
	I0819 19:20:55.818460  438716 logs.go:276] 0 containers: []
	W0819 19:20:55.818468  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:20:55.818475  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:20:55.818535  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:20:55.857714  438716 cri.go:89] found id: ""
	I0819 19:20:55.857747  438716 logs.go:276] 0 containers: []
	W0819 19:20:55.857758  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:20:55.857766  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:20:55.857841  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:20:55.891917  438716 cri.go:89] found id: ""
	I0819 19:20:55.891948  438716 logs.go:276] 0 containers: []
	W0819 19:20:55.891967  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:20:55.891976  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:20:55.892046  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:20:55.930608  438716 cri.go:89] found id: ""
	I0819 19:20:55.930643  438716 logs.go:276] 0 containers: []
	W0819 19:20:55.930656  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:20:55.930665  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:20:55.930734  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:20:55.966563  438716 cri.go:89] found id: ""
	I0819 19:20:55.966591  438716 logs.go:276] 0 containers: []
	W0819 19:20:55.966600  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:20:55.966607  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:20:55.966670  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:20:56.010392  438716 cri.go:89] found id: ""
	I0819 19:20:56.010421  438716 logs.go:276] 0 containers: []
	W0819 19:20:56.010430  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:20:56.010436  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:20:56.010491  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:20:56.066940  438716 cri.go:89] found id: ""
	I0819 19:20:56.066973  438716 logs.go:276] 0 containers: []
	W0819 19:20:56.066985  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:20:56.066994  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:20:56.067062  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:20:56.118852  438716 cri.go:89] found id: ""
	I0819 19:20:56.118881  438716 logs.go:276] 0 containers: []
	W0819 19:20:56.118894  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:20:56.118909  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:20:56.118925  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:20:56.158224  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:20:56.158263  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:20:56.211882  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:20:56.211925  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:20:56.228082  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:20:56.228124  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:20:56.307857  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:20:56.307880  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:20:56.307893  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0819 19:20:56.414797  438716 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0819 19:20:56.414885  438716 out.go:270] * 
	W0819 19:20:56.415020  438716 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 19:20:56.415039  438716 out.go:270] * 
	W0819 19:20:56.416031  438716 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 19:20:56.419869  438716 out.go:201] 
	W0819 19:20:56.421262  438716 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 19:20:56.421319  438716 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0819 19:20:56.421351  438716 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0819 19:20:56.422942  438716 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-104669 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
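The kubeadm output above already names the next diagnostic steps. A minimal sketch of that sequence, assuming shell access to the old-k8s-version-104669 node (for example via `minikube ssh -p old-k8s-version-104669`); CONTAINERID is a placeholder, and the final command simply retries the failing start with the --extra-config flag the log itself suggests, not a verified fix:

	# on the node: is the kubelet running, and why did it stop?
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	# list control-plane containers under CRI-O and inspect the failing one (CONTAINERID is a placeholder)
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
	# back on the host: retry the start with the suggested kubelet cgroup driver
	out/minikube-linux-amd64 start -p old-k8s-version-104669 --memory=2200 --driver=kvm2 \
	  --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd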
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-104669 -n old-k8s-version-104669
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-104669 -n old-k8s-version-104669: exit status 2 (249.305479ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-104669 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-104669 logs -n 25: (1.703755726s)
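The boxed hint in the log asks for a complete log bundle when filing an issue, whereas the post-mortem above only captures the last 25 lines. A hedged example of collecting the full logs to a file (same binary and profile as in the command above; the file name is arbitrary):

	out/minikube-linux-amd64 -p old-k8s-version-104669 logs --file=logs.txt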
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-571803                           | enable-default-cni-571803    | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | sudo cat                                               |                              |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-571803                           | enable-default-cni-571803    | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | sudo containerd config dump                            |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-571803                           | enable-default-cni-571803    | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | sudo systemctl status crio                             |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-571803                           | enable-default-cni-571803    | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | sudo systemctl cat crio                                |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-571803                           | enable-default-cni-571803    | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |         |                     |                     |
	|         | \;                                                     |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-571803                           | enable-default-cni-571803    | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | sudo crio config                                       |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-571803                           | enable-default-cni-571803    | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	| delete  | -p                                                     | disable-driver-mounts-737091 | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | disable-driver-mounts-737091                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-982795 | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:04 UTC |
	|         | default-k8s-diff-port-982795                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-278232             | no-preload-278232            | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC | 19 Aug 24 19:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-278232                                   | no-preload-278232            | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-982795  | default-k8s-diff-port-982795 | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC | 19 Aug 24 19:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-982795 | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC |                     |
	|         | default-k8s-diff-port-982795                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-024748            | embed-certs-024748           | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC | 19 Aug 24 19:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-024748                                  | embed-certs-024748           | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-104669        | old-k8s-version-104669       | jenkins | v1.33.1 | 19 Aug 24 19:06 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-278232                  | no-preload-278232            | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-278232                                   | no-preload-278232            | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC | 19 Aug 24 19:18 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-982795       | default-k8s-diff-port-982795 | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-024748                 | embed-certs-024748           | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-982795 | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC | 19 Aug 24 19:17 UTC |
	|         | default-k8s-diff-port-982795                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-024748                                  | embed-certs-024748           | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC | 19 Aug 24 19:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-104669                              | old-k8s-version-104669       | jenkins | v1.33.1 | 19 Aug 24 19:08 UTC | 19 Aug 24 19:08 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-104669             | old-k8s-version-104669       | jenkins | v1.33.1 | 19 Aug 24 19:08 UTC | 19 Aug 24 19:08 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-104669                              | old-k8s-version-104669       | jenkins | v1.33.1 | 19 Aug 24 19:08 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 19:08:30
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 19:08:30.532545  438716 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:08:30.532649  438716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:08:30.532657  438716 out.go:358] Setting ErrFile to fd 2...
	I0819 19:08:30.532661  438716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:08:30.532811  438716 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-372744/.minikube/bin
	I0819 19:08:30.533379  438716 out.go:352] Setting JSON to false
	I0819 19:08:30.534373  438716 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10253,"bootTime":1724084257,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 19:08:30.534451  438716 start.go:139] virtualization: kvm guest
	I0819 19:08:30.536658  438716 out.go:177] * [old-k8s-version-104669] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 19:08:30.537921  438716 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 19:08:30.537959  438716 notify.go:220] Checking for updates...
	I0819 19:08:30.540501  438716 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 19:08:30.541864  438716 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 19:08:30.543170  438716 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 19:08:30.544395  438716 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 19:08:30.545614  438716 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 19:08:30.547072  438716 config.go:182] Loaded profile config "old-k8s-version-104669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0819 19:08:30.547468  438716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:08:30.547570  438716 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:08:30.563059  438716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34139
	I0819 19:08:30.563506  438716 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:08:30.564068  438716 main.go:141] libmachine: Using API Version  1
	I0819 19:08:30.564091  438716 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:08:30.564474  438716 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:08:30.564719  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:08:30.566599  438716 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0819 19:08:30.568124  438716 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 19:08:30.568503  438716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:08:30.568541  438716 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:08:30.583805  438716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35313
	I0819 19:08:30.584314  438716 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:08:30.584805  438716 main.go:141] libmachine: Using API Version  1
	I0819 19:08:30.584827  438716 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:08:30.585131  438716 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:08:30.585320  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:08:30.621020  438716 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 19:08:30.622137  438716 start.go:297] selected driver: kvm2
	I0819 19:08:30.622158  438716 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-104669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-104669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:08:30.622252  438716 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 19:08:30.622998  438716 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 19:08:30.623082  438716 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19468-372744/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 19:08:30.638616  438716 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 19:08:30.638998  438716 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 19:08:30.639047  438716 cni.go:84] Creating CNI manager for ""
	I0819 19:08:30.639059  438716 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:08:30.639097  438716 start.go:340] cluster config:
	{Name:old-k8s-version-104669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-104669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:08:30.639243  438716 iso.go:125] acquiring lock: {Name:mk4c0ac1c3202b1a296739df622960e7a0bd8566 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 19:08:30.641823  438716 out.go:177] * Starting "old-k8s-version-104669" primary control-plane node in "old-k8s-version-104669" cluster
	I0819 19:08:30.915976  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:08:30.643167  438716 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 19:08:30.643197  438716 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0819 19:08:30.643205  438716 cache.go:56] Caching tarball of preloaded images
	I0819 19:08:30.643300  438716 preload.go:172] Found /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 19:08:30.643311  438716 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0819 19:08:30.643409  438716 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/config.json ...
	I0819 19:08:30.643583  438716 start.go:360] acquireMachinesLock for old-k8s-version-104669: {Name:mk24ba67a747357e9ce40f1e460d2bb0bc59cc75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 19:08:33.988031  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:08:40.067999  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:08:43.140051  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:08:49.219991  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:08:52.292013  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:08:58.371952  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:01.444061  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:07.523958  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:10.595977  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:16.675955  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:19.748037  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:25.828064  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:28.899972  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:34.980044  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:38.052066  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:44.131960  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:47.203926  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:53.283992  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:56.355952  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:02.435994  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:05.508042  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:11.587960  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:14.660027  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:20.740007  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:23.811991  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:29.891998  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:32.963959  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:39.043942  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:42.116029  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:48.195984  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:51.267954  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:57.347922  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:00.419952  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:06.499978  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:09.572013  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:15.652066  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:18.724012  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:24.804001  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:27.875961  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:33.956046  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:37.027998  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:43.108014  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:46.179987  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:49.184190  438245 start.go:364] duration metric: took 4m21.835882225s to acquireMachinesLock for "default-k8s-diff-port-982795"
	I0819 19:11:49.184280  438245 start.go:96] Skipping create...Using existing machine configuration
	I0819 19:11:49.184296  438245 fix.go:54] fixHost starting: 
	I0819 19:11:49.184628  438245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:11:49.184661  438245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:11:49.200544  438245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38241
	I0819 19:11:49.200994  438245 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:11:49.201530  438245 main.go:141] libmachine: Using API Version  1
	I0819 19:11:49.201560  438245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:11:49.201953  438245 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:11:49.202151  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:11:49.202296  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetState
	I0819 19:11:49.203841  438245 fix.go:112] recreateIfNeeded on default-k8s-diff-port-982795: state=Stopped err=<nil>
	I0819 19:11:49.203875  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	W0819 19:11:49.204042  438245 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 19:11:49.205721  438245 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-982795" ...
	I0819 19:11:49.181717  438001 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:11:49.181755  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetMachineName
	I0819 19:11:49.182097  438001 buildroot.go:166] provisioning hostname "no-preload-278232"
	I0819 19:11:49.182131  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetMachineName
	I0819 19:11:49.182392  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:11:49.184006  438001 machine.go:96] duration metric: took 4m37.423775019s to provisionDockerMachine
	I0819 19:11:49.184078  438001 fix.go:56] duration metric: took 4m37.445408913s for fixHost
	I0819 19:11:49.184091  438001 start.go:83] releasing machines lock for "no-preload-278232", held for 4m37.44544277s
	W0819 19:11:49.184116  438001 start.go:714] error starting host: provision: host is not running
	W0819 19:11:49.184274  438001 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0819 19:11:49.184288  438001 start.go:729] Will try again in 5 seconds ...
	I0819 19:11:49.206739  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .Start
	I0819 19:11:49.206892  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Ensuring networks are active...
	I0819 19:11:49.207586  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Ensuring network default is active
	I0819 19:11:49.207947  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Ensuring network mk-default-k8s-diff-port-982795 is active
	I0819 19:11:49.208368  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Getting domain xml...
	I0819 19:11:49.209114  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Creating domain...
	I0819 19:11:50.421290  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting to get IP...
	I0819 19:11:50.422082  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:50.422490  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:50.422562  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:50.422473  439403 retry.go:31] will retry after 273.434317ms: waiting for machine to come up
	I0819 19:11:50.698167  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:50.698598  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:50.698635  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:50.698569  439403 retry.go:31] will retry after 367.841325ms: waiting for machine to come up
	I0819 19:11:51.068401  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:51.068996  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:51.069019  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:51.068942  439403 retry.go:31] will retry after 460.053559ms: waiting for machine to come up
	I0819 19:11:51.530228  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:51.530700  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:51.530730  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:51.530636  439403 retry.go:31] will retry after 498.222116ms: waiting for machine to come up
	I0819 19:11:52.030322  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:52.030771  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:52.030808  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:52.030710  439403 retry.go:31] will retry after 750.75175ms: waiting for machine to come up
	I0819 19:11:54.186765  438001 start.go:360] acquireMachinesLock for no-preload-278232: {Name:mk24ba67a747357e9ce40f1e460d2bb0bc59cc75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 19:11:52.782638  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:52.783001  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:52.783027  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:52.782952  439403 retry.go:31] will retry after 576.883195ms: waiting for machine to come up
	I0819 19:11:53.361702  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:53.362105  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:53.362138  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:53.362035  439403 retry.go:31] will retry after 900.512446ms: waiting for machine to come up
	I0819 19:11:54.264656  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:54.265032  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:54.265052  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:54.264984  439403 retry.go:31] will retry after 1.339005367s: waiting for machine to come up
	I0819 19:11:55.605816  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:55.606348  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:55.606378  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:55.606304  439403 retry.go:31] will retry after 1.517824531s: waiting for machine to come up
	I0819 19:11:57.126027  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:57.126400  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:57.126426  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:57.126340  439403 retry.go:31] will retry after 2.220939365s: waiting for machine to come up
	I0819 19:11:59.348649  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:59.349041  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:59.349072  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:59.348987  439403 retry.go:31] will retry after 2.830298687s: waiting for machine to come up
	I0819 19:12:02.182934  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:02.183398  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:12:02.183422  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:12:02.183348  439403 retry.go:31] will retry after 2.302725829s: waiting for machine to come up
	I0819 19:12:04.487648  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:04.488074  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:12:04.488108  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:12:04.488016  439403 retry.go:31] will retry after 2.932250361s: waiting for machine to come up
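The retry.go:31 entries above show the driver polling libvirt's DHCP leases for the restarted VM's address, sleeping a growing, jittered delay between attempts. A minimal sketch of that retry-with-backoff pattern (illustrative only; the function below is hypothetical and not minikube's retry package):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry calls fn until it succeeds or maxAttempts is reached, sleeping a
// jittered, growing delay between attempts, similar in spirit to the
// "will retry after ..." lines in the log above.
func retry(maxAttempts int, base time.Duration, fn func() error) error {
	var err error
	for attempt := 0; attempt < maxAttempts; attempt++ {
		if err = fn(); err == nil {
			return nil
		}
		// Grow the delay roughly with the attempt count and add jitter.
		delay := base*time.Duration(attempt+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	attempts := 0
	err := retry(13, 300*time.Millisecond, func() error {
		attempts++
		if attempts < 5 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
	fmt.Println("done:", err)
}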
	I0819 19:12:08.736669  438295 start.go:364] duration metric: took 4m39.596501254s to acquireMachinesLock for "embed-certs-024748"
	I0819 19:12:08.736755  438295 start.go:96] Skipping create...Using existing machine configuration
	I0819 19:12:08.736776  438295 fix.go:54] fixHost starting: 
	I0819 19:12:08.737277  438295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:08.737326  438295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:08.754873  438295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36829
	I0819 19:12:08.755301  438295 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:08.755839  438295 main.go:141] libmachine: Using API Version  1
	I0819 19:12:08.755866  438295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:08.756184  438295 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:08.756383  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:08.756525  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetState
	I0819 19:12:08.758092  438295 fix.go:112] recreateIfNeeded on embed-certs-024748: state=Stopped err=<nil>
	I0819 19:12:08.758134  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	W0819 19:12:08.758299  438295 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 19:12:08.760922  438295 out.go:177] * Restarting existing kvm2 VM for "embed-certs-024748" ...
	I0819 19:12:08.762335  438295 main.go:141] libmachine: (embed-certs-024748) Calling .Start
	I0819 19:12:08.762509  438295 main.go:141] libmachine: (embed-certs-024748) Ensuring networks are active...
	I0819 19:12:08.763274  438295 main.go:141] libmachine: (embed-certs-024748) Ensuring network default is active
	I0819 19:12:08.763647  438295 main.go:141] libmachine: (embed-certs-024748) Ensuring network mk-embed-certs-024748 is active
	I0819 19:12:08.764057  438295 main.go:141] libmachine: (embed-certs-024748) Getting domain xml...
	I0819 19:12:08.764765  438295 main.go:141] libmachine: (embed-certs-024748) Creating domain...
	I0819 19:12:07.424132  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.424589  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Found IP for machine: 192.168.61.48
	I0819 19:12:07.424615  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Reserving static IP address...
	I0819 19:12:07.424634  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has current primary IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.425178  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Reserved static IP address: 192.168.61.48
	I0819 19:12:07.425205  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for SSH to be available...
	I0819 19:12:07.425237  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-982795", mac: "52:54:00:d4:19:cd", ip: "192.168.61.48"} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:07.425283  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | skip adding static IP to network mk-default-k8s-diff-port-982795 - found existing host DHCP lease matching {name: "default-k8s-diff-port-982795", mac: "52:54:00:d4:19:cd", ip: "192.168.61.48"}
	I0819 19:12:07.425304  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | Getting to WaitForSSH function...
	I0819 19:12:07.427600  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.427969  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:07.428001  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.428179  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | Using SSH client type: external
	I0819 19:12:07.428245  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | Using SSH private key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa (-rw-------)
	I0819 19:12:07.428297  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.48 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 19:12:07.428321  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | About to run SSH command:
	I0819 19:12:07.428339  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | exit 0
	I0819 19:12:07.547727  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | SSH cmd err, output: <nil>: 
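The WaitForSSH step above shells out to the system ssh binary with the options listed in the log and treats a clean `exit 0` as proof the machine is reachable. A rough Go equivalent, reusing the host, port and key path exactly as logged (a sketch under those assumptions, not minikube's implementation):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady returns nil once "exit 0" can be run over SSH, mirroring the
// external-ssh probe shown in the log above.
func sshReady(ip, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit 0",
	}
	return exec.Command("/usr/bin/ssh", args...).Run()
}

func main() {
	key := "/home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa"
	for {
		if err := sshReady("192.168.61.48", key); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
}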
	I0819 19:12:07.548095  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetConfigRaw
	I0819 19:12:07.548741  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetIP
	I0819 19:12:07.551308  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.551700  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:07.551733  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.551967  438245 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795/config.json ...
	I0819 19:12:07.552164  438245 machine.go:93] provisionDockerMachine start ...
	I0819 19:12:07.552186  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:12:07.552427  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:07.554782  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.555062  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:07.555080  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.555219  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:07.555427  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:07.555586  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:07.555767  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:07.555912  438245 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:07.556152  438245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0819 19:12:07.556168  438245 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 19:12:07.655996  438245 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 19:12:07.656027  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetMachineName
	I0819 19:12:07.656301  438245 buildroot.go:166] provisioning hostname "default-k8s-diff-port-982795"
	I0819 19:12:07.656329  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetMachineName
	I0819 19:12:07.656530  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:07.658956  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.659311  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:07.659344  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.659439  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:07.659617  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:07.659813  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:07.659937  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:07.660112  438245 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:07.660291  438245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0819 19:12:07.660302  438245 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-982795 && echo "default-k8s-diff-port-982795" | sudo tee /etc/hostname
	I0819 19:12:07.773590  438245 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-982795
	
	I0819 19:12:07.773615  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:07.776994  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.777360  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:07.777399  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.777580  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:07.777860  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:07.778060  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:07.778273  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:07.778457  438245 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:07.778665  438245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0819 19:12:07.778687  438245 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-982795' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-982795/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-982795' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 19:12:07.884662  438245 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:12:07.884718  438245 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19468-372744/.minikube CaCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19468-372744/.minikube}
	I0819 19:12:07.884751  438245 buildroot.go:174] setting up certificates
	I0819 19:12:07.884768  438245 provision.go:84] configureAuth start
	I0819 19:12:07.884782  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetMachineName
	I0819 19:12:07.885101  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetIP
	I0819 19:12:07.887844  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.888262  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:07.888293  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.888439  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:07.890581  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.890977  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:07.891005  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.891136  438245 provision.go:143] copyHostCerts
	I0819 19:12:07.891219  438245 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem, removing ...
	I0819 19:12:07.891240  438245 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 19:12:07.891306  438245 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem (1082 bytes)
	I0819 19:12:07.891398  438245 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem, removing ...
	I0819 19:12:07.891406  438245 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 19:12:07.891430  438245 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem (1123 bytes)
	I0819 19:12:07.891487  438245 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem, removing ...
	I0819 19:12:07.891494  438245 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 19:12:07.891517  438245 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem (1675 bytes)
	I0819 19:12:07.891570  438245 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-982795 san=[127.0.0.1 192.168.61.48 default-k8s-diff-port-982795 localhost minikube]
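provision.go:117 above issues a server certificate signed by the minikube CA with the listed IP and DNS SANs. A self-contained sketch of that step using crypto/x509, mirroring the SAN list and the 26280h expiry from the config dump (error handling omitted; this is illustrative and not minikube's actual cert helper):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Generate a CA, then a server cert carrying the SANs from the log.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-982795"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs as listed in the provision.go:117 line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.48")},
		DNSNames:    []string{"default-k8s-diff-port-982795", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}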
	I0819 19:12:08.083963  438245 provision.go:177] copyRemoteCerts
	I0819 19:12:08.084024  438245 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 19:12:08.084086  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:08.086637  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.086961  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:08.087005  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.087144  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:08.087357  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:08.087507  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:08.087694  438245 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa Username:docker}
	I0819 19:12:08.166312  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 19:12:08.194124  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0819 19:12:08.221817  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 19:12:08.249674  438245 provision.go:87] duration metric: took 364.885827ms to configureAuth
	I0819 19:12:08.249709  438245 buildroot.go:189] setting minikube options for container-runtime
	I0819 19:12:08.249891  438245 config.go:182] Loaded profile config "default-k8s-diff-port-982795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:12:08.249983  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:08.253045  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.253438  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:08.253469  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.253647  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:08.253856  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:08.254071  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:08.254266  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:08.254481  438245 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:08.254700  438245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0819 19:12:08.254722  438245 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 19:12:08.508775  438245 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 19:12:08.508808  438245 machine.go:96] duration metric: took 956.629475ms to provisionDockerMachine
	I0819 19:12:08.508824  438245 start.go:293] postStartSetup for "default-k8s-diff-port-982795" (driver="kvm2")
	I0819 19:12:08.508838  438245 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 19:12:08.508868  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:12:08.509214  438245 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 19:12:08.509259  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:08.512004  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.512341  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:08.512378  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.512517  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:08.512688  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:08.512867  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:08.513059  438245 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa Username:docker}
	I0819 19:12:08.594287  438245 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 19:12:08.598742  438245 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 19:12:08.598774  438245 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/addons for local assets ...
	I0819 19:12:08.598849  438245 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/files for local assets ...
	I0819 19:12:08.598943  438245 filesync.go:149] local asset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> 3800092.pem in /etc/ssl/certs
	I0819 19:12:08.599029  438245 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 19:12:08.608416  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:12:08.633880  438245 start.go:296] duration metric: took 125.036785ms for postStartSetup
	I0819 19:12:08.633930  438245 fix.go:56] duration metric: took 19.449641939s for fixHost
	I0819 19:12:08.633955  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:08.636729  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.637006  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:08.637030  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.637248  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:08.637483  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:08.637672  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:08.637791  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:08.637954  438245 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:08.638170  438245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0819 19:12:08.638186  438245 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 19:12:08.736519  438245 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724094728.710064462
	
	I0819 19:12:08.736540  438245 fix.go:216] guest clock: 1724094728.710064462
	I0819 19:12:08.736548  438245 fix.go:229] Guest: 2024-08-19 19:12:08.710064462 +0000 UTC Remote: 2024-08-19 19:12:08.633934039 +0000 UTC m=+281.422189217 (delta=76.130423ms)
	I0819 19:12:08.736568  438245 fix.go:200] guest clock delta is within tolerance: 76.130423ms
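The fix.go:216/229 lines above parse the guest's `date +%s.%N` output and compare it against the host clock, accepting the 76.130423ms delta. A small sketch of that comparison (the one-second tolerance below is a hypothetical value; the real threshold is not shown in the log):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses "seconds.nanoseconds" output from `date +%s.%N` on the
// guest and returns the signed offset from the supplied host time.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, _ = strconv.ParseInt(parts[1], 10, 64)
	}
	return time.Unix(sec, nsec).Sub(host), nil
}

func main() {
	// Values taken from the log lines above.
	d, _ := clockDelta("1724094728.710064462", time.Unix(1724094728, 633934039))
	const tolerance = time.Second // hypothetical threshold
	fmt.Printf("delta=%v within tolerance: %v\n", d, d < tolerance && d > -tolerance)
}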
	I0819 19:12:08.736580  438245 start.go:83] releasing machines lock for "default-k8s-diff-port-982795", held for 19.552337255s
	I0819 19:12:08.736604  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:12:08.736918  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetIP
	I0819 19:12:08.739570  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.740030  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:08.740057  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.740222  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:12:08.740762  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:12:08.740960  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:12:08.741037  438245 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 19:12:08.741100  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:08.741185  438245 ssh_runner.go:195] Run: cat /version.json
	I0819 19:12:08.741206  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:08.743899  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.744037  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.744282  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:08.744304  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.744439  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:08.744576  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:08.744599  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:08.744607  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.744689  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:08.744786  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:08.744858  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:08.744923  438245 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa Username:docker}
	I0819 19:12:08.744997  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:08.745143  438245 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa Username:docker}
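The sshutil lines above construct an SSH client for the VM from the machine's private key; the subsequent Run: commands go through it. A rough sketch of such a client using golang.org/x/crypto/ssh, reusing the key path, user, and address from the log; host-key verification is skipped only to keep the example short, and this is not minikube's implementation:

// Illustrative sketch (not minikube source): open an SSH session to the VM
// with the machine's private key and run a single command.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // brevity only; not safe for production
	}
	client, err := ssh.Dial("tcp", "192.168.61.48:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, _ := sess.CombinedOutput("date +%s.%N")
	fmt.Print(string(out))
}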
	I0819 19:12:08.820672  438245 ssh_runner.go:195] Run: systemctl --version
	I0819 19:12:08.847046  438245 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 19:12:08.989725  438245 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 19:12:08.996607  438245 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 19:12:08.996680  438245 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 19:12:09.013017  438245 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 19:12:09.013067  438245 start.go:495] detecting cgroup driver to use...
	I0819 19:12:09.013144  438245 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 19:12:09.030338  438245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 19:12:09.044580  438245 docker.go:217] disabling cri-docker service (if available) ...
	I0819 19:12:09.044635  438245 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 19:12:09.058825  438245 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 19:12:09.073358  438245 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 19:12:09.194611  438245 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 19:12:09.333368  438245 docker.go:233] disabling docker service ...
	I0819 19:12:09.333446  438245 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 19:12:09.348775  438245 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 19:12:09.362911  438245 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 19:12:09.503015  438245 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 19:12:09.621246  438245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 19:12:09.638480  438245 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 19:12:09.659346  438245 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 19:12:09.659406  438245 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:09.672088  438245 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 19:12:09.672166  438245 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:09.683704  438245 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:09.694847  438245 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:09.706339  438245 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 19:12:09.718658  438245 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:09.730645  438245 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:09.750843  438245 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:09.762551  438245 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 19:12:09.772960  438245 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 19:12:09.773037  438245 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 19:12:09.788362  438245 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
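The sequence above probes net.bridge.bridge-nf-call-iptables, loads br_netfilter when the sysctl file is missing, and then enables IPv4 forwarding. A minimal sketch of the same check-then-enable pattern, assuming root and the standard /proc/sys paths; it is not minikube's source:

// Illustrative sketch (not minikube source): ensure bridge netfilter and IPv4
// forwarding are enabled, mirroring the sequence logged above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func ensureBridgeNetfilter() error {
	const sysctlPath = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(sysctlPath); os.IsNotExist(err) {
		// The sysctl only appears once br_netfilter is loaded, so load it first.
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	return os.WriteFile(sysctlPath, []byte("1"), 0644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Equivalent of the logged "echo 1 > /proc/sys/net/ipv4/ip_forward" step.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}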
	I0819 19:12:09.798695  438245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:12:09.923389  438245 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 19:12:10.063317  438245 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 19:12:10.063413  438245 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 19:12:10.068449  438245 start.go:563] Will wait 60s for crictl version
	I0819 19:12:10.068540  438245 ssh_runner.go:195] Run: which crictl
	I0819 19:12:10.072807  438245 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 19:12:10.114058  438245 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 19:12:10.114151  438245 ssh_runner.go:195] Run: crio --version
	I0819 19:12:10.147919  438245 ssh_runner.go:195] Run: crio --version
	I0819 19:12:10.180009  438245 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 19:12:10.181218  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetIP
	I0819 19:12:10.184626  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:10.185015  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:10.185049  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:10.185243  438245 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0819 19:12:10.189653  438245 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
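The one-liner above rewrites /etc/hosts so that host.minikube.internal resolves to the gateway IP, dropping any stale entry before appending the new one. A small sketch of the same idempotent update, with the path, IP, and hostname taken from the log; the function name is illustrative:

// Illustrative sketch (not minikube source): idempotently pin a hostname to an
// IP in /etc/hosts, mirroring the grep-and-rewrite one-liner logged above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func pinHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any existing line ending in "<tab><name>", like the grep -v above.
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := pinHost("/etc/hosts", "192.168.61.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}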
	I0819 19:12:10.203439  438245 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-982795 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.0 ClusterName:default-k8s-diff-port-982795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.48 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 19:12:10.203608  438245 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 19:12:10.203668  438245 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:12:10.241427  438245 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 19:12:10.241511  438245 ssh_runner.go:195] Run: which lz4
	I0819 19:12:10.245734  438245 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 19:12:10.250082  438245 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 19:12:10.250112  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 19:12:11.694285  438245 crio.go:462] duration metric: took 1.448590086s to copy over tarball
	I0819 19:12:11.694371  438245 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 19:12:10.028225  438295 main.go:141] libmachine: (embed-certs-024748) Waiting to get IP...
	I0819 19:12:10.029208  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:10.029696  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:10.029752  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:10.029666  439540 retry.go:31] will retry after 276.66184ms: waiting for machine to come up
	I0819 19:12:10.308339  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:10.308762  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:10.308804  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:10.308710  439540 retry.go:31] will retry after 279.376198ms: waiting for machine to come up
	I0819 19:12:10.590326  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:10.591084  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:10.591117  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:10.590861  439540 retry.go:31] will retry after 364.735563ms: waiting for machine to come up
	I0819 19:12:10.957592  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:10.958075  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:10.958100  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:10.958033  439540 retry.go:31] will retry after 384.275284ms: waiting for machine to come up
	I0819 19:12:11.343631  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:11.344169  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:11.344192  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:11.344125  439540 retry.go:31] will retry after 572.182522ms: waiting for machine to come up
	I0819 19:12:11.917660  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:11.918150  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:11.918179  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:11.918093  439540 retry.go:31] will retry after 767.807058ms: waiting for machine to come up
	I0819 19:12:12.687256  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:12.687782  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:12.687815  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:12.687728  439540 retry.go:31] will retry after 715.897037ms: waiting for machine to come up
	I0819 19:12:13.406041  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:13.406653  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:13.406690  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:13.406577  439540 retry.go:31] will retry after 1.301579737s: waiting for machine to come up
	I0819 19:12:13.847779  438245 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.153373496s)
	I0819 19:12:13.847810  438245 crio.go:469] duration metric: took 2.153488101s to extract the tarball
	I0819 19:12:13.847817  438245 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 19:12:13.885520  438245 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:12:13.929775  438245 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 19:12:13.929809  438245 cache_images.go:84] Images are preloaded, skipping loading
	I0819 19:12:13.929838  438245 kubeadm.go:934] updating node { 192.168.61.48 8444 v1.31.0 crio true true} ...
	I0819 19:12:13.930019  438245 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-982795 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.48
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-982795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 19:12:13.930113  438245 ssh_runner.go:195] Run: crio config
	I0819 19:12:13.977098  438245 cni.go:84] Creating CNI manager for ""
	I0819 19:12:13.977123  438245 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:12:13.977136  438245 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 19:12:13.977176  438245 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.48 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-982795 NodeName:default-k8s-diff-port-982795 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 19:12:13.977382  438245 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.48
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-982795"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 19:12:13.977461  438245 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 19:12:13.987276  438245 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 19:12:13.987381  438245 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 19:12:13.996666  438245 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0819 19:12:14.013822  438245 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 19:12:14.030936  438245 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0819 19:12:14.048575  438245 ssh_runner.go:195] Run: grep 192.168.61.48	control-plane.minikube.internal$ /etc/hosts
	I0819 19:12:14.052809  438245 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.48	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:12:14.065177  438245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:12:14.185159  438245 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:12:14.202906  438245 certs.go:68] Setting up /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795 for IP: 192.168.61.48
	I0819 19:12:14.202934  438245 certs.go:194] generating shared ca certs ...
	I0819 19:12:14.202966  438245 certs.go:226] acquiring lock for ca certs: {Name:mk639e03f593e0bccac045f6e9f5ba3b96cc81e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:14.203184  438245 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key
	I0819 19:12:14.203266  438245 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key
	I0819 19:12:14.203282  438245 certs.go:256] generating profile certs ...
	I0819 19:12:14.203399  438245 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795/client.key
	I0819 19:12:14.203487  438245 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795/apiserver.key.a3c7a519
	I0819 19:12:14.203552  438245 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795/proxy-client.key
	I0819 19:12:14.203757  438245 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem (1338 bytes)
	W0819 19:12:14.203820  438245 certs.go:480] ignoring /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009_empty.pem, impossibly tiny 0 bytes
	I0819 19:12:14.203834  438245 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 19:12:14.203866  438245 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem (1082 bytes)
	I0819 19:12:14.203899  438245 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem (1123 bytes)
	I0819 19:12:14.203929  438245 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem (1675 bytes)
	I0819 19:12:14.203994  438245 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:12:14.205025  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 19:12:14.258243  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 19:12:14.295380  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 19:12:14.330511  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 19:12:14.358547  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0819 19:12:14.386938  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 19:12:14.415021  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 19:12:14.439531  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 19:12:14.463969  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 19:12:14.487638  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem --> /usr/share/ca-certificates/380009.pem (1338 bytes)
	I0819 19:12:14.511571  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /usr/share/ca-certificates/3800092.pem (1708 bytes)
	I0819 19:12:14.535223  438245 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 19:12:14.552922  438245 ssh_runner.go:195] Run: openssl version
	I0819 19:12:14.559078  438245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 19:12:14.570605  438245 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:14.575411  438245 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:14.575484  438245 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:14.581714  438245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 19:12:14.592896  438245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/380009.pem && ln -fs /usr/share/ca-certificates/380009.pem /etc/ssl/certs/380009.pem"
	I0819 19:12:14.604306  438245 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/380009.pem
	I0819 19:12:14.609139  438245 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:56 /usr/share/ca-certificates/380009.pem
	I0819 19:12:14.609212  438245 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/380009.pem
	I0819 19:12:14.615160  438245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/380009.pem /etc/ssl/certs/51391683.0"
	I0819 19:12:14.626010  438245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3800092.pem && ln -fs /usr/share/ca-certificates/3800092.pem /etc/ssl/certs/3800092.pem"
	I0819 19:12:14.636821  438245 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3800092.pem
	I0819 19:12:14.641308  438245 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:56 /usr/share/ca-certificates/3800092.pem
	I0819 19:12:14.641358  438245 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3800092.pem
	I0819 19:12:14.646898  438245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3800092.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 19:12:14.657905  438245 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 19:12:14.662780  438245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 19:12:14.668934  438245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 19:12:14.674693  438245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 19:12:14.680683  438245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 19:12:14.686689  438245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 19:12:14.692678  438245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
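The openssl probes above run "-checkend 86400" against each control-plane certificate, i.e. they ask whether the cert expires within the next 24 hours. An equivalent check in Go using crypto/x509, with one of the logged cert paths as an example argument; this is a sketch, not minikube's code:

// Illustrative sketch (not minikube source): report whether a PEM certificate
// expires within the next 24h, mirroring "openssl x509 -checkend 86400".
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}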
	I0819 19:12:14.698784  438245 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-982795 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.0 ClusterName:default-k8s-diff-port-982795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.48 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:12:14.698930  438245 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 19:12:14.699006  438245 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:12:14.740881  438245 cri.go:89] found id: ""
	I0819 19:12:14.740964  438245 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 19:12:14.751589  438245 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 19:12:14.751613  438245 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 19:12:14.751665  438245 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 19:12:14.761837  438245 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:12:14.762870  438245 kubeconfig.go:125] found "default-k8s-diff-port-982795" server: "https://192.168.61.48:8444"
	I0819 19:12:14.765176  438245 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 19:12:14.775114  438245 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.48
	I0819 19:12:14.775147  438245 kubeadm.go:1160] stopping kube-system containers ...
	I0819 19:12:14.775161  438245 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 19:12:14.775228  438245 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:12:14.811373  438245 cri.go:89] found id: ""
	I0819 19:12:14.811442  438245 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 19:12:14.829656  438245 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:12:14.840215  438245 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:12:14.840236  438245 kubeadm.go:157] found existing configuration files:
	
	I0819 19:12:14.840288  438245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0819 19:12:14.850017  438245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:12:14.850075  438245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:12:14.860060  438245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0819 19:12:14.869589  438245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:12:14.869645  438245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:12:14.879249  438245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0819 19:12:14.888475  438245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:12:14.888532  438245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:12:14.898151  438245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0819 19:12:14.907628  438245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:12:14.907737  438245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 19:12:14.917581  438245 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 19:12:14.927119  438245 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:15.037162  438245 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:16.355430  438245 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.318225023s)
	I0819 19:12:16.355461  438245 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:16.566565  438245 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:16.649402  438245 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:16.775956  438245 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:12:16.776067  438245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:14.709988  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:14.710397  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:14.710429  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:14.710338  439540 retry.go:31] will retry after 1.420823505s: waiting for machine to come up
	I0819 19:12:16.133160  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:16.133558  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:16.133587  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:16.133531  439540 retry.go:31] will retry after 1.71697779s: waiting for machine to come up
	I0819 19:12:17.852342  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:17.852884  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:17.852922  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:17.852836  439540 retry.go:31] will retry after 2.816782354s: waiting for machine to come up
	I0819 19:12:17.277067  438245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:17.777027  438245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:17.797513  438245 api_server.go:72] duration metric: took 1.021572879s to wait for apiserver process to appear ...
	I0819 19:12:17.797554  438245 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:12:17.797596  438245 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8444/healthz ...
	I0819 19:12:17.798191  438245 api_server.go:269] stopped: https://192.168.61.48:8444/healthz: Get "https://192.168.61.48:8444/healthz": dial tcp 192.168.61.48:8444: connect: connection refused
	I0819 19:12:18.297907  438245 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8444/healthz ...
	I0819 19:12:20.177305  438245 api_server.go:279] https://192.168.61.48:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 19:12:20.177345  438245 api_server.go:103] status: https://192.168.61.48:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 19:12:20.177367  438245 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8444/healthz ...
	I0819 19:12:20.244091  438245 api_server.go:279] https://192.168.61.48:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 19:12:20.244140  438245 api_server.go:103] status: https://192.168.61.48:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 19:12:20.298403  438245 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8444/healthz ...
	I0819 19:12:20.304289  438245 api_server.go:279] https://192.168.61.48:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:12:20.304325  438245 api_server.go:103] status: https://192.168.61.48:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:12:20.797876  438245 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8444/healthz ...
	I0819 19:12:20.803894  438245 api_server.go:279] https://192.168.61.48:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:12:20.803935  438245 api_server.go:103] status: https://192.168.61.48:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:12:21.298284  438245 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8444/healthz ...
	I0819 19:12:21.320292  438245 api_server.go:279] https://192.168.61.48:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:12:21.320320  438245 api_server.go:103] status: https://192.168.61.48:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:12:21.797829  438245 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8444/healthz ...
	I0819 19:12:21.802183  438245 api_server.go:279] https://192.168.61.48:8444/healthz returned 200:
	ok
	I0819 19:12:21.809866  438245 api_server.go:141] control plane version: v1.31.0
	I0819 19:12:21.809902  438245 api_server.go:131] duration metric: took 4.012339897s to wait for apiserver health ...
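The healthz wait above simply re-polls https://192.168.61.48:8444/healthz, treating connection refused, 403, and 500 responses as "not ready yet" until a 200 arrives. A compact sketch of such a poll loop; TLS verification is disabled purely for brevity (the real client trusts the cluster CA) and the timeout value here is an example:

// Illustrative sketch (not minikube source): poll an apiserver /healthz
// endpoint until it returns 200 or a deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reports healthy
			}
		}
		// Connection refused, 403, or 500 all mean "not ready yet"; retry.
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.48:8444/healthz", 4*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("apiserver healthy")
}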
	I0819 19:12:21.809914  438245 cni.go:84] Creating CNI manager for ""
	I0819 19:12:21.809944  438245 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:12:21.811668  438245 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 19:12:21.813183  438245 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 19:12:21.826170  438245 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
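The 496-byte conflist itself is not reproduced in the log; purely as an illustration of the shape of a bridge CNI config, a file of this kind can look like the following (every value below is an assumption, not the exact content minikube copied over):

	# Hypothetical bridge CNI config list; values are illustrative, not from the log.
	cat <<-'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
	{
	  "cniVersion": "0.4.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF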
	I0819 19:12:21.850473  438245 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:12:21.865379  438245 system_pods.go:59] 8 kube-system pods found
	I0819 19:12:21.865422  438245 system_pods.go:61] "coredns-6f6b679f8f-dwbnt" [9b8d7ee3-15ca-475b-b659-d5c3b10890fe] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 19:12:21.865442  438245 system_pods.go:61] "etcd-default-k8s-diff-port-982795" [6686e6f6-485d-4c57-89a1-af4f27b6216e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 19:12:21.865455  438245 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-982795" [fcfb5a0d-6d6c-4c30-a17f-43106f3dd5ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 19:12:21.865475  438245 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-982795" [346bf3b5-57e7-4f30-a6ed-959dc9e8941d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 19:12:21.865485  438245 system_pods.go:61] "kube-proxy-wrczx" [acabdc8e-5397-4531-afcb-57a8f4c48618] Running
	I0819 19:12:21.865493  438245 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-982795" [82de0c57-e712-4c0c-b751-a17cb0dd75b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 19:12:21.865503  438245 system_pods.go:61] "metrics-server-6867b74b74-5hlnx" [394c87af-a198-4fea-8a30-32a8c3e80884] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:12:21.865522  438245 system_pods.go:61] "storage-provisioner" [35f70989-846d-4ec5-b879-a22625ee94ce] Running
	I0819 19:12:21.865534  438245 system_pods.go:74] duration metric: took 15.035147ms to wait for pod list to return data ...
	I0819 19:12:21.865545  438245 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:12:21.870314  438245 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:12:21.870350  438245 node_conditions.go:123] node cpu capacity is 2
	I0819 19:12:21.870366  438245 node_conditions.go:105] duration metric: took 4.813819ms to run NodePressure ...
	I0819 19:12:21.870390  438245 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:22.130916  438245 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 19:12:22.134889  438245 kubeadm.go:739] kubelet initialised
	I0819 19:12:22.134912  438245 kubeadm.go:740] duration metric: took 3.970465ms waiting for restarted kubelet to initialise ...
	I0819 19:12:22.134920  438245 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:12:22.139345  438245 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-dwbnt" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:20.672189  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:20.672655  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:20.672682  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:20.672613  439540 retry.go:31] will retry after 2.76896974s: waiting for machine to come up
	I0819 19:12:23.442804  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:23.443223  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:23.443268  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:23.443170  439540 retry.go:31] will retry after 4.199459292s: waiting for machine to come up
	I0819 19:12:24.145329  438245 pod_ready.go:103] pod "coredns-6f6b679f8f-dwbnt" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:26.645695  438245 pod_ready.go:103] pod "coredns-6f6b679f8f-dwbnt" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:27.644842  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.645376  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has current primary IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.645403  438295 main.go:141] libmachine: (embed-certs-024748) Found IP for machine: 192.168.72.96
	I0819 19:12:27.645417  438295 main.go:141] libmachine: (embed-certs-024748) Reserving static IP address...
	I0819 19:12:27.645874  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "embed-certs-024748", mac: "52:54:00:f0:8b:43", ip: "192.168.72.96"} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:27.645902  438295 main.go:141] libmachine: (embed-certs-024748) Reserved static IP address: 192.168.72.96
	I0819 19:12:27.645919  438295 main.go:141] libmachine: (embed-certs-024748) DBG | skip adding static IP to network mk-embed-certs-024748 - found existing host DHCP lease matching {name: "embed-certs-024748", mac: "52:54:00:f0:8b:43", ip: "192.168.72.96"}
	I0819 19:12:27.645952  438295 main.go:141] libmachine: (embed-certs-024748) Waiting for SSH to be available...
	I0819 19:12:27.645974  438295 main.go:141] libmachine: (embed-certs-024748) DBG | Getting to WaitForSSH function...
	I0819 19:12:27.648195  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.648471  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:27.648496  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.648717  438295 main.go:141] libmachine: (embed-certs-024748) DBG | Using SSH client type: external
	I0819 19:12:27.648744  438295 main.go:141] libmachine: (embed-certs-024748) DBG | Using SSH private key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa (-rw-------)
	I0819 19:12:27.648773  438295 main.go:141] libmachine: (embed-certs-024748) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.96 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 19:12:27.648792  438295 main.go:141] libmachine: (embed-certs-024748) DBG | About to run SSH command:
	I0819 19:12:27.648808  438295 main.go:141] libmachine: (embed-certs-024748) DBG | exit 0
	I0819 19:12:27.775964  438295 main.go:141] libmachine: (embed-certs-024748) DBG | SSH cmd err, output: <nil>: 
	I0819 19:12:27.776344  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetConfigRaw
	I0819 19:12:27.777100  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetIP
	I0819 19:12:27.780096  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.780535  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:27.780570  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.780936  438295 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748/config.json ...
	I0819 19:12:27.781721  438295 machine.go:93] provisionDockerMachine start ...
	I0819 19:12:27.781748  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:27.781974  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:27.784482  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.784838  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:27.784868  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.785066  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:27.785254  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:27.785452  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:27.785617  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:27.785789  438295 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:27.786038  438295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0819 19:12:27.786059  438295 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 19:12:27.904337  438295 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 19:12:27.904375  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetMachineName
	I0819 19:12:27.904675  438295 buildroot.go:166] provisioning hostname "embed-certs-024748"
	I0819 19:12:27.904711  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetMachineName
	I0819 19:12:27.904932  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:27.907960  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.908325  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:27.908354  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.908446  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:27.908659  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:27.908825  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:27.909012  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:27.909234  438295 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:27.909441  438295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0819 19:12:27.909458  438295 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-024748 && echo "embed-certs-024748" | sudo tee /etc/hostname
	I0819 19:12:28.036564  438295 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-024748
	
	I0819 19:12:28.036597  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:28.039385  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.039798  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:28.039827  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.040071  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:28.040327  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:28.040493  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:28.040652  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:28.040882  438295 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:28.041113  438295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0819 19:12:28.041138  438295 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-024748' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-024748/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-024748' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 19:12:28.162311  438295 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:12:28.162348  438295 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19468-372744/.minikube CaCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19468-372744/.minikube}
	I0819 19:12:28.162368  438295 buildroot.go:174] setting up certificates
	I0819 19:12:28.162376  438295 provision.go:84] configureAuth start
	I0819 19:12:28.162385  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetMachineName
	I0819 19:12:28.162703  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetIP
	I0819 19:12:28.165171  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.165563  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:28.165593  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.165727  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:28.167917  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.168199  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:28.168221  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.168411  438295 provision.go:143] copyHostCerts
	I0819 19:12:28.168469  438295 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem, removing ...
	I0819 19:12:28.168491  438295 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 19:12:28.168560  438295 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem (1082 bytes)
	I0819 19:12:28.168693  438295 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem, removing ...
	I0819 19:12:28.168704  438295 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 19:12:28.168736  438295 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem (1123 bytes)
	I0819 19:12:28.168814  438295 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem, removing ...
	I0819 19:12:28.168824  438295 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 19:12:28.168853  438295 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem (1675 bytes)
	I0819 19:12:28.168942  438295 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem org=jenkins.embed-certs-024748 san=[127.0.0.1 192.168.72.96 embed-certs-024748 localhost minikube]
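minikube signs this server certificate against its own CA (ca.pem/ca-key.pem); as a rough stand-in only, a self-signed certificate with the same subject alternative names could be produced like this (the openssl invocation and filenames are assumptions, not what provision.go runs, and -addext needs OpenSSL 1.1.1+):

	# Hypothetical self-signed equivalent of the generated server cert.
	openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
	  -keyout server-key.pem -out server.pem \
	  -subj "/O=jenkins.embed-certs-024748" \
	  -addext "subjectAltName=IP:127.0.0.1,IP:192.168.72.96,DNS:embed-certs-024748,DNS:localhost,DNS:minikube"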
	I0819 19:12:28.447064  438295 provision.go:177] copyRemoteCerts
	I0819 19:12:28.447129  438295 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 19:12:28.447158  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:28.449851  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.450138  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:28.450163  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.450344  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:28.450541  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:28.450713  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:28.450832  438295 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa Username:docker}
	I0819 19:12:28.537815  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 19:12:28.562408  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0819 19:12:28.586728  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 19:12:28.611119  438295 provision.go:87] duration metric: took 448.726133ms to configureAuth
	I0819 19:12:28.611158  438295 buildroot.go:189] setting minikube options for container-runtime
	I0819 19:12:28.611351  438295 config.go:182] Loaded profile config "embed-certs-024748": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:12:28.611428  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:28.614168  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.614543  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:28.614571  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.614736  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:28.614941  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:28.615083  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:28.615192  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:28.615302  438295 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:28.615454  438295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0819 19:12:28.615469  438295 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 19:12:28.890054  438295 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 19:12:28.890086  438295 machine.go:96] duration metric: took 1.10834874s to provisionDockerMachine
	I0819 19:12:28.890100  438295 start.go:293] postStartSetup for "embed-certs-024748" (driver="kvm2")
	I0819 19:12:28.890120  438295 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 19:12:28.890146  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:28.890469  438295 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 19:12:28.890499  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:28.893251  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.893579  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:28.893605  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.893733  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:28.893895  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:28.894102  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:28.894220  438295 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa Username:docker}
	I0819 19:12:28.979381  438295 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 19:12:28.983921  438295 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 19:12:28.983952  438295 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/addons for local assets ...
	I0819 19:12:28.984048  438295 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/files for local assets ...
	I0819 19:12:28.984156  438295 filesync.go:149] local asset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> 3800092.pem in /etc/ssl/certs
	I0819 19:12:28.984250  438295 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 19:12:28.994964  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:12:29.018801  438295 start.go:296] duration metric: took 128.685446ms for postStartSetup
	I0819 19:12:29.018843  438295 fix.go:56] duration metric: took 20.282076509s for fixHost
	I0819 19:12:29.018870  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:29.021554  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:29.021848  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:29.021875  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:29.022066  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:29.022261  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:29.022428  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:29.022526  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:29.022678  438295 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:29.022900  438295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0819 19:12:29.022915  438295 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 19:12:29.132976  438716 start.go:364] duration metric: took 3m58.489348567s to acquireMachinesLock for "old-k8s-version-104669"
	I0819 19:12:29.133047  438716 start.go:96] Skipping create...Using existing machine configuration
	I0819 19:12:29.133055  438716 fix.go:54] fixHost starting: 
	I0819 19:12:29.133485  438716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:29.133524  438716 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:29.151330  438716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39213
	I0819 19:12:29.151778  438716 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:29.152271  438716 main.go:141] libmachine: Using API Version  1
	I0819 19:12:29.152301  438716 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:29.152682  438716 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:29.152883  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:12:29.153065  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetState
	I0819 19:12:29.154399  438716 fix.go:112] recreateIfNeeded on old-k8s-version-104669: state=Stopped err=<nil>
	I0819 19:12:29.154444  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	W0819 19:12:29.154684  438716 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 19:12:29.156349  438716 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-104669" ...
	I0819 19:12:29.157631  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .Start
	I0819 19:12:29.157825  438716 main.go:141] libmachine: (old-k8s-version-104669) Ensuring networks are active...
	I0819 19:12:29.158635  438716 main.go:141] libmachine: (old-k8s-version-104669) Ensuring network default is active
	I0819 19:12:29.159041  438716 main.go:141] libmachine: (old-k8s-version-104669) Ensuring network mk-old-k8s-version-104669 is active
	I0819 19:12:29.159509  438716 main.go:141] libmachine: (old-k8s-version-104669) Getting domain xml...
	I0819 19:12:29.160383  438716 main.go:141] libmachine: (old-k8s-version-104669) Creating domain...
	I0819 19:12:30.452488  438716 main.go:141] libmachine: (old-k8s-version-104669) Waiting to get IP...
	I0819 19:12:30.453743  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:30.454237  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:30.454323  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:30.454193  439728 retry.go:31] will retry after 197.440033ms: waiting for machine to come up
	I0819 19:12:29.132812  438295 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724094749.105537362
	
	I0819 19:12:29.132839  438295 fix.go:216] guest clock: 1724094749.105537362
	I0819 19:12:29.132850  438295 fix.go:229] Guest: 2024-08-19 19:12:29.105537362 +0000 UTC Remote: 2024-08-19 19:12:29.018848957 +0000 UTC m=+300.015027560 (delta=86.688405ms)
	I0819 19:12:29.132877  438295 fix.go:200] guest clock delta is within tolerance: 86.688405ms
	I0819 19:12:29.132884  438295 start.go:83] releasing machines lock for "embed-certs-024748", held for 20.396159242s
	I0819 19:12:29.132912  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:29.133179  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetIP
	I0819 19:12:29.136110  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:29.136532  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:29.136565  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:29.136750  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:29.137307  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:29.137532  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:29.137616  438295 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 19:12:29.137690  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:29.137758  438295 ssh_runner.go:195] Run: cat /version.json
	I0819 19:12:29.137781  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:29.140500  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:29.140820  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:29.140870  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:29.140903  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:29.141067  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:29.141266  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:29.141385  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:29.141430  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:29.141443  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:29.141586  438295 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa Username:docker}
	I0819 19:12:29.141639  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:29.141790  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:29.141957  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:29.142123  438295 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa Username:docker}
	I0819 19:12:29.242886  438295 ssh_runner.go:195] Run: systemctl --version
	I0819 19:12:29.249276  438295 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 19:12:29.393872  438295 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 19:12:29.401874  438295 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 19:12:29.401954  438295 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 19:12:29.421973  438295 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 19:12:29.422004  438295 start.go:495] detecting cgroup driver to use...
	I0819 19:12:29.422081  438295 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 19:12:29.442823  438295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 19:12:29.462663  438295 docker.go:217] disabling cri-docker service (if available) ...
	I0819 19:12:29.462720  438295 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 19:12:29.477896  438295 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 19:12:29.492591  438295 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 19:12:29.613759  438295 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 19:12:29.770719  438295 docker.go:233] disabling docker service ...
	I0819 19:12:29.770805  438295 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 19:12:29.785787  438295 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 19:12:29.802879  438295 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 19:12:29.947633  438295 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 19:12:30.082602  438295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 19:12:30.097628  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 19:12:30.118671  438295 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 19:12:30.118735  438295 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:30.131287  438295 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 19:12:30.131354  438295 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:30.143008  438295 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:30.156358  438295 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:30.172123  438295 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 19:12:30.188196  438295 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:30.201487  438295 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:30.219887  438295 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
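Taken together, the sed rewrites above aim at a CRI-O drop-in with roughly the following keys; this is a sketch assembled from the commands in the log, and the TOML section headers are assumptions, since the resulting file is never printed:

	# Approximate end state of /etc/crio/crio.conf.d/02-crio.conf after the edits above
	# (section names assumed; only the rewritten keys are shown).
	cat <<-'EOF'
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	EOF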
	I0819 19:12:30.235685  438295 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 19:12:30.246112  438295 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 19:12:30.246202  438295 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 19:12:30.259732  438295 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
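The two steps above (loading br_netfilter after the failed sysctl probe, then enabling IP forwarding) can also be made persistent across reboots; a minimal sketch, where the /etc/sysctl.d drop-in is an addition for illustration and not something the log shows minikube doing:

	# Equivalent manual steps, plus optional persistence via sysctl.d.
	sudo modprobe br_netfilter
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
	printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' \
	  | sudo tee /etc/sysctl.d/99-kubernetes.conf
	sudo sysctl --system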
	I0819 19:12:30.269866  438295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:12:30.397522  438295 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 19:12:30.545249  438295 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 19:12:30.545349  438295 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 19:12:30.550473  438295 start.go:563] Will wait 60s for crictl version
	I0819 19:12:30.550528  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:12:30.554782  438295 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 19:12:30.597634  438295 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 19:12:30.597736  438295 ssh_runner.go:195] Run: crio --version
	I0819 19:12:30.628137  438295 ssh_runner.go:195] Run: crio --version
	I0819 19:12:30.660912  438295 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 19:12:29.146475  438245 pod_ready.go:103] pod "coredns-6f6b679f8f-dwbnt" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:31.147618  438245 pod_ready.go:93] pod "coredns-6f6b679f8f-dwbnt" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:31.147651  438245 pod_ready.go:82] duration metric: took 9.00827926s for pod "coredns-6f6b679f8f-dwbnt" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.147665  438245 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.153305  438245 pod_ready.go:93] pod "etcd-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:31.153331  438245 pod_ready.go:82] duration metric: took 5.657625ms for pod "etcd-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.153347  438245 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.159009  438245 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:31.159037  438245 pod_ready.go:82] duration metric: took 5.680194ms for pod "kube-apiserver-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.159050  438245 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.165478  438245 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:31.165504  438245 pod_ready.go:82] duration metric: took 6.444529ms for pod "kube-controller-manager-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.165517  438245 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-wrczx" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.180293  438245 pod_ready.go:93] pod "kube-proxy-wrczx" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:31.180324  438245 pod_ready.go:82] duration metric: took 14.798883ms for pod "kube-proxy-wrczx" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.180337  438245 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:30.662168  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetIP
	I0819 19:12:30.665057  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:30.665455  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:30.665486  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:30.665660  438295 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0819 19:12:30.669911  438295 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:12:30.682755  438295 kubeadm.go:883] updating cluster {Name:embed-certs-024748 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-024748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.96 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 19:12:30.682883  438295 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 19:12:30.682936  438295 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:12:30.724160  438295 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 19:12:30.724233  438295 ssh_runner.go:195] Run: which lz4
	I0819 19:12:30.728710  438295 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 19:12:30.733279  438295 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 19:12:30.733317  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 19:12:32.178568  438295 crio.go:462] duration metric: took 1.449881121s to copy over tarball
	I0819 19:12:32.178642  438295 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 19:12:30.653917  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:30.654521  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:30.654566  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:30.654436  439728 retry.go:31] will retry after 317.038756ms: waiting for machine to come up
	I0819 19:12:30.973003  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:30.973530  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:30.973560  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:30.973487  439728 retry.go:31] will retry after 486.945032ms: waiting for machine to come up
	I0819 19:12:31.461937  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:31.462438  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:31.462470  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:31.462389  439728 retry.go:31] will retry after 441.288745ms: waiting for machine to come up
	I0819 19:12:31.904947  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:31.905564  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:31.905617  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:31.905472  439728 retry.go:31] will retry after 752.583403ms: waiting for machine to come up
	I0819 19:12:32.659642  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:32.660175  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:32.660207  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:32.660128  439728 retry.go:31] will retry after 932.705928ms: waiting for machine to come up
	I0819 19:12:33.594983  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:33.595529  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:33.595556  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:33.595466  439728 retry.go:31] will retry after 936.558157ms: waiting for machine to come up
	I0819 19:12:34.533158  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:34.533717  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:34.533743  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:34.533656  439728 retry.go:31] will retry after 1.435945188s: waiting for machine to come up
	I0819 19:12:33.186835  438245 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:35.187500  438245 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:35.686905  438245 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:35.686932  438245 pod_ready.go:82] duration metric: took 4.50658625s for pod "kube-scheduler-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:35.686945  438245 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace to be "Ready" ...
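The same readiness checks can be made from outside the test harness; a minimal sketch, assuming the kubeconfig context carries the profile name and that the metrics-server pod keeps its usual k8s-app label (both assumptions, not taken from the log):

	# Wait for the metrics-server pod to report Ready, then list kube-system pods.
	kubectl --context default-k8s-diff-port-982795 -n kube-system \
	  wait --for=condition=Ready pod -l k8s-app=metrics-server --timeout=4m
	kubectl --context default-k8s-diff-port-982795 -n kube-system get pods -o wide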
	I0819 19:12:34.321347  438295 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.14267077s)
	I0819 19:12:34.321379  438295 crio.go:469] duration metric: took 2.142777016s to extract the tarball
	I0819 19:12:34.321390  438295 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 19:12:34.357670  438295 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:12:34.403313  438295 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 19:12:34.403344  438295 cache_images.go:84] Images are preloaded, skipping loading
	I0819 19:12:34.403358  438295 kubeadm.go:934] updating node { 192.168.72.96 8443 v1.31.0 crio true true} ...
	I0819 19:12:34.403495  438295 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-024748 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.96
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-024748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 19:12:34.403576  438295 ssh_runner.go:195] Run: crio config
	I0819 19:12:34.450415  438295 cni.go:84] Creating CNI manager for ""
	I0819 19:12:34.450443  438295 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:12:34.450461  438295 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 19:12:34.450490  438295 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.96 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-024748 NodeName:embed-certs-024748 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.96"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.96 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 19:12:34.450646  438295 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.96
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-024748"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.96
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.96"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 19:12:34.450723  438295 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 19:12:34.461183  438295 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 19:12:34.461313  438295 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 19:12:34.470516  438295 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0819 19:12:34.488844  438295 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 19:12:34.505450  438295 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0819 19:12:34.522456  438295 ssh_runner.go:195] Run: grep 192.168.72.96	control-plane.minikube.internal$ /etc/hosts
	I0819 19:12:34.526272  438295 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.96	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:12:34.539079  438295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:12:34.665665  438295 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:12:34.683237  438295 certs.go:68] Setting up /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748 for IP: 192.168.72.96
	I0819 19:12:34.683265  438295 certs.go:194] generating shared ca certs ...
	I0819 19:12:34.683287  438295 certs.go:226] acquiring lock for ca certs: {Name:mk639e03f593e0bccac045f6e9f5ba3b96cc81e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:34.683471  438295 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key
	I0819 19:12:34.683536  438295 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key
	I0819 19:12:34.683550  438295 certs.go:256] generating profile certs ...
	I0819 19:12:34.683687  438295 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748/client.key
	I0819 19:12:34.683776  438295 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748/apiserver.key.89193d03
	I0819 19:12:34.683828  438295 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748/proxy-client.key
	I0819 19:12:34.683991  438295 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem (1338 bytes)
	W0819 19:12:34.684035  438295 certs.go:480] ignoring /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009_empty.pem, impossibly tiny 0 bytes
	I0819 19:12:34.684047  438295 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 19:12:34.684074  438295 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem (1082 bytes)
	I0819 19:12:34.684112  438295 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem (1123 bytes)
	I0819 19:12:34.684159  438295 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem (1675 bytes)
	I0819 19:12:34.684224  438295 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:12:34.685127  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 19:12:34.718591  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 19:12:34.758439  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 19:12:34.790143  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 19:12:34.828113  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0819 19:12:34.860389  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 19:12:34.898361  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 19:12:34.924677  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 19:12:34.951630  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /usr/share/ca-certificates/3800092.pem (1708 bytes)
	I0819 19:12:34.977435  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 19:12:35.002048  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem --> /usr/share/ca-certificates/380009.pem (1338 bytes)
	I0819 19:12:35.026934  438295 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 19:12:35.044476  438295 ssh_runner.go:195] Run: openssl version
	I0819 19:12:35.050174  438295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3800092.pem && ln -fs /usr/share/ca-certificates/3800092.pem /etc/ssl/certs/3800092.pem"
	I0819 19:12:35.061299  438295 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3800092.pem
	I0819 19:12:35.065978  438295 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:56 /usr/share/ca-certificates/3800092.pem
	I0819 19:12:35.066047  438295 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3800092.pem
	I0819 19:12:35.072572  438295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3800092.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 19:12:35.083760  438295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 19:12:35.094492  438295 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:35.099152  438295 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:35.099229  438295 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:35.105124  438295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 19:12:35.115950  438295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/380009.pem && ln -fs /usr/share/ca-certificates/380009.pem /etc/ssl/certs/380009.pem"
	I0819 19:12:35.126845  438295 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/380009.pem
	I0819 19:12:35.131568  438295 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:56 /usr/share/ca-certificates/380009.pem
	I0819 19:12:35.131650  438295 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/380009.pem
	I0819 19:12:35.137851  438295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/380009.pem /etc/ssl/certs/51391683.0"
	I0819 19:12:35.148818  438295 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 19:12:35.153800  438295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 19:12:35.159720  438295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 19:12:35.165740  438295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 19:12:35.171705  438295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 19:12:35.177574  438295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 19:12:35.183935  438295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0819 19:12:35.192681  438295 kubeadm.go:392] StartCluster: {Name:embed-certs-024748 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-024748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.96 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:12:35.192845  438295 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 19:12:35.192908  438295 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:12:35.231688  438295 cri.go:89] found id: ""
	I0819 19:12:35.231791  438295 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 19:12:35.242835  438295 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 19:12:35.242859  438295 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 19:12:35.242944  438295 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 19:12:35.255695  438295 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:12:35.257036  438295 kubeconfig.go:125] found "embed-certs-024748" server: "https://192.168.72.96:8443"
	I0819 19:12:35.259422  438295 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 19:12:35.271730  438295 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.96
	I0819 19:12:35.271758  438295 kubeadm.go:1160] stopping kube-system containers ...
	I0819 19:12:35.271772  438295 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 19:12:35.271820  438295 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:12:35.321065  438295 cri.go:89] found id: ""
	I0819 19:12:35.321155  438295 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 19:12:35.337802  438295 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:12:35.347699  438295 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:12:35.347726  438295 kubeadm.go:157] found existing configuration files:
	
	I0819 19:12:35.347785  438295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 19:12:35.357108  438295 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:12:35.357178  438295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:12:35.366805  438295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 19:12:35.376864  438295 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:12:35.376938  438295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:12:35.387018  438295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 19:12:35.396966  438295 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:12:35.397045  438295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:12:35.406192  438295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 19:12:35.415325  438295 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:12:35.415401  438295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 19:12:35.424450  438295 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 19:12:35.433931  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:35.549294  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:36.306930  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:36.517086  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:36.587680  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:36.680728  438295 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:12:36.680825  438295 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:37.181054  438295 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:37.681059  438295 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:38.181588  438295 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:38.197155  438295 api_server.go:72] duration metric: took 1.516436456s to wait for apiserver process to appear ...
	I0819 19:12:38.197184  438295 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:12:38.197212  438295 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0819 19:12:35.971138  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:35.971576  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:35.971607  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:35.971514  439728 retry.go:31] will retry after 1.521077744s: waiting for machine to come up
	I0819 19:12:37.493931  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:37.494389  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:37.494415  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:37.494361  439728 retry.go:31] will retry after 1.632508579s: waiting for machine to come up
	I0819 19:12:39.128939  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:39.129429  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:39.129456  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:39.129392  439728 retry.go:31] will retry after 2.634061376s: waiting for machine to come up
	I0819 19:12:40.567608  438295 api_server.go:279] https://192.168.72.96:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 19:12:40.567654  438295 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 19:12:40.567669  438295 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0819 19:12:40.593405  438295 api_server.go:279] https://192.168.72.96:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 19:12:40.593456  438295 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 19:12:40.697607  438295 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0819 19:12:40.713767  438295 api_server.go:279] https://192.168.72.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:12:40.713806  438295 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:12:41.197299  438295 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0819 19:12:41.203307  438295 api_server.go:279] https://192.168.72.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:12:41.203338  438295 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:12:41.697903  438295 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0819 19:12:41.705142  438295 api_server.go:279] https://192.168.72.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:12:41.705174  438295 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:12:42.197361  438295 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0819 19:12:42.202272  438295 api_server.go:279] https://192.168.72.96:8443/healthz returned 200:
	ok
	I0819 19:12:42.209788  438295 api_server.go:141] control plane version: v1.31.0
	I0819 19:12:42.209819  438295 api_server.go:131] duration metric: took 4.012627755s to wait for apiserver health ...
	I0819 19:12:42.209829  438295 cni.go:84] Creating CNI manager for ""
	I0819 19:12:42.209836  438295 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:12:42.211612  438295 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 19:12:37.693171  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:39.693397  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:41.693523  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:42.212889  438295 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 19:12:42.223277  438295 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 19:12:42.242392  438295 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:12:42.256273  438295 system_pods.go:59] 8 kube-system pods found
	I0819 19:12:42.256321  438295 system_pods.go:61] "coredns-6f6b679f8f-7ww4z" [bbde00d4-6027-4d8d-b51e-bd68915da166] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 19:12:42.256331  438295 system_pods.go:61] "etcd-embed-certs-024748" [846ff0f0-5399-43fd-8e7b-1f64997cd291] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 19:12:42.256348  438295 system_pods.go:61] "kube-apiserver-embed-certs-024748" [3ff558d6-e82e-47a0-bb81-15244bee6470] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 19:12:42.256366  438295 system_pods.go:61] "kube-controller-manager-embed-certs-024748" [993b82ba-e8e7-4896-a06b-87c4f08d5985] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 19:12:42.256383  438295 system_pods.go:61] "kube-proxy-bmmbh" [1f77f152-f5f4-40f6-9632-1eaa36b9ea31] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0819 19:12:42.256393  438295 system_pods.go:61] "kube-scheduler-embed-certs-024748" [34684d4c-2479-45c5-883b-158cf9f974f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 19:12:42.256403  438295 system_pods.go:61] "metrics-server-6867b74b74-kxcwh" [15f86629-d916-4fdc-9ecf-9cb1b6c83f85] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:12:42.256409  438295 system_pods.go:61] "storage-provisioner" [7acb6ce1-21b6-4cdd-a5cb-76d694fc0a38] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0819 19:12:42.256418  438295 system_pods.go:74] duration metric: took 14.004598ms to wait for pod list to return data ...
	I0819 19:12:42.256428  438295 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:12:42.263308  438295 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:12:42.263340  438295 node_conditions.go:123] node cpu capacity is 2
	I0819 19:12:42.263354  438295 node_conditions.go:105] duration metric: took 6.920993ms to run NodePressure ...
	I0819 19:12:42.263376  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:42.533917  438295 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 19:12:42.545853  438295 kubeadm.go:739] kubelet initialised
	I0819 19:12:42.545886  438295 kubeadm.go:740] duration metric: took 11.931664ms waiting for restarted kubelet to initialise ...
	I0819 19:12:42.545899  438295 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:12:42.553125  438295 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-7ww4z" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:42.559120  438295 pod_ready.go:98] node "embed-certs-024748" hosting pod "coredns-6f6b679f8f-7ww4z" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:42.559148  438295 pod_ready.go:82] duration metric: took 5.984169ms for pod "coredns-6f6b679f8f-7ww4z" in "kube-system" namespace to be "Ready" ...
	E0819 19:12:42.559158  438295 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-024748" hosting pod "coredns-6f6b679f8f-7ww4z" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:42.559164  438295 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:42.564830  438295 pod_ready.go:98] node "embed-certs-024748" hosting pod "etcd-embed-certs-024748" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:42.564852  438295 pod_ready.go:82] duration metric: took 5.681326ms for pod "etcd-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	E0819 19:12:42.564860  438295 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-024748" hosting pod "etcd-embed-certs-024748" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:42.564867  438295 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:42.571982  438295 pod_ready.go:98] node "embed-certs-024748" hosting pod "kube-apiserver-embed-certs-024748" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:42.572027  438295 pod_ready.go:82] duration metric: took 7.150945ms for pod "kube-apiserver-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	E0819 19:12:42.572038  438295 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-024748" hosting pod "kube-apiserver-embed-certs-024748" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:42.572045  438295 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:42.648692  438295 pod_ready.go:98] node "embed-certs-024748" hosting pod "kube-controller-manager-embed-certs-024748" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:42.648721  438295 pod_ready.go:82] duration metric: took 76.665633ms for pod "kube-controller-manager-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	E0819 19:12:42.648730  438295 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-024748" hosting pod "kube-controller-manager-embed-certs-024748" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:42.648737  438295 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-bmmbh" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:43.045619  438295 pod_ready.go:98] node "embed-certs-024748" hosting pod "kube-proxy-bmmbh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:43.045648  438295 pod_ready.go:82] duration metric: took 396.90414ms for pod "kube-proxy-bmmbh" in "kube-system" namespace to be "Ready" ...
	E0819 19:12:43.045658  438295 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-024748" hosting pod "kube-proxy-bmmbh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:43.045665  438295 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:43.446302  438295 pod_ready.go:98] node "embed-certs-024748" hosting pod "kube-scheduler-embed-certs-024748" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:43.446331  438295 pod_ready.go:82] duration metric: took 400.658861ms for pod "kube-scheduler-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	E0819 19:12:43.446342  438295 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-024748" hosting pod "kube-scheduler-embed-certs-024748" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:43.446359  438295 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:43.845457  438295 pod_ready.go:98] node "embed-certs-024748" hosting pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:43.845488  438295 pod_ready.go:82] duration metric: took 399.120328ms for pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace to be "Ready" ...
	E0819 19:12:43.845499  438295 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-024748" hosting pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:43.845506  438295 pod_ready.go:39] duration metric: took 1.299593775s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:12:43.845526  438295 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 19:12:43.864357  438295 ops.go:34] apiserver oom_adj: -16
	I0819 19:12:43.864384  438295 kubeadm.go:597] duration metric: took 8.621518076s to restartPrimaryControlPlane
	I0819 19:12:43.864394  438295 kubeadm.go:394] duration metric: took 8.671725617s to StartCluster
	I0819 19:12:43.864414  438295 settings.go:142] acquiring lock: {Name:mk396fcf49a1d0e69583cf37ff3c819e37118163 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:43.864495  438295 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 19:12:43.866775  438295 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/kubeconfig: {Name:mk8e7b4e1bb7da665111d2acd83eb48882c66853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:43.867073  438295 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.96 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 19:12:43.867296  438295 config.go:182] Loaded profile config "embed-certs-024748": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:12:43.867195  438295 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 19:12:43.867354  438295 addons.go:69] Setting metrics-server=true in profile "embed-certs-024748"
	I0819 19:12:43.867362  438295 addons.go:69] Setting default-storageclass=true in profile "embed-certs-024748"
	I0819 19:12:43.867397  438295 addons.go:234] Setting addon metrics-server=true in "embed-certs-024748"
	I0819 19:12:43.867402  438295 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-024748"
	W0819 19:12:43.867409  438295 addons.go:243] addon metrics-server should already be in state true
	I0819 19:12:43.867437  438295 host.go:66] Checking if "embed-certs-024748" exists ...
	I0819 19:12:43.867354  438295 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-024748"
	I0819 19:12:43.867502  438295 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-024748"
	W0819 19:12:43.867514  438295 addons.go:243] addon storage-provisioner should already be in state true
	I0819 19:12:43.867538  438295 host.go:66] Checking if "embed-certs-024748" exists ...
	I0819 19:12:43.867761  438295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:43.867796  438295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:43.867839  438295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:43.867873  438295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:43.867889  438295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:43.867908  438295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:43.869989  438295 out.go:177] * Verifying Kubernetes components...
	I0819 19:12:43.871464  438295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:12:43.883655  438295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33557
	I0819 19:12:43.883871  438295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33763
	I0819 19:12:43.884279  438295 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:43.884323  438295 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:43.884790  438295 main.go:141] libmachine: Using API Version  1
	I0819 19:12:43.884809  438295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:43.884935  438295 main.go:141] libmachine: Using API Version  1
	I0819 19:12:43.884953  438295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:43.885204  438295 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:43.885275  438295 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:43.885380  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetState
	I0819 19:12:43.885886  438295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:43.885928  438295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:43.886840  438295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40467
	I0819 19:12:43.887309  438295 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:43.887792  438295 main.go:141] libmachine: Using API Version  1
	I0819 19:12:43.887802  438295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:43.888109  438295 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:43.888670  438295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:43.888697  438295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:43.888973  438295 addons.go:234] Setting addon default-storageclass=true in "embed-certs-024748"
	W0819 19:12:43.888988  438295 addons.go:243] addon default-storageclass should already be in state true
	I0819 19:12:43.889020  438295 host.go:66] Checking if "embed-certs-024748" exists ...
	I0819 19:12:43.889270  438295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:43.889304  438295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:43.905278  438295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40907
	I0819 19:12:43.905278  438295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41133
	I0819 19:12:43.905734  438295 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:43.905877  438295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33393
	I0819 19:12:43.905983  438295 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:43.906299  438295 main.go:141] libmachine: Using API Version  1
	I0819 19:12:43.906320  438295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:43.906366  438295 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:43.906443  438295 main.go:141] libmachine: Using API Version  1
	I0819 19:12:43.906457  438295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:43.906822  438295 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:43.906898  438295 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:43.906995  438295 main.go:141] libmachine: Using API Version  1
	I0819 19:12:43.907006  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetState
	I0819 19:12:43.907012  438295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:43.907371  438295 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:43.907473  438295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:43.907523  438295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:43.907534  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetState
	I0819 19:12:43.909443  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:43.909529  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:43.911431  438295 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 19:12:43.911437  438295 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:12:43.913061  438295 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 19:12:43.913090  438295 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 19:12:43.913115  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:43.913180  438295 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:12:43.913199  438295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 19:12:43.913216  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:43.916642  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:43.916813  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:43.917110  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:43.917135  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:43.917166  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:43.917193  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:43.917463  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:43.917668  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:43.917671  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:43.917846  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:43.917867  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:43.918014  438295 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa Username:docker}
	I0819 19:12:43.918032  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:43.918148  438295 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa Username:docker}
	I0819 19:12:43.926337  438295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46687
	I0819 19:12:43.926813  438295 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:43.927333  438295 main.go:141] libmachine: Using API Version  1
	I0819 19:12:43.927354  438295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:43.927762  438295 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:43.927965  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetState
	I0819 19:12:43.929591  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:43.929910  438295 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 19:12:43.929926  438295 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 19:12:43.929942  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:43.933032  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:43.933387  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:43.933406  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:43.933626  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:43.933850  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:43.933992  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:43.934118  438295 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa Username:docker}
	I0819 19:12:44.078901  438295 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:12:44.098542  438295 node_ready.go:35] waiting up to 6m0s for node "embed-certs-024748" to be "Ready" ...
	I0819 19:12:44.180050  438295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:12:44.196186  438295 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 19:12:44.196210  438295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 19:12:44.220001  438295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 19:12:44.231145  438295 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 19:12:44.231180  438295 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 19:12:44.267800  438295 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 19:12:44.267831  438295 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 19:12:44.323078  438295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 19:12:45.276298  438295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.096199779s)
	I0819 19:12:45.276336  438295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.056298773s)
	I0819 19:12:45.276383  438295 main.go:141] libmachine: Making call to close driver server
	I0819 19:12:45.276395  438295 main.go:141] libmachine: (embed-certs-024748) Calling .Close
	I0819 19:12:45.276385  438295 main.go:141] libmachine: Making call to close driver server
	I0819 19:12:45.276462  438295 main.go:141] libmachine: (embed-certs-024748) Calling .Close
	I0819 19:12:45.276714  438295 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:12:45.276757  438295 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:12:45.276777  438295 main.go:141] libmachine: Making call to close driver server
	I0819 19:12:45.276793  438295 main.go:141] libmachine: (embed-certs-024748) Calling .Close
	I0819 19:12:45.276860  438295 main.go:141] libmachine: (embed-certs-024748) DBG | Closing plugin on server side
	I0819 19:12:45.276874  438295 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:12:45.276940  438295 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:12:45.276956  438295 main.go:141] libmachine: Making call to close driver server
	I0819 19:12:45.276964  438295 main.go:141] libmachine: (embed-certs-024748) Calling .Close
	I0819 19:12:45.277134  438295 main.go:141] libmachine: (embed-certs-024748) DBG | Closing plugin on server side
	I0819 19:12:45.277195  438295 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:12:45.277239  438295 main.go:141] libmachine: (embed-certs-024748) DBG | Closing plugin on server side
	I0819 19:12:45.277258  438295 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:12:45.277277  438295 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:12:45.277304  438295 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:12:45.284982  438295 main.go:141] libmachine: Making call to close driver server
	I0819 19:12:45.285007  438295 main.go:141] libmachine: (embed-certs-024748) Calling .Close
	I0819 19:12:45.285304  438295 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:12:45.285324  438295 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:12:45.293973  438295 main.go:141] libmachine: Making call to close driver server
	I0819 19:12:45.293994  438295 main.go:141] libmachine: (embed-certs-024748) Calling .Close
	I0819 19:12:45.294247  438295 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:12:45.294265  438295 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:12:45.294274  438295 main.go:141] libmachine: Making call to close driver server
	I0819 19:12:45.294282  438295 main.go:141] libmachine: (embed-certs-024748) Calling .Close
	I0819 19:12:45.295704  438295 main.go:141] libmachine: (embed-certs-024748) DBG | Closing plugin on server side
	I0819 19:12:45.295787  438295 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:12:45.295813  438295 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:12:45.295828  438295 addons.go:475] Verifying addon metrics-server=true in "embed-certs-024748"
	I0819 19:12:45.297684  438295 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0819 19:12:41.765706  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:41.766129  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:41.766182  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:41.766093  439728 retry.go:31] will retry after 3.464758587s: waiting for machine to come up
	I0819 19:12:45.232640  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:45.233118  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:45.233151  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:45.233066  439728 retry.go:31] will retry after 3.551527195s: waiting for machine to come up
	I0819 19:12:43.694387  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:46.194627  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:45.298844  438295 addons.go:510] duration metric: took 1.431699078s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0819 19:12:46.103096  438295 node_ready.go:53] node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:48.603205  438295 node_ready.go:53] node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:50.084809  438001 start.go:364] duration metric: took 55.89796214s to acquireMachinesLock for "no-preload-278232"
	I0819 19:12:50.084884  438001 start.go:96] Skipping create...Using existing machine configuration
	I0819 19:12:50.084895  438001 fix.go:54] fixHost starting: 
	I0819 19:12:50.085416  438001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:50.085459  438001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:50.103796  438001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41569
	I0819 19:12:50.104278  438001 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:50.104900  438001 main.go:141] libmachine: Using API Version  1
	I0819 19:12:50.104934  438001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:50.105335  438001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:50.105544  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:12:50.105703  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetState
	I0819 19:12:50.107422  438001 fix.go:112] recreateIfNeeded on no-preload-278232: state=Stopped err=<nil>
	I0819 19:12:50.107444  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	W0819 19:12:50.107602  438001 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 19:12:50.109328  438001 out.go:177] * Restarting existing kvm2 VM for "no-preload-278232" ...
	I0819 19:12:48.787197  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.787586  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has current primary IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.787611  438716 main.go:141] libmachine: (old-k8s-version-104669) Found IP for machine: 192.168.50.32
	I0819 19:12:48.787625  438716 main.go:141] libmachine: (old-k8s-version-104669) Reserving static IP address...
	I0819 19:12:48.788104  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "old-k8s-version-104669", mac: "52:54:00:8c:ff:a3", ip: "192.168.50.32"} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:48.788140  438716 main.go:141] libmachine: (old-k8s-version-104669) Reserved static IP address: 192.168.50.32
	I0819 19:12:48.788164  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | skip adding static IP to network mk-old-k8s-version-104669 - found existing host DHCP lease matching {name: "old-k8s-version-104669", mac: "52:54:00:8c:ff:a3", ip: "192.168.50.32"}
	I0819 19:12:48.788186  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | Getting to WaitForSSH function...
	I0819 19:12:48.788202  438716 main.go:141] libmachine: (old-k8s-version-104669) Waiting for SSH to be available...
	I0819 19:12:48.790365  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.790765  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:48.790793  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.790994  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | Using SSH client type: external
	I0819 19:12:48.791034  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | Using SSH private key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa (-rw-------)
	I0819 19:12:48.791073  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 19:12:48.791087  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | About to run SSH command:
	I0819 19:12:48.791103  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | exit 0
	I0819 19:12:48.920087  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | SSH cmd err, output: <nil>: 
	I0819 19:12:48.920464  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetConfigRaw
	I0819 19:12:48.921105  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetIP
	I0819 19:12:48.923637  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.924022  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:48.924053  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.924242  438716 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/config.json ...
	I0819 19:12:48.924429  438716 machine.go:93] provisionDockerMachine start ...
	I0819 19:12:48.924447  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:12:48.924655  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:48.926885  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.927345  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:48.927376  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.927527  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:48.927723  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:48.927846  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:48.927968  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:48.928241  438716 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:48.928453  438716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0819 19:12:48.928475  438716 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 19:12:49.039908  438716 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 19:12:49.039944  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetMachineName
	I0819 19:12:49.040200  438716 buildroot.go:166] provisioning hostname "old-k8s-version-104669"
	I0819 19:12:49.040236  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetMachineName
	I0819 19:12:49.040454  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:49.043462  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.043860  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.043892  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.044061  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:49.044256  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.044472  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.044613  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:49.044837  438716 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:49.045014  438716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0819 19:12:49.045027  438716 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-104669 && echo "old-k8s-version-104669" | sudo tee /etc/hostname
	I0819 19:12:49.170660  438716 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-104669
	
	I0819 19:12:49.170695  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:49.173564  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.173855  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.173882  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.174059  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:49.174239  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.174432  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.174564  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:49.174732  438716 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:49.174923  438716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0819 19:12:49.174941  438716 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-104669' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-104669/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-104669' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 19:12:49.298689  438716 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:12:49.298731  438716 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19468-372744/.minikube CaCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19468-372744/.minikube}
	I0819 19:12:49.298764  438716 buildroot.go:174] setting up certificates
	I0819 19:12:49.298778  438716 provision.go:84] configureAuth start
	I0819 19:12:49.298793  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetMachineName
	I0819 19:12:49.299157  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetIP
	I0819 19:12:49.301897  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.302290  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.302326  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.302462  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:49.304592  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.304960  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.304987  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.305150  438716 provision.go:143] copyHostCerts
	I0819 19:12:49.305219  438716 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem, removing ...
	I0819 19:12:49.305243  438716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 19:12:49.305310  438716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem (1082 bytes)
	I0819 19:12:49.305437  438716 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem, removing ...
	I0819 19:12:49.305449  438716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 19:12:49.305477  438716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem (1123 bytes)
	I0819 19:12:49.305571  438716 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem, removing ...
	I0819 19:12:49.305583  438716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 19:12:49.305612  438716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem (1675 bytes)
	I0819 19:12:49.305699  438716 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-104669 san=[127.0.0.1 192.168.50.32 localhost minikube old-k8s-version-104669]
	I0819 19:12:49.394004  438716 provision.go:177] copyRemoteCerts
	I0819 19:12:49.394074  438716 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 19:12:49.394112  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:49.396645  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.396906  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.396951  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.397108  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:49.397321  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.397504  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:49.397709  438716 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa Username:docker}
	I0819 19:12:49.483061  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 19:12:49.508297  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 19:12:49.533821  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0819 19:12:49.560064  438716 provision.go:87] duration metric: took 261.270909ms to configureAuth
	I0819 19:12:49.560093  438716 buildroot.go:189] setting minikube options for container-runtime
	I0819 19:12:49.560310  438716 config.go:182] Loaded profile config "old-k8s-version-104669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0819 19:12:49.560409  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:49.563173  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.563604  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.563633  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.563882  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:49.564075  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.564274  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.564479  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:49.564707  438716 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:49.564925  438716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0819 19:12:49.564948  438716 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 19:12:49.837237  438716 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 19:12:49.837267  438716 machine.go:96] duration metric: took 912.825625ms to provisionDockerMachine
	I0819 19:12:49.837281  438716 start.go:293] postStartSetup for "old-k8s-version-104669" (driver="kvm2")
	I0819 19:12:49.837297  438716 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 19:12:49.837341  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:12:49.837716  438716 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 19:12:49.837757  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:49.840409  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.840759  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.840789  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.840988  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:49.841183  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.841345  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:49.841473  438716 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa Username:docker}
	I0819 19:12:49.931067  438716 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 19:12:49.935562  438716 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 19:12:49.935590  438716 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/addons for local assets ...
	I0819 19:12:49.935694  438716 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/files for local assets ...
	I0819 19:12:49.935815  438716 filesync.go:149] local asset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> 3800092.pem in /etc/ssl/certs
	I0819 19:12:49.935941  438716 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 19:12:49.945418  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:12:49.969454  438716 start.go:296] duration metric: took 132.15677ms for postStartSetup
	I0819 19:12:49.969494  438716 fix.go:56] duration metric: took 20.836438665s for fixHost
	I0819 19:12:49.969517  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:49.972127  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.972502  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.972542  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.972758  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:49.973000  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.973190  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.973355  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:49.973548  438716 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:49.973753  438716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0819 19:12:49.973766  438716 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 19:12:50.084645  438716 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724094770.056929881
	
	I0819 19:12:50.084672  438716 fix.go:216] guest clock: 1724094770.056929881
	I0819 19:12:50.084681  438716 fix.go:229] Guest: 2024-08-19 19:12:50.056929881 +0000 UTC Remote: 2024-08-19 19:12:49.969497734 +0000 UTC m=+259.472837552 (delta=87.432147ms)
	I0819 19:12:50.084711  438716 fix.go:200] guest clock delta is within tolerance: 87.432147ms
	I0819 19:12:50.084718  438716 start.go:83] releasing machines lock for "old-k8s-version-104669", held for 20.951701853s
	I0819 19:12:50.084752  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:12:50.085050  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetIP
	I0819 19:12:50.087976  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:50.088363  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:50.088391  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:50.088572  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:12:50.089141  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:12:50.089360  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:12:50.089460  438716 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 19:12:50.089526  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:50.089572  438716 ssh_runner.go:195] Run: cat /version.json
	I0819 19:12:50.089599  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:50.092427  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:50.092591  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:50.092772  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:50.092797  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:50.092933  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:50.092965  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:50.092965  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:50.093147  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:50.093248  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:50.093328  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:50.093409  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:50.093503  438716 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa Username:docker}
	I0819 19:12:50.093532  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:50.093650  438716 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa Username:docker}
	I0819 19:12:50.177322  438716 ssh_runner.go:195] Run: systemctl --version
	I0819 19:12:50.200999  438716 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 19:12:50.349276  438716 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 19:12:50.357011  438716 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 19:12:50.357090  438716 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 19:12:50.377691  438716 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 19:12:50.377721  438716 start.go:495] detecting cgroup driver to use...
	I0819 19:12:50.377790  438716 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 19:12:50.394502  438716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 19:12:50.408481  438716 docker.go:217] disabling cri-docker service (if available) ...
	I0819 19:12:50.408556  438716 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 19:12:50.421818  438716 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 19:12:50.434899  438716 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 19:12:50.559399  438716 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 19:12:50.708621  438716 docker.go:233] disabling docker service ...
	I0819 19:12:50.708695  438716 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 19:12:50.726699  438716 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 19:12:50.740605  438716 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 19:12:50.896815  438716 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 19:12:51.037560  438716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 19:12:51.052554  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 19:12:51.072292  438716 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0819 19:12:51.072360  438716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:51.083248  438716 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 19:12:51.083334  438716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:51.093721  438716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:51.105212  438716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:51.119349  438716 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 19:12:51.134647  438716 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 19:12:51.144553  438716 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 19:12:51.144598  438716 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 19:12:51.159151  438716 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 19:12:51.171260  438716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:12:51.328931  438716 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 19:12:51.500761  438716 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 19:12:51.500831  438716 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 19:12:51.505982  438716 start.go:563] Will wait 60s for crictl version
	I0819 19:12:51.506057  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:51.510447  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 19:12:51.552892  438716 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 19:12:51.552982  438716 ssh_runner.go:195] Run: crio --version
	I0819 19:12:51.581931  438716 ssh_runner.go:195] Run: crio --version
	I0819 19:12:51.614565  438716 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0819 19:12:50.110718  438001 main.go:141] libmachine: (no-preload-278232) Calling .Start
	I0819 19:12:50.110888  438001 main.go:141] libmachine: (no-preload-278232) Ensuring networks are active...
	I0819 19:12:50.111809  438001 main.go:141] libmachine: (no-preload-278232) Ensuring network default is active
	I0819 19:12:50.112149  438001 main.go:141] libmachine: (no-preload-278232) Ensuring network mk-no-preload-278232 is active
	I0819 19:12:50.112709  438001 main.go:141] libmachine: (no-preload-278232) Getting domain xml...
	I0819 19:12:50.113441  438001 main.go:141] libmachine: (no-preload-278232) Creating domain...
	I0819 19:12:51.494803  438001 main.go:141] libmachine: (no-preload-278232) Waiting to get IP...
	I0819 19:12:51.495733  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:51.496203  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:51.496302  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:51.496187  439925 retry.go:31] will retry after 190.334257ms: waiting for machine to come up
	I0819 19:12:48.694017  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:50.694533  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:51.102764  438295 node_ready.go:49] node "embed-certs-024748" has status "Ready":"True"
	I0819 19:12:51.102791  438295 node_ready.go:38] duration metric: took 7.004204889s for node "embed-certs-024748" to be "Ready" ...
	I0819 19:12:51.102814  438295 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:12:51.109122  438295 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-7ww4z" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.114649  438295 pod_ready.go:93] pod "coredns-6f6b679f8f-7ww4z" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:51.114679  438295 pod_ready.go:82] duration metric: took 5.529339ms for pod "coredns-6f6b679f8f-7ww4z" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.114692  438295 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.121699  438295 pod_ready.go:93] pod "etcd-embed-certs-024748" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:51.121729  438295 pod_ready.go:82] duration metric: took 7.027906ms for pod "etcd-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.121742  438295 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.129040  438295 pod_ready.go:93] pod "kube-apiserver-embed-certs-024748" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:51.129066  438295 pod_ready.go:82] duration metric: took 7.315166ms for pod "kube-apiserver-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.129078  438295 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.636173  438295 pod_ready.go:93] pod "kube-controller-manager-embed-certs-024748" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:51.636226  438295 pod_ready.go:82] duration metric: took 507.130455ms for pod "kube-controller-manager-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.636243  438295 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bmmbh" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.904734  438295 pod_ready.go:93] pod "kube-proxy-bmmbh" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:51.904776  438295 pod_ready.go:82] duration metric: took 268.522999ms for pod "kube-proxy-bmmbh" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.904806  438295 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:53.911857  438295 pod_ready.go:103] pod "kube-scheduler-embed-certs-024748" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:51.615865  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetIP
	I0819 19:12:51.618782  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:51.619238  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:51.619268  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:51.619508  438716 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0819 19:12:51.624020  438716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:12:51.640765  438716 kubeadm.go:883] updating cluster {Name:old-k8s-version-104669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-104669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 19:12:51.640905  438716 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 19:12:51.640982  438716 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:12:51.696872  438716 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 19:12:51.696931  438716 ssh_runner.go:195] Run: which lz4
	I0819 19:12:51.702194  438716 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 19:12:51.707228  438716 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 19:12:51.707265  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0819 19:12:53.435062  438716 crio.go:462] duration metric: took 1.732918912s to copy over tarball
	I0819 19:12:53.435149  438716 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 19:12:51.688680  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:51.689287  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:51.689326  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:51.689222  439925 retry.go:31] will retry after 351.943478ms: waiting for machine to come up
	I0819 19:12:52.042810  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:52.043142  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:52.043163  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:52.043070  439925 retry.go:31] will retry after 332.731922ms: waiting for machine to come up
	I0819 19:12:52.377750  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:52.378418  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:52.378442  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:52.378377  439925 retry.go:31] will retry after 601.079013ms: waiting for machine to come up
	I0819 19:12:52.980930  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:52.981446  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:52.981474  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:52.981396  439925 retry.go:31] will retry after 621.686612ms: waiting for machine to come up
	I0819 19:12:53.605240  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:53.605716  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:53.605751  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:53.605666  439925 retry.go:31] will retry after 627.115747ms: waiting for machine to come up
	I0819 19:12:54.234095  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:54.234590  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:54.234613  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:54.234541  439925 retry.go:31] will retry after 1.137953362s: waiting for machine to come up
	I0819 19:12:55.373941  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:55.374412  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:55.374440  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:55.374368  439925 retry.go:31] will retry after 1.437610965s: waiting for machine to come up
	I0819 19:12:52.696277  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:54.704463  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:57.195001  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:55.412162  438295 pod_ready.go:93] pod "kube-scheduler-embed-certs-024748" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:55.412198  438295 pod_ready.go:82] duration metric: took 3.507380249s for pod "kube-scheduler-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:55.412214  438295 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:57.419600  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:56.399941  438716 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.96472478s)
	I0819 19:12:56.399971  438716 crio.go:469] duration metric: took 2.964877539s to extract the tarball
	I0819 19:12:56.399986  438716 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 19:12:56.447075  438716 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:12:56.491773  438716 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 19:12:56.491800  438716 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 19:12:56.491876  438716 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:12:56.491876  438716 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:12:56.491956  438716 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:12:56.491961  438716 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:12:56.492041  438716 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:12:56.492059  438716 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0819 19:12:56.492280  438716 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0819 19:12:56.492494  438716 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0819 19:12:56.493750  438716 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:12:56.493762  438716 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0819 19:12:56.493756  438716 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:12:56.493762  438716 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:12:56.493765  438716 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:12:56.493831  438716 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:12:56.493806  438716 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0819 19:12:56.494099  438716 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0819 19:12:56.694872  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0819 19:12:56.711504  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0819 19:12:56.754045  438716 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0819 19:12:56.754096  438716 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0819 19:12:56.754136  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:56.770451  438716 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0819 19:12:56.770510  438716 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0819 19:12:56.770574  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:56.770573  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 19:12:56.804839  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 19:12:56.804872  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 19:12:56.825837  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:12:56.832063  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0819 19:12:56.834072  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:12:56.837029  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:12:56.837697  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:12:56.902843  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 19:12:56.902930  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 19:12:57.020902  438716 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0819 19:12:57.020962  438716 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:12:57.020988  438716 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0819 19:12:57.021017  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:57.021025  438716 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0819 19:12:57.021098  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:57.023363  438716 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0819 19:12:57.023411  438716 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:12:57.023457  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:57.023541  438716 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0819 19:12:57.023569  438716 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:12:57.023605  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:57.034648  438716 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0819 19:12:57.034698  438716 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:12:57.034719  438716 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0819 19:12:57.034748  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:57.039577  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 19:12:57.039648  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 19:12:57.039715  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:12:57.041644  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:12:57.041983  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:12:57.045383  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:12:57.149677  438716 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0819 19:12:57.164701  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:12:57.164821  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 19:12:57.202353  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:12:57.202434  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:12:57.202465  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:12:57.258824  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 19:12:57.258858  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:12:57.285756  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:12:57.326148  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:12:57.326237  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:12:57.378322  438716 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0819 19:12:57.378369  438716 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0819 19:12:57.390369  438716 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0819 19:12:57.419554  438716 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0819 19:12:57.419627  438716 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0819 19:12:57.438485  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:12:57.583634  438716 cache_images.go:92] duration metric: took 1.091812972s to LoadCachedImages
	W0819 19:12:57.583757  438716 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
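[Note] The cache_images warnings above come from comparing the image ID reported by the runtime against the digest minikube expects; when they differ or the image is absent, the image is removed and re-loaded from the local cache. A hedged sketch of that existence check (minikube performs it remotely with podman over SSH; the expected hash below is the one from the pause:3.2 line):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imageMatches reports whether the local container storage already holds the
// image with the expected ID.
func imageMatches(image, wantID string) (bool, error) {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return false, err // typically "image not known"
	}
	return strings.TrimSpace(string(out)) == wantID, nil
}

func main() {
	ok, err := imageMatches("registry.k8s.io/pause:3.2",
		"80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c")
	fmt.Println(ok, err)
}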
	I0819 19:12:57.583777  438716 kubeadm.go:934] updating node { 192.168.50.32 8443 v1.20.0 crio true true} ...
	I0819 19:12:57.583915  438716 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-104669 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-104669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 19:12:57.584007  438716 ssh_runner.go:195] Run: crio config
	I0819 19:12:57.636714  438716 cni.go:84] Creating CNI manager for ""
	I0819 19:12:57.636738  438716 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:12:57.636752  438716 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 19:12:57.636776  438716 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.32 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-104669 NodeName:old-k8s-version-104669 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0819 19:12:57.636951  438716 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.32
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-104669"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 19:12:57.637028  438716 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0819 19:12:57.648002  438716 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 19:12:57.648093  438716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 19:12:57.658889  438716 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0819 19:12:57.677316  438716 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 19:12:57.695825  438716 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
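[Note] The kubeadm.yaml dumped above is rendered from Go templates and then written out to /var/tmp/minikube/kubeadm.yaml.new, as the scp line here records. A tiny, self-contained illustration of rendering such a config with text/template (a hypothetical cut-down template, not minikube's real one):

package main

import (
	"os"
	"text/template"
)

// A stand-in for how a kubeadm config can be rendered from Go data.
const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
`

func main() {
	data := struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
	}{"192.168.50.32", 8443, "old-k8s-version-104669"}
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}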
	I0819 19:12:57.715396  438716 ssh_runner.go:195] Run: grep 192.168.50.32	control-plane.minikube.internal$ /etc/hosts
	I0819 19:12:57.719886  438716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:12:57.733179  438716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:12:57.854139  438716 ssh_runner.go:195] Run: sudo systemctl start kubelet
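[Note] Before starting the kubelet, minikube makes sure /etc/hosts resolves control-plane.minikube.internal to the node IP, as the grep/cp pipeline above shows. A small sketch of the same idea in Go, writing to a scratch copy rather than /etc/hosts itself:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry appends "ip<TAB>host" unless a line for host already exists.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			return nil // already present
		}
	}
	f, err := os.OpenFile(path, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = fmt.Fprintf(f, "%s\t%s\n", ip, host)
	return err
}

func main() {
	if err := ensureHostsEntry("hosts.copy", "192.168.50.32", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}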
	I0819 19:12:57.871590  438716 certs.go:68] Setting up /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669 for IP: 192.168.50.32
	I0819 19:12:57.871619  438716 certs.go:194] generating shared ca certs ...
	I0819 19:12:57.871642  438716 certs.go:226] acquiring lock for ca certs: {Name:mk639e03f593e0bccac045f6e9f5ba3b96cc81e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:57.871850  438716 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key
	I0819 19:12:57.871916  438716 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key
	I0819 19:12:57.871930  438716 certs.go:256] generating profile certs ...
	I0819 19:12:57.872060  438716 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/client.key
	I0819 19:12:57.872131  438716 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/apiserver.key.7101f8a0
	I0819 19:12:57.872197  438716 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/proxy-client.key
	I0819 19:12:57.872336  438716 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem (1338 bytes)
	W0819 19:12:57.872365  438716 certs.go:480] ignoring /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009_empty.pem, impossibly tiny 0 bytes
	I0819 19:12:57.872371  438716 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 19:12:57.872390  438716 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem (1082 bytes)
	I0819 19:12:57.872419  438716 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem (1123 bytes)
	I0819 19:12:57.872441  438716 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem (1675 bytes)
	I0819 19:12:57.872488  438716 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:12:57.873259  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 19:12:57.907576  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 19:12:57.943535  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 19:12:57.977770  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 19:12:58.021213  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0819 19:12:58.051043  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 19:12:58.080442  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 19:12:58.110888  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 19:12:58.158635  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 19:12:58.184168  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem --> /usr/share/ca-certificates/380009.pem (1338 bytes)
	I0819 19:12:58.210064  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /usr/share/ca-certificates/3800092.pem (1708 bytes)
	I0819 19:12:58.235366  438716 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 19:12:58.254667  438716 ssh_runner.go:195] Run: openssl version
	I0819 19:12:58.260977  438716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3800092.pem && ln -fs /usr/share/ca-certificates/3800092.pem /etc/ssl/certs/3800092.pem"
	I0819 19:12:58.272995  438716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3800092.pem
	I0819 19:12:58.278056  438716 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:56 /usr/share/ca-certificates/3800092.pem
	I0819 19:12:58.278154  438716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3800092.pem
	I0819 19:12:58.284420  438716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3800092.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 19:12:58.296945  438716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 19:12:58.309288  438716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:58.314695  438716 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:58.314774  438716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:58.321016  438716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 19:12:58.332728  438716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/380009.pem && ln -fs /usr/share/ca-certificates/380009.pem /etc/ssl/certs/380009.pem"
	I0819 19:12:58.344766  438716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/380009.pem
	I0819 19:12:58.349610  438716 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:56 /usr/share/ca-certificates/380009.pem
	I0819 19:12:58.349681  438716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/380009.pem
	I0819 19:12:58.355942  438716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/380009.pem /etc/ssl/certs/51391683.0"
	I0819 19:12:58.368869  438716 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 19:12:58.373681  438716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 19:12:58.380415  438716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 19:12:58.386741  438716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 19:12:58.393362  438716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 19:12:58.399665  438716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 19:12:58.406108  438716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
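[Note] The series of "openssl x509 -checkend 86400" runs above verifies that each control-plane certificate is still valid for at least 24 hours before it is reused. The same check can be expressed directly with crypto/x509; the certificate path below is just a placeholder:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("apiserver.crt", 24*time.Hour)
	fmt.Println("expires within 24h:", soon, "err:", err)
}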
	I0819 19:12:58.412486  438716 kubeadm.go:392] StartCluster: {Name:old-k8s-version-104669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-104669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:12:58.412606  438716 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 19:12:58.412655  438716 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:12:58.462379  438716 cri.go:89] found id: ""
	I0819 19:12:58.462463  438716 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 19:12:58.474029  438716 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 19:12:58.474054  438716 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 19:12:58.474112  438716 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 19:12:58.485755  438716 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:12:58.486762  438716 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-104669" does not appear in /home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 19:12:58.487464  438716 kubeconfig.go:62] /home/jenkins/minikube-integration/19468-372744/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-104669" cluster setting kubeconfig missing "old-k8s-version-104669" context setting]
	I0819 19:12:58.489361  438716 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/kubeconfig: {Name:mk8e7b4e1bb7da665111d2acd83eb48882c66853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:58.508865  438716 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 19:12:58.520577  438716 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.32
	I0819 19:12:58.520622  438716 kubeadm.go:1160] stopping kube-system containers ...
	I0819 19:12:58.520637  438716 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 19:12:58.520728  438716 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:12:58.561900  438716 cri.go:89] found id: ""
	I0819 19:12:58.561984  438716 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 19:12:58.580483  438716 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:12:58.591734  438716 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:12:58.591754  438716 kubeadm.go:157] found existing configuration files:
	
	I0819 19:12:58.591804  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 19:12:58.601694  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:12:58.601771  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:12:58.612132  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 19:12:58.621911  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:12:58.621984  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:12:58.631525  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 19:12:58.640802  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:12:58.640872  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:12:58.650216  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 19:12:58.660647  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:12:58.660720  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 19:12:58.669992  438716 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
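[Note] The grep/rm pairs above implement the stale-config cleanup: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is deleted so the kubeadm phases below can regenerate it. A compact sketch of that logic (file names relative to the current directory, purely illustrative):

package main

import (
	"fmt"
	"os"
	"strings"
)

// removeIfMissingEndpoint deletes a kubeconfig-style file when it does not
// reference the expected control-plane URL.
func removeIfMissingEndpoint(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if os.IsNotExist(err) {
		return nil // nothing to clean up
	}
	if err != nil {
		return err
	}
	if strings.Contains(string(data), endpoint) {
		return nil // config already points at the right endpoint
	}
	return os.Remove(path)
}

func main() {
	for _, p := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		if err := removeIfMissingEndpoint(p, "https://control-plane.minikube.internal:8443"); err != nil {
			fmt.Println("cleanup failed:", p, err)
		}
	}
}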
	I0819 19:12:58.679709  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:58.809302  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:59.757994  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:00.006386  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:00.136752  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:00.222424  438716 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:13:00.222542  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
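[Note] After the kubeadm init phases, api_server.go waits for the kube-apiserver process to appear by repeatedly running pgrep, as the runs below show. A minimal polling helper along those lines (sketch only; the timeout and interval here are made up):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls `pgrep -xnf pattern` until it matches or the timeout
// elapses. pgrep exits 0 on a match and non-zero otherwise.
func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("no process matching %q after %s", pattern, timeout)
}

func main() {
	err := waitForProcess("kube-apiserver.*minikube.*", 30*time.Second)
	fmt.Println(err)
}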
	I0819 19:12:56.813279  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:56.813777  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:56.813807  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:56.813725  439925 retry.go:31] will retry after 1.504132921s: waiting for machine to come up
	I0819 19:12:58.319408  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:58.319880  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:58.319910  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:58.319832  439925 retry.go:31] will retry after 1.921699926s: waiting for machine to come up
	I0819 19:13:00.243504  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:00.243995  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:13:00.244021  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:13:00.243952  439925 retry.go:31] will retry after 2.040704792s: waiting for machine to come up
	I0819 19:12:59.195084  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:01.693648  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:59.419644  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:01.918769  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:00.723213  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:01.222908  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:01.723081  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:02.223465  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:02.722589  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:03.222706  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:03.722930  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:04.222826  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:04.722638  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:05.222666  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:02.287044  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:02.287490  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:13:02.287526  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:13:02.287416  439925 retry.go:31] will retry after 2.562055052s: waiting for machine to come up
	I0819 19:13:04.852682  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:04.853097  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:13:04.853125  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:13:04.853062  439925 retry.go:31] will retry after 3.627213972s: waiting for machine to come up
	I0819 19:13:04.194149  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:06.194831  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:04.418550  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:06.919083  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:05.723627  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:06.222663  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:06.723230  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:07.222666  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:07.722653  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:08.222861  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:08.723248  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:09.222831  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:09.722738  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:10.223069  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:08.484125  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.484586  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has current primary IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.484612  438001 main.go:141] libmachine: (no-preload-278232) Found IP for machine: 192.168.39.106
	I0819 19:13:08.484642  438001 main.go:141] libmachine: (no-preload-278232) Reserving static IP address...
	I0819 19:13:08.485049  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "no-preload-278232", mac: "52:54:00:14:f3:b1", ip: "192.168.39.106"} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:08.485091  438001 main.go:141] libmachine: (no-preload-278232) Reserved static IP address: 192.168.39.106
	I0819 19:13:08.485112  438001 main.go:141] libmachine: (no-preload-278232) DBG | skip adding static IP to network mk-no-preload-278232 - found existing host DHCP lease matching {name: "no-preload-278232", mac: "52:54:00:14:f3:b1", ip: "192.168.39.106"}
	I0819 19:13:08.485129  438001 main.go:141] libmachine: (no-preload-278232) DBG | Getting to WaitForSSH function...
	I0819 19:13:08.485145  438001 main.go:141] libmachine: (no-preload-278232) Waiting for SSH to be available...
	I0819 19:13:08.486998  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.487266  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:08.487290  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.487402  438001 main.go:141] libmachine: (no-preload-278232) DBG | Using SSH client type: external
	I0819 19:13:08.487429  438001 main.go:141] libmachine: (no-preload-278232) DBG | Using SSH private key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa (-rw-------)
	I0819 19:13:08.487463  438001 main.go:141] libmachine: (no-preload-278232) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.106 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 19:13:08.487476  438001 main.go:141] libmachine: (no-preload-278232) DBG | About to run SSH command:
	I0819 19:13:08.487487  438001 main.go:141] libmachine: (no-preload-278232) DBG | exit 0
	I0819 19:13:08.611459  438001 main.go:141] libmachine: (no-preload-278232) DBG | SSH cmd err, output: <nil>: 
	I0819 19:13:08.611934  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetConfigRaw
	I0819 19:13:08.612610  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetIP
	I0819 19:13:08.615212  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.615564  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:08.615594  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.615919  438001 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232/config.json ...
	I0819 19:13:08.616140  438001 machine.go:93] provisionDockerMachine start ...
	I0819 19:13:08.616162  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:08.616387  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:08.618650  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.618956  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:08.618988  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.619098  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:08.619291  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:08.619433  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:08.619569  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:08.619727  438001 main.go:141] libmachine: Using SSH client type: native
	I0819 19:13:08.619893  438001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0819 19:13:08.619903  438001 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 19:13:08.724912  438001 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 19:13:08.724955  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetMachineName
	I0819 19:13:08.725264  438001 buildroot.go:166] provisioning hostname "no-preload-278232"
	I0819 19:13:08.725291  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetMachineName
	I0819 19:13:08.725486  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:08.728810  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.729237  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:08.729274  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.729434  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:08.729667  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:08.729887  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:08.730067  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:08.730244  438001 main.go:141] libmachine: Using SSH client type: native
	I0819 19:13:08.730490  438001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0819 19:13:08.730511  438001 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-278232 && echo "no-preload-278232" | sudo tee /etc/hostname
	I0819 19:13:08.854474  438001 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-278232
	
	I0819 19:13:08.854499  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:08.857179  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.857511  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:08.857540  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.857713  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:08.857912  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:08.858075  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:08.858189  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:08.858356  438001 main.go:141] libmachine: Using SSH client type: native
	I0819 19:13:08.858556  438001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0819 19:13:08.858579  438001 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-278232' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-278232/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-278232' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 19:13:08.973053  438001 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:13:08.973090  438001 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19468-372744/.minikube CaCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19468-372744/.minikube}
	I0819 19:13:08.973115  438001 buildroot.go:174] setting up certificates
	I0819 19:13:08.973125  438001 provision.go:84] configureAuth start
	I0819 19:13:08.973135  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetMachineName
	I0819 19:13:08.973417  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetIP
	I0819 19:13:08.976100  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.976459  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:08.976487  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.976690  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:08.978902  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.979342  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:08.979370  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.979530  438001 provision.go:143] copyHostCerts
	I0819 19:13:08.979605  438001 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem, removing ...
	I0819 19:13:08.979628  438001 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 19:13:08.979717  438001 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem (1082 bytes)
	I0819 19:13:08.979830  438001 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem, removing ...
	I0819 19:13:08.979842  438001 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 19:13:08.979874  438001 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem (1123 bytes)
	I0819 19:13:08.979963  438001 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem, removing ...
	I0819 19:13:08.979974  438001 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 19:13:08.980002  438001 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem (1675 bytes)
	I0819 19:13:08.980075  438001 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem org=jenkins.no-preload-278232 san=[127.0.0.1 192.168.39.106 localhost minikube no-preload-278232]
	I0819 19:13:09.092643  438001 provision.go:177] copyRemoteCerts
	I0819 19:13:09.092707  438001 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 19:13:09.092739  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:09.095542  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.095929  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:09.095960  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.096099  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:09.096318  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:09.096481  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:09.096635  438001 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa Username:docker}
	I0819 19:13:09.179713  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 19:13:09.206363  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0819 19:13:09.231180  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 19:13:09.256764  438001 provision.go:87] duration metric: took 283.626537ms to configureAuth
	I0819 19:13:09.256810  438001 buildroot.go:189] setting minikube options for container-runtime
	I0819 19:13:09.256993  438001 config.go:182] Loaded profile config "no-preload-278232": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:13:09.257079  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:09.259661  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.260061  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:09.260094  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.260253  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:09.260461  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:09.260640  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:09.260796  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:09.260973  438001 main.go:141] libmachine: Using SSH client type: native
	I0819 19:13:09.261150  438001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0819 19:13:09.261166  438001 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 19:13:09.534325  438001 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 19:13:09.534357  438001 machine.go:96] duration metric: took 918.201944ms to provisionDockerMachine
	I0819 19:13:09.534371  438001 start.go:293] postStartSetup for "no-preload-278232" (driver="kvm2")
	I0819 19:13:09.534387  438001 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 19:13:09.534412  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:09.534794  438001 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 19:13:09.534826  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:09.537623  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.537974  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:09.538002  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.538138  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:09.538349  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:09.538534  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:09.538669  438001 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa Username:docker}
	I0819 19:13:09.627085  438001 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 19:13:09.631714  438001 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 19:13:09.631740  438001 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/addons for local assets ...
	I0819 19:13:09.631817  438001 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/files for local assets ...
	I0819 19:13:09.631911  438001 filesync.go:149] local asset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> 3800092.pem in /etc/ssl/certs
	I0819 19:13:09.632035  438001 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 19:13:09.642942  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:13:09.669242  438001 start.go:296] duration metric: took 134.853886ms for postStartSetup
	I0819 19:13:09.669294  438001 fix.go:56] duration metric: took 19.584399031s for fixHost
	I0819 19:13:09.669325  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:09.672072  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.672461  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:09.672494  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.672635  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:09.672937  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:09.673116  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:09.673331  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:09.673517  438001 main.go:141] libmachine: Using SSH client type: native
	I0819 19:13:09.673699  438001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0819 19:13:09.673717  438001 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 19:13:09.780601  438001 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724094789.749951838
	
	I0819 19:13:09.780628  438001 fix.go:216] guest clock: 1724094789.749951838
	I0819 19:13:09.780640  438001 fix.go:229] Guest: 2024-08-19 19:13:09.749951838 +0000 UTC Remote: 2024-08-19 19:13:09.669301343 +0000 UTC m=+358.073543000 (delta=80.650495ms)
	I0819 19:13:09.780668  438001 fix.go:200] guest clock delta is within tolerance: 80.650495ms
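
The clock check above parses the guest's date +%s.%N output and compares it against the host's wall clock, only resyncing when the delta exceeds a tolerance. A minimal standalone sketch of that comparison, reusing the timestamp from the log (the one-second tolerance and the parseGuestClock helper are illustrative assumptions, not minikube's fix.go code):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock parses the "seconds.nanoseconds" string that `date +%s.%N`
// prints on the guest, e.g. "1724094789.749951838".
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := parts[1]
		if len(frac) > 9 {
			frac = frac[:9]
		}
		// Right-pad so a short fraction like ".7" means 700ms, not 7ns.
		frac += strings.Repeat("0", 9-len(frac))
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1724094789.749951838") // value from the log above
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // assumed tolerance for illustration
	fmt.Printf("guest clock delta %v, within %v: %v\n", delta, tolerance, delta <= tolerance)
}

Padding the fractional part to nine digits matters because date prints nanoseconds, and parsing a truncated fraction directly would skew the delta by orders of magnitude.
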
	I0819 19:13:09.780676  438001 start.go:83] releasing machines lock for "no-preload-278232", held for 19.69582363s
	I0819 19:13:09.780703  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:09.781042  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetIP
	I0819 19:13:09.783578  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.783967  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:09.783996  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.784149  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:09.784649  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:09.784855  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:09.784946  438001 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 19:13:09.785037  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:09.785073  438001 ssh_runner.go:195] Run: cat /version.json
	I0819 19:13:09.785107  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:09.787346  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.787706  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.787763  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:09.787788  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.787977  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:09.788162  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:09.788226  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:09.788251  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.788327  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:09.788447  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:09.788500  438001 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa Username:docker}
	I0819 19:13:09.788622  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:09.788805  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:09.788994  438001 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa Username:docker}
	I0819 19:13:09.864596  438001 ssh_runner.go:195] Run: systemctl --version
	I0819 19:13:09.890038  438001 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 19:13:10.039016  438001 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 19:13:10.045269  438001 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 19:13:10.045352  438001 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 19:13:10.061345  438001 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 19:13:10.061380  438001 start.go:495] detecting cgroup driver to use...
	I0819 19:13:10.061467  438001 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 19:13:10.079229  438001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 19:13:10.094396  438001 docker.go:217] disabling cri-docker service (if available) ...
	I0819 19:13:10.094471  438001 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 19:13:10.109307  438001 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 19:13:10.123389  438001 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 19:13:10.241132  438001 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 19:13:10.395346  438001 docker.go:233] disabling docker service ...
	I0819 19:13:10.395444  438001 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 19:13:10.409604  438001 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 19:13:10.424149  438001 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 19:13:10.544180  438001 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 19:13:10.671038  438001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 19:13:10.685563  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 19:13:10.704754  438001 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 19:13:10.704819  438001 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:13:10.716002  438001 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 19:13:10.716077  438001 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:13:10.728085  438001 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:13:10.739292  438001 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:13:10.750083  438001 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 19:13:10.760832  438001 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:13:10.771231  438001 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:13:10.788807  438001 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
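
The sed invocations above edit /etc/crio/crio.conf.d/02-crio.conf in place: they point pause_image at registry.k8s.io/pause:3.10, force cgroup_manager to cgroupfs, and insert conmon_cgroup and default_sysctls entries when they are missing. A rough Go sketch of the same replace-or-append idea, under the simplifying assumption that the drop-in can be treated as a flat key/value list (setCrioValue is a hypothetical helper):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setCrioValue rewrites an existing `key = ...` line in the drop-in, or
// appends one when the key is absent, mirroring the sed edits in the log.
func setCrioValue(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	line := fmt.Sprintf("%s = %q", key, value)
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	if re.Match(data) {
		data = re.ReplaceAll(data, []byte(line))
	} else {
		data = append(data, []byte(line+"\n")...)
	}
	return os.WriteFile(path, data, 0o644)
}

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	for key, value := range map[string]string{
		"pause_image":    "registry.k8s.io/pause:3.10",
		"cgroup_manager": "cgroupfs",
	} {
		if err := setCrioValue(conf, key, value); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}
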
	I0819 19:13:10.799472  438001 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 19:13:10.809354  438001 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 19:13:10.809432  438001 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 19:13:10.824339  438001 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 19:13:10.833761  438001 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:13:10.953587  438001 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 19:13:11.091264  438001 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 19:13:11.091336  438001 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 19:13:11.096092  438001 start.go:563] Will wait 60s for crictl version
	I0819 19:13:11.096161  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:13:11.100040  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 19:13:11.142512  438001 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
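
After restarting crio, the run waits up to 60s for the CRI socket to appear and for crictl to answer before proceeding. A small sketch of such a poll-with-deadline loop, using the socket path from the log (the 500ms interval and the waitForSocket helper are assumptions for illustration):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the CRI socket exists or the timeout expires,
// mirroring the "Will wait 60s for socket path" step in the log above.
func waitForSocket(path string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // socket is present, crio is up
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
		}
		time.Sleep(interval)
	}
}

func main() {
	err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second, 500*time.Millisecond)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}
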
	I0819 19:13:11.142612  438001 ssh_runner.go:195] Run: crio --version
	I0819 19:13:11.176967  438001 ssh_runner.go:195] Run: crio --version
	I0819 19:13:11.208687  438001 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 19:13:11.209819  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetIP
	I0819 19:13:11.212533  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:11.212876  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:11.212900  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:11.213098  438001 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 19:13:11.217234  438001 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
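
The bash one-liner above updates /etc/hosts by filtering out any stale host.minikube.internal line, appending a fresh mapping, and copying the temp file back over /etc/hosts. The same filter-and-append pattern as a self-contained sketch (upsertHostsEntry is an illustrative helper, not taken from the minikube source):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry drops any line that already maps the given hostname and
// appends a fresh "IP<TAB>hostname" entry, like the shell one-liner above.
func upsertHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // stale entry, re-added below
		}
		kept = append(kept, line)
	}
	// Trim trailing blank lines before appending the new entry.
	for len(kept) > 0 && kept[len(kept)-1] == "" {
		kept = kept[:len(kept)-1]
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname), "")
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")), 0o644)
}

func main() {
	if err := upsertHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
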
	I0819 19:13:11.229995  438001 kubeadm.go:883] updating cluster {Name:no-preload-278232 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-278232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.106 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 19:13:11.230124  438001 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 19:13:11.230168  438001 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:13:11.265699  438001 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 19:13:11.265730  438001 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 19:13:11.265816  438001 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 19:13:11.265836  438001 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 19:13:11.265843  438001 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0819 19:13:11.265816  438001 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:13:11.265875  438001 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 19:13:11.265941  438001 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0819 19:13:11.265955  438001 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 19:13:11.266027  438001 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 19:13:11.267344  438001 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0819 19:13:11.267364  438001 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 19:13:11.267344  438001 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 19:13:11.267408  438001 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0819 19:13:11.267349  438001 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 19:13:11.267445  438001 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 19:13:11.267408  438001 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 19:13:11.267407  438001 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:13:11.411117  438001 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0819 19:13:11.435022  438001 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0819 19:13:11.437707  438001 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0819 19:13:11.439226  438001 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 19:13:11.446384  438001 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0819 19:13:11.448011  438001 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0819 19:13:11.463921  438001 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0819 19:13:11.476902  438001 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0819 19:13:11.476956  438001 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 19:13:11.477011  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:13:11.561762  438001 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0819 19:13:11.561827  438001 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 19:13:11.561889  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:13:08.694513  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:11.193505  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:09.419409  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:11.919413  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:13.931174  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:10.722882  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:11.223650  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:11.722917  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:12.223146  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:12.723410  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:13.222692  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:13.722636  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:14.223152  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:14.722661  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:15.223297  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
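
The half-second cadence of the repeated pgrep runs above is a readiness poll: keep asking whether a kube-apiserver process matching the pattern exists until one shows up or a deadline passes. A rough equivalent, with the pattern and interval taken from the log and the overall timeout assumed (waitForProcess is a hypothetical helper):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForProcess re-runs pgrep on the given pattern every interval until it
// reports a match or the deadline expires, like the repeated
// "sudo pgrep -xnf kube-apiserver.*minikube.*" lines above.
func waitForProcess(pattern string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 when at least one process matches the pattern.
		if exec.Command("sudo", "pgrep", "-xnf", pattern).Run() == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("no process matching %q after %v", pattern, timeout)
}

func main() {
	if err := waitForProcess("kube-apiserver.*minikube.*", 4*time.Minute, 500*time.Millisecond); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("kube-apiserver is running")
}
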
	I0819 19:13:11.657022  438001 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0819 19:13:11.657071  438001 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0819 19:13:11.657092  438001 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0819 19:13:11.657123  438001 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 19:13:11.657127  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:13:11.657164  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:13:11.657176  438001 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0819 19:13:11.657195  438001 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0819 19:13:11.657217  438001 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 19:13:11.657216  438001 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 19:13:11.657254  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:13:11.657260  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:13:11.729671  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0819 19:13:11.729903  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0819 19:13:11.730476  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 19:13:11.730489  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0819 19:13:11.730510  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0819 19:13:11.730544  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0819 19:13:11.853411  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0819 19:13:11.853647  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0819 19:13:11.872296  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0819 19:13:11.872370  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0819 19:13:11.876801  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 19:13:11.877002  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0819 19:13:11.982642  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0819 19:13:12.007940  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0819 19:13:12.031132  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0819 19:13:12.031150  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0819 19:13:12.031163  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 19:13:12.031275  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0819 19:13:12.130991  438001 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0819 19:13:12.131099  438001 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0819 19:13:12.130994  438001 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0819 19:13:12.131231  438001 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0819 19:13:12.162852  438001 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0819 19:13:12.162911  438001 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0819 19:13:12.162916  438001 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0819 19:13:12.162967  438001 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0819 19:13:12.162984  438001 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0819 19:13:12.162984  438001 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0819 19:13:12.163035  438001 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0819 19:13:12.163044  438001 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0819 19:13:12.163053  438001 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0819 19:13:12.163055  438001 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0819 19:13:12.163086  438001 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0819 19:13:12.163095  438001 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0819 19:13:12.177377  438001 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0819 19:13:12.177438  438001 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0819 19:13:12.177438  438001 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0819 19:13:12.229301  438001 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:13:14.745129  438001 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.582015913s)
	I0819 19:13:14.745162  438001 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0819 19:13:14.745196  438001 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0: (2.582131532s)
	I0819 19:13:14.745215  438001 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.515891614s)
	I0819 19:13:14.745232  438001 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0819 19:13:14.745200  438001 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0819 19:13:14.745247  438001 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0819 19:13:14.745285  438001 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:13:14.745298  438001 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0819 19:13:14.745325  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:13:13.693752  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:15.693871  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:16.419552  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:18.920189  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:15.723053  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:16.223486  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:16.722740  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:17.223337  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:17.723160  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:18.222651  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:18.723509  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:19.223686  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:19.723376  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:20.222953  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:16.728557  438001 ssh_runner.go:235] Completed: which crictl: (1.983204878s)
	I0819 19:13:16.728614  438001 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.983294709s)
	I0819 19:13:16.728635  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:13:16.728642  438001 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0819 19:13:16.728673  438001 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0819 19:13:16.728714  438001 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0819 19:13:16.771574  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:13:20.532388  438001 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.760772797s)
	I0819 19:13:20.532421  438001 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.80368813s)
	I0819 19:13:20.532437  438001 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0819 19:13:20.532469  438001 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0819 19:13:20.532480  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:13:20.532500  438001 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0819 19:13:18.193852  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:20.692752  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:21.419154  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:23.419271  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:20.723620  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:21.223286  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:21.723663  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:22.223594  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:22.723415  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:23.223643  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:23.723395  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:24.223476  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:24.723236  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:25.223620  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:22.500967  438001 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.968455152s)
	I0819 19:13:22.501030  438001 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0819 19:13:22.501036  438001 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.968509024s)
	I0819 19:13:22.501068  438001 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0819 19:13:22.501108  438001 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0819 19:13:22.501138  438001 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0819 19:13:22.501175  438001 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0819 19:13:22.506796  438001 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0819 19:13:23.962797  438001 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.461519717s)
	I0819 19:13:23.962838  438001 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0819 19:13:23.962876  438001 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0819 19:13:23.962959  438001 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0819 19:13:25.927805  438001 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.964816993s)
	I0819 19:13:25.927836  438001 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0819 19:13:25.927868  438001 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0819 19:13:25.927922  438001 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0819 19:13:26.572310  438001 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0819 19:13:26.572368  438001 cache_images.go:123] Successfully loaded all cached images
	I0819 19:13:26.572376  438001 cache_images.go:92] duration metric: took 15.306632126s to LoadCachedImages
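
Because no preload tarball matched, each image above was handled the same way: inspect the runtime for the tag, remove any stale copy with crictl rmi, then podman load the cached tarball from /var/lib/minikube/images. A condensed sketch of that per-image flow, using a two-image subset from the log (loadCachedImage and the hard-coded map are illustrative, not minikube's cache_images.go):

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
)

// loadCachedImage checks whether the tag already exists in the runtime and,
// if not, loads the pre-downloaded tarball, following the commands in the log.
func loadCachedImage(image, tarball string) error {
	// "podman image inspect" exits non-zero when the image is missing.
	if exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Run() == nil {
		return nil // already present, nothing to transfer
	}
	// Remove any stale tag first, ignoring "not found" errors.
	_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", image).Run()
	// Load the cached tarball shipped under /var/lib/minikube/images.
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
	}
	return nil
}

func main() {
	images := map[string]string{
		"registry.k8s.io/kube-apiserver:v1.31.0": "kube-apiserver_v1.31.0",
		"registry.k8s.io/etcd:3.5.15-0":          "etcd_3.5.15-0",
	}
	for image, file := range images {
		tarball := filepath.Join("/var/lib/minikube/images", file)
		if err := loadCachedImage(image, tarball); err != nil {
			fmt.Println("load failed:", err)
		}
	}
}
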
	I0819 19:13:26.572397  438001 kubeadm.go:934] updating node { 192.168.39.106 8443 v1.31.0 crio true true} ...
	I0819 19:13:26.572549  438001 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-278232 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.106
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-278232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 19:13:26.572635  438001 ssh_runner.go:195] Run: crio config
	I0819 19:13:26.623839  438001 cni.go:84] Creating CNI manager for ""
	I0819 19:13:26.623862  438001 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:13:26.623872  438001 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 19:13:26.623896  438001 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.106 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-278232 NodeName:no-preload-278232 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.106"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.106 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 19:13:26.624138  438001 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.106
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-278232"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.106
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.106"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 19:13:26.624226  438001 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 19:13:22.693093  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:24.694313  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:26.695312  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:25.918793  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:27.919721  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:25.722593  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:26.223582  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:26.722927  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:27.223364  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:27.723223  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:28.223458  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:28.723262  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:29.222823  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:29.722837  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:30.223196  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:26.634770  438001 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 19:13:26.634844  438001 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 19:13:26.644193  438001 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0819 19:13:26.661226  438001 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 19:13:26.677413  438001 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0819 19:13:26.696260  438001 ssh_runner.go:195] Run: grep 192.168.39.106	control-plane.minikube.internal$ /etc/hosts
	I0819 19:13:26.700029  438001 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.106	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:13:26.711667  438001 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:13:26.849658  438001 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:13:26.867185  438001 certs.go:68] Setting up /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232 for IP: 192.168.39.106
	I0819 19:13:26.867216  438001 certs.go:194] generating shared ca certs ...
	I0819 19:13:26.867240  438001 certs.go:226] acquiring lock for ca certs: {Name:mk639e03f593e0bccac045f6e9f5ba3b96cc81e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:13:26.867431  438001 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key
	I0819 19:13:26.867489  438001 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key
	I0819 19:13:26.867502  438001 certs.go:256] generating profile certs ...
	I0819 19:13:26.867600  438001 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232/client.key
	I0819 19:13:26.867705  438001 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232/apiserver.key.4086521c
	I0819 19:13:26.867759  438001 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232/proxy-client.key
	I0819 19:13:26.867936  438001 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem (1338 bytes)
	W0819 19:13:26.867980  438001 certs.go:480] ignoring /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009_empty.pem, impossibly tiny 0 bytes
	I0819 19:13:26.867995  438001 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 19:13:26.868037  438001 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem (1082 bytes)
	I0819 19:13:26.868075  438001 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem (1123 bytes)
	I0819 19:13:26.868107  438001 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem (1675 bytes)
	I0819 19:13:26.868171  438001 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:13:26.869217  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 19:13:26.903250  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 19:13:26.928593  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 19:13:26.957098  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 19:13:26.982422  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 19:13:27.009252  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 19:13:27.038043  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 19:13:27.075400  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 19:13:27.101568  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /usr/share/ca-certificates/3800092.pem (1708 bytes)
	I0819 19:13:27.127162  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 19:13:27.152327  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem --> /usr/share/ca-certificates/380009.pem (1338 bytes)
	I0819 19:13:27.176207  438001 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 19:13:27.194919  438001 ssh_runner.go:195] Run: openssl version
	I0819 19:13:27.201002  438001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3800092.pem && ln -fs /usr/share/ca-certificates/3800092.pem /etc/ssl/certs/3800092.pem"
	I0819 19:13:27.212050  438001 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3800092.pem
	I0819 19:13:27.216607  438001 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:56 /usr/share/ca-certificates/3800092.pem
	I0819 19:13:27.216663  438001 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3800092.pem
	I0819 19:13:27.222437  438001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3800092.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 19:13:27.234112  438001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 19:13:27.245472  438001 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:13:27.250203  438001 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:13:27.250257  438001 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:13:27.256045  438001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 19:13:27.266746  438001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/380009.pem && ln -fs /usr/share/ca-certificates/380009.pem /etc/ssl/certs/380009.pem"
	I0819 19:13:27.277316  438001 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/380009.pem
	I0819 19:13:27.281660  438001 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:56 /usr/share/ca-certificates/380009.pem
	I0819 19:13:27.281721  438001 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/380009.pem
	I0819 19:13:27.287223  438001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/380009.pem /etc/ssl/certs/51391683.0"
	I0819 19:13:27.299791  438001 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 19:13:27.304470  438001 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 19:13:27.310642  438001 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 19:13:27.316259  438001 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 19:13:27.322248  438001 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 19:13:27.327902  438001 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 19:13:27.333447  438001 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
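The `openssl x509 -noout -in <cert> -checkend 86400` runs above confirm that none of the existing control-plane certificates expire within the next 24 hours before they are reused. A minimal Go equivalent of that check, as a sketch using only the standard library (the path in main is one of the certs from this log):

```go
// certExpiresWithin reports whether the first certificate in the PEM file at
// path expires within d, analogous to `openssl x509 -checkend <seconds>`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func certExpiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block found in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when the certificate's NotAfter falls inside the next d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := certExpiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}
```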
	I0819 19:13:27.339044  438001 kubeadm.go:392] StartCluster: {Name:no-preload-278232 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-278232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.106 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:13:27.339165  438001 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 19:13:27.339241  438001 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:13:27.378362  438001 cri.go:89] found id: ""
	I0819 19:13:27.378436  438001 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 19:13:27.388560  438001 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 19:13:27.388580  438001 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 19:13:27.388623  438001 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 19:13:27.397834  438001 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:13:27.399336  438001 kubeconfig.go:125] found "no-preload-278232" server: "https://192.168.39.106:8443"
	I0819 19:13:27.402651  438001 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 19:13:27.412108  438001 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.106
	I0819 19:13:27.412155  438001 kubeadm.go:1160] stopping kube-system containers ...
	I0819 19:13:27.412170  438001 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 19:13:27.412230  438001 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:13:27.450332  438001 cri.go:89] found id: ""
	I0819 19:13:27.450431  438001 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 19:13:27.466943  438001 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:13:27.476741  438001 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:13:27.476765  438001 kubeadm.go:157] found existing configuration files:
	
	I0819 19:13:27.476810  438001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 19:13:27.485630  438001 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:13:27.485695  438001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:13:27.495232  438001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 19:13:27.504379  438001 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:13:27.504449  438001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:13:27.513723  438001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 19:13:27.522864  438001 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:13:27.522946  438001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:13:27.532402  438001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 19:13:27.541502  438001 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:13:27.541592  438001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 19:13:27.550934  438001 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 19:13:27.560650  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:27.684890  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:28.534223  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:28.757538  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:28.831313  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:28.897644  438001 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:13:28.897735  438001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:29.398486  438001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:29.898494  438001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:29.924881  438001 api_server.go:72] duration metric: took 1.027247684s to wait for apiserver process to appear ...
	I0819 19:13:29.924918  438001 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:13:29.924944  438001 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I0819 19:13:29.925535  438001 api_server.go:269] stopped: https://192.168.39.106:8443/healthz: Get "https://192.168.39.106:8443/healthz": dial tcp 192.168.39.106:8443: connect: connection refused
	I0819 19:13:30.425624  438001 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I0819 19:13:29.193722  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:31.194540  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:32.406445  438001 api_server.go:279] https://192.168.39.106:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 19:13:32.406476  438001 api_server.go:103] status: https://192.168.39.106:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 19:13:32.406491  438001 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I0819 19:13:32.470160  438001 api_server.go:279] https://192.168.39.106:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 19:13:32.470195  438001 api_server.go:103] status: https://192.168.39.106:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 19:13:32.470211  438001 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I0819 19:13:32.486292  438001 api_server.go:279] https://192.168.39.106:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 19:13:32.486322  438001 api_server.go:103] status: https://192.168.39.106:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 19:13:32.925943  438001 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I0819 19:13:32.933024  438001 api_server.go:279] https://192.168.39.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:13:32.933068  438001 api_server.go:103] status: https://192.168.39.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:13:33.425638  438001 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I0819 19:13:33.431919  438001 api_server.go:279] https://192.168.39.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:13:33.432051  438001 api_server.go:103] status: https://192.168.39.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:13:33.925369  438001 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I0819 19:13:33.930489  438001 api_server.go:279] https://192.168.39.106:8443/healthz returned 200:
	ok
	I0819 19:13:33.937758  438001 api_server.go:141] control plane version: v1.31.0
	I0819 19:13:33.937789  438001 api_server.go:131] duration metric: took 4.012862801s to wait for apiserver health ...
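The healthz sequence above (connection refused, then 403 while anonymous access is still blocked, then 500 while the rbac and priority-class post-start hooks finish, then 200 ok) is minikube polling the apiserver's /healthz endpoint until it reports healthy. A minimal sketch of such a poller follows; TLS verification is skipped here only because the probe addresses the apiserver by IP, which is an assumption for illustration rather than how minikube's real client (which carries the cluster CA) does it.

```go
// pollHealthz retries GET <url> until it returns 200 OK or the timeout elapses.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered "ok"
			}
		}
		time.Sleep(500 * time.Millisecond) // retry interval, mirroring the cadence in the log
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := pollHealthz("https://192.168.39.106:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```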
	I0819 19:13:33.937800  438001 cni.go:84] Creating CNI manager for ""
	I0819 19:13:33.937807  438001 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:13:33.939711  438001 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 19:13:30.419241  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:32.419437  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:30.723537  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:31.223437  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:31.723289  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:32.222714  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:32.723037  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:33.223138  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:33.723303  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:34.223334  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:34.722692  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:35.223021  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:33.941055  438001 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 19:13:33.953427  438001 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 19:13:33.982889  438001 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:13:33.998701  438001 system_pods.go:59] 8 kube-system pods found
	I0819 19:13:33.998750  438001 system_pods.go:61] "coredns-6f6b679f8f-22lbt" [c8a5cabd-41d4-41cb-91c1-2db1f3471db3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 19:13:33.998762  438001 system_pods.go:61] "etcd-no-preload-278232" [36d555a1-33e4-4c6c-b24e-2fee4fd84f2b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 19:13:33.998775  438001 system_pods.go:61] "kube-apiserver-no-preload-278232" [af7173e5-c4ac-4ece-b8b9-bb81cb6b9bfd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 19:13:33.998784  438001 system_pods.go:61] "kube-controller-manager-no-preload-278232" [2463d97a-5221-40ce-8fd7-08151165d6f7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 19:13:33.998794  438001 system_pods.go:61] "kube-proxy-rcf49" [85d5814a-1ba9-46be-ab11-17bf40c0f029] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0819 19:13:33.998807  438001 system_pods.go:61] "kube-scheduler-no-preload-278232" [3b327704-f70c-4d6f-a774-15427a305472] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 19:13:33.998819  438001 system_pods.go:61] "metrics-server-6867b74b74-vxwrs" [e8b74128-b393-4f0f-90fe-e05f20d54acd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:13:33.998827  438001 system_pods.go:61] "storage-provisioner" [24766475-1a5b-4f1a-9350-3e891b5272cc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0819 19:13:33.998841  438001 system_pods.go:74] duration metric: took 15.918876ms to wait for pod list to return data ...
	I0819 19:13:33.998853  438001 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:13:34.003102  438001 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:13:34.003131  438001 node_conditions.go:123] node cpu capacity is 2
	I0819 19:13:34.003145  438001 node_conditions.go:105] duration metric: took 4.283682ms to run NodePressure ...
	I0819 19:13:34.003163  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:34.300052  438001 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 19:13:34.304483  438001 kubeadm.go:739] kubelet initialised
	I0819 19:13:34.304505  438001 kubeadm.go:740] duration metric: took 4.421894ms waiting for restarted kubelet to initialise ...
	I0819 19:13:34.304513  438001 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:13:34.310575  438001 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-22lbt" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:34.316040  438001 pod_ready.go:98] node "no-preload-278232" hosting pod "coredns-6f6b679f8f-22lbt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.316068  438001 pod_ready.go:82] duration metric: took 5.462078ms for pod "coredns-6f6b679f8f-22lbt" in "kube-system" namespace to be "Ready" ...
	E0819 19:13:34.316080  438001 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-278232" hosting pod "coredns-6f6b679f8f-22lbt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.316088  438001 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:34.320731  438001 pod_ready.go:98] node "no-preload-278232" hosting pod "etcd-no-preload-278232" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.320751  438001 pod_ready.go:82] duration metric: took 4.649545ms for pod "etcd-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	E0819 19:13:34.320758  438001 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-278232" hosting pod "etcd-no-preload-278232" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.320763  438001 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:34.325499  438001 pod_ready.go:98] node "no-preload-278232" hosting pod "kube-apiserver-no-preload-278232" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.325519  438001 pod_ready.go:82] duration metric: took 4.750861ms for pod "kube-apiserver-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	E0819 19:13:34.325526  438001 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-278232" hosting pod "kube-apiserver-no-preload-278232" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.325531  438001 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:34.388221  438001 pod_ready.go:98] node "no-preload-278232" hosting pod "kube-controller-manager-no-preload-278232" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.388248  438001 pod_ready.go:82] duration metric: took 62.708596ms for pod "kube-controller-manager-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	E0819 19:13:34.388259  438001 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-278232" hosting pod "kube-controller-manager-no-preload-278232" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.388265  438001 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-rcf49" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:34.787164  438001 pod_ready.go:98] node "no-preload-278232" hosting pod "kube-proxy-rcf49" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.787193  438001 pod_ready.go:82] duration metric: took 398.919585ms for pod "kube-proxy-rcf49" in "kube-system" namespace to be "Ready" ...
	E0819 19:13:34.787203  438001 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-278232" hosting pod "kube-proxy-rcf49" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.787210  438001 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:35.186336  438001 pod_ready.go:98] node "no-preload-278232" hosting pod "kube-scheduler-no-preload-278232" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:35.186365  438001 pod_ready.go:82] duration metric: took 399.147858ms for pod "kube-scheduler-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	E0819 19:13:35.186377  438001 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-278232" hosting pod "kube-scheduler-no-preload-278232" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:35.186386  438001 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:35.586266  438001 pod_ready.go:98] node "no-preload-278232" hosting pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:35.586292  438001 pod_ready.go:82] duration metric: took 399.895038ms for pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace to be "Ready" ...
	E0819 19:13:35.586301  438001 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-278232" hosting pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:35.586307  438001 pod_ready.go:39] duration metric: took 1.281785432s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
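Each of the pod_ready waits above comes down to reading the pod's Ready condition (and bailing out early when the hosting node itself is not Ready, which is why every pod is skipped here). A minimal client-go sketch of the per-pod check; the kubeconfig path and pod name are taken from this log for illustration, and minikube's own helper does more bookkeeping around node state.

```go
// podIsReady reports whether the named pod has its Ready condition set to True.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podIsReady(clientset kubernetes.Interface, namespace, name string) (bool, error) {
	pod, err := clientset.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ready, err := podIsReady(clientset, "kube-system", "etcd-no-preload-278232")
	fmt.Println(ready, err)
}
```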
	I0819 19:13:35.586326  438001 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 19:13:35.598523  438001 ops.go:34] apiserver oom_adj: -16
	I0819 19:13:35.598545  438001 kubeadm.go:597] duration metric: took 8.20995933s to restartPrimaryControlPlane
	I0819 19:13:35.598554  438001 kubeadm.go:394] duration metric: took 8.259514907s to StartCluster
	I0819 19:13:35.598576  438001 settings.go:142] acquiring lock: {Name:mk396fcf49a1d0e69583cf37ff3c819e37118163 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:13:35.598662  438001 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 19:13:35.600424  438001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/kubeconfig: {Name:mk8e7b4e1bb7da665111d2acd83eb48882c66853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:13:35.600672  438001 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.106 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 19:13:35.600768  438001 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 19:13:35.600850  438001 addons.go:69] Setting storage-provisioner=true in profile "no-preload-278232"
	I0819 19:13:35.600879  438001 addons.go:69] Setting metrics-server=true in profile "no-preload-278232"
	I0819 19:13:35.600924  438001 addons.go:234] Setting addon metrics-server=true in "no-preload-278232"
	W0819 19:13:35.600938  438001 addons.go:243] addon metrics-server should already be in state true
	I0819 19:13:35.600884  438001 addons.go:234] Setting addon storage-provisioner=true in "no-preload-278232"
	W0819 19:13:35.600969  438001 addons.go:243] addon storage-provisioner should already be in state true
	I0819 19:13:35.600966  438001 config.go:182] Loaded profile config "no-preload-278232": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:13:35.600976  438001 host.go:66] Checking if "no-preload-278232" exists ...
	I0819 19:13:35.600988  438001 host.go:66] Checking if "no-preload-278232" exists ...
	I0819 19:13:35.601395  438001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:35.601428  438001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:35.601436  438001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:35.601453  438001 addons.go:69] Setting default-storageclass=true in profile "no-preload-278232"
	I0819 19:13:35.601501  438001 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-278232"
	I0819 19:13:35.601463  438001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:35.601898  438001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:35.601948  438001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:35.602507  438001 out.go:177] * Verifying Kubernetes components...
	I0819 19:13:35.604092  438001 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:13:35.617515  438001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34839
	I0819 19:13:35.617538  438001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36157
	I0819 19:13:35.617521  438001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35771
	I0819 19:13:35.618045  438001 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:35.618101  438001 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:35.618163  438001 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:35.618570  438001 main.go:141] libmachine: Using API Version  1
	I0819 19:13:35.618598  438001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:35.618712  438001 main.go:141] libmachine: Using API Version  1
	I0819 19:13:35.618734  438001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:35.618715  438001 main.go:141] libmachine: Using API Version  1
	I0819 19:13:35.618754  438001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:35.618989  438001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:35.619109  438001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:35.619111  438001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:35.619177  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetState
	I0819 19:13:35.619649  438001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:35.619693  438001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:35.619695  438001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:35.619768  438001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:35.641244  438001 addons.go:234] Setting addon default-storageclass=true in "no-preload-278232"
	W0819 19:13:35.641268  438001 addons.go:243] addon default-storageclass should already be in state true
	I0819 19:13:35.641298  438001 host.go:66] Checking if "no-preload-278232" exists ...
	I0819 19:13:35.641558  438001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:35.641610  438001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:35.659392  438001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39373
	I0819 19:13:35.659999  438001 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:35.660432  438001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38477
	I0819 19:13:35.660432  438001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35477
	I0819 19:13:35.660604  438001 main.go:141] libmachine: Using API Version  1
	I0819 19:13:35.660631  438001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:35.661089  438001 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:35.661149  438001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:35.661169  438001 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:35.661641  438001 main.go:141] libmachine: Using API Version  1
	I0819 19:13:35.661661  438001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:35.661757  438001 main.go:141] libmachine: Using API Version  1
	I0819 19:13:35.661772  438001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:35.661792  438001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:35.661826  438001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:35.662039  438001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:35.662142  438001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:35.662222  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetState
	I0819 19:13:35.662375  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetState
	I0819 19:13:35.664221  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:35.664397  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:35.666459  438001 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 19:13:35.666471  438001 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:13:35.667849  438001 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 19:13:35.667864  438001 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 19:13:35.667882  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:35.667944  438001 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:13:35.667959  438001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 19:13:35.667977  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:35.673516  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:35.673544  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:35.673520  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:35.673578  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:35.673593  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:35.673602  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:35.673521  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:35.673615  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:35.673793  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:35.673937  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:35.673986  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:35.674150  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:35.674324  438001 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa Username:docker}
	I0819 19:13:35.674350  438001 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa Username:docker}
	I0819 19:13:35.683691  438001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39783
	I0819 19:13:35.684219  438001 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:35.684806  438001 main.go:141] libmachine: Using API Version  1
	I0819 19:13:35.684831  438001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:35.685251  438001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:35.685515  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetState
	I0819 19:13:35.687268  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:35.687485  438001 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 19:13:35.687503  438001 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 19:13:35.687524  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:35.690504  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:35.691297  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:35.691333  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:35.691356  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:35.691477  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:35.691659  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:35.691814  438001 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa Username:docker}
	I0819 19:13:35.833054  438001 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:13:35.855442  438001 node_ready.go:35] waiting up to 6m0s for node "no-preload-278232" to be "Ready" ...
	I0819 19:13:35.923521  438001 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 19:13:35.923551  438001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 19:13:35.940005  438001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 19:13:35.965657  438001 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 19:13:35.965686  438001 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 19:13:36.002636  438001 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 19:13:36.002665  438001 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 19:13:36.024764  438001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:13:36.058824  438001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 19:13:36.420421  438001 main.go:141] libmachine: Making call to close driver server
	I0819 19:13:36.420452  438001 main.go:141] libmachine: (no-preload-278232) Calling .Close
	I0819 19:13:36.420785  438001 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:13:36.420804  438001 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:13:36.420844  438001 main.go:141] libmachine: (no-preload-278232) DBG | Closing plugin on server side
	I0819 19:13:36.420904  438001 main.go:141] libmachine: Making call to close driver server
	I0819 19:13:36.420918  438001 main.go:141] libmachine: (no-preload-278232) Calling .Close
	I0819 19:13:36.421185  438001 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:13:36.421210  438001 main.go:141] libmachine: (no-preload-278232) DBG | Closing plugin on server side
	I0819 19:13:36.421224  438001 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:13:36.429463  438001 main.go:141] libmachine: Making call to close driver server
	I0819 19:13:36.429481  438001 main.go:141] libmachine: (no-preload-278232) Calling .Close
	I0819 19:13:36.429811  438001 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:13:36.429830  438001 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:13:37.141893  438001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.117083882s)
	I0819 19:13:37.141987  438001 main.go:141] libmachine: Making call to close driver server
	I0819 19:13:37.141999  438001 main.go:141] libmachine: (no-preload-278232) Calling .Close
	I0819 19:13:37.142472  438001 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:13:37.142495  438001 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:13:37.142506  438001 main.go:141] libmachine: Making call to close driver server
	I0819 19:13:37.142515  438001 main.go:141] libmachine: (no-preload-278232) Calling .Close
	I0819 19:13:37.142788  438001 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:13:37.142808  438001 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:13:37.142814  438001 main.go:141] libmachine: (no-preload-278232) DBG | Closing plugin on server side
	I0819 19:13:37.161659  438001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.10278963s)
	I0819 19:13:37.161723  438001 main.go:141] libmachine: Making call to close driver server
	I0819 19:13:37.161739  438001 main.go:141] libmachine: (no-preload-278232) Calling .Close
	I0819 19:13:37.162067  438001 main.go:141] libmachine: (no-preload-278232) DBG | Closing plugin on server side
	I0819 19:13:37.162099  438001 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:13:37.162125  438001 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:13:37.162142  438001 main.go:141] libmachine: Making call to close driver server
	I0819 19:13:37.162154  438001 main.go:141] libmachine: (no-preload-278232) Calling .Close
	I0819 19:13:37.162404  438001 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:13:37.162420  438001 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:13:37.162432  438001 addons.go:475] Verifying addon metrics-server=true in "no-preload-278232"
	I0819 19:13:37.164423  438001 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0819 19:13:33.694203  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:35.694403  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:34.918988  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:36.919564  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:35.722784  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:36.223168  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:36.723041  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:37.222801  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:37.722855  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:38.223296  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:38.722936  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:39.223326  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:39.722883  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:40.223284  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:37.165767  438001 addons.go:510] duration metric: took 1.565026237s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0819 19:13:37.859454  438001 node_ready.go:53] node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:39.859662  438001 node_ready.go:53] node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:38.193207  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:40.694127  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:39.418572  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:41.918302  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:43.918558  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:40.722612  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:41.222700  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:41.723144  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:42.223369  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:42.723209  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:43.222849  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:43.723518  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:44.223585  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:44.722772  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:45.223078  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:41.859965  438001 node_ready.go:53] node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:43.359120  438001 node_ready.go:49] node "no-preload-278232" has status "Ready":"True"
	I0819 19:13:43.359151  438001 node_ready.go:38] duration metric: took 7.503671074s for node "no-preload-278232" to be "Ready" ...
	I0819 19:13:43.359169  438001 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:13:43.365307  438001 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-22lbt" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:43.369626  438001 pod_ready.go:93] pod "coredns-6f6b679f8f-22lbt" in "kube-system" namespace has status "Ready":"True"
	I0819 19:13:43.369646  438001 pod_ready.go:82] duration metric: took 4.316734ms for pod "coredns-6f6b679f8f-22lbt" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:43.369654  438001 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:45.377672  438001 pod_ready.go:103] pod "etcd-no-preload-278232" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:43.193775  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:45.693494  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:45.919705  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:48.418981  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:45.723287  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:46.223666  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:46.722754  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:47.223414  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:47.723567  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:48.222938  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:48.723011  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:49.223076  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:49.723443  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:50.223627  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:47.875409  438001 pod_ready.go:103] pod "etcd-no-preload-278232" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:48.377127  438001 pod_ready.go:93] pod "etcd-no-preload-278232" in "kube-system" namespace has status "Ready":"True"
	I0819 19:13:48.377155  438001 pod_ready.go:82] duration metric: took 5.007493319s for pod "etcd-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.377169  438001 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.381841  438001 pod_ready.go:93] pod "kube-apiserver-no-preload-278232" in "kube-system" namespace has status "Ready":"True"
	I0819 19:13:48.381864  438001 pod_ready.go:82] duration metric: took 4.686309ms for pod "kube-apiserver-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.381877  438001 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.386382  438001 pod_ready.go:93] pod "kube-controller-manager-no-preload-278232" in "kube-system" namespace has status "Ready":"True"
	I0819 19:13:48.386397  438001 pod_ready.go:82] duration metric: took 4.514361ms for pod "kube-controller-manager-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.386405  438001 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rcf49" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.390940  438001 pod_ready.go:93] pod "kube-proxy-rcf49" in "kube-system" namespace has status "Ready":"True"
	I0819 19:13:48.390955  438001 pod_ready.go:82] duration metric: took 4.544499ms for pod "kube-proxy-rcf49" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.390963  438001 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.395159  438001 pod_ready.go:93] pod "kube-scheduler-no-preload-278232" in "kube-system" namespace has status "Ready":"True"
	I0819 19:13:48.395180  438001 pod_ready.go:82] duration metric: took 4.211012ms for pod "kube-scheduler-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.395197  438001 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:50.402109  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:47.693601  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:50.193183  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:50.918811  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:52.919981  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:50.723259  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:51.222697  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:51.723284  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:52.222757  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:52.723414  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:53.223202  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:53.722721  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:54.223578  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:54.723400  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:55.222730  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:52.901901  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:54.903583  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:52.693231  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:54.693934  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:56.695700  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:55.418965  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:57.918885  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:55.723644  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:56.223212  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:56.722729  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:57.223226  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:57.723045  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:58.222901  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:58.722710  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:59.223149  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:59.723186  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:00.222763  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:00.222844  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:00.271266  438716 cri.go:89] found id: ""
	I0819 19:14:00.271296  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.271305  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:00.271312  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:00.271373  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:00.311870  438716 cri.go:89] found id: ""
	I0819 19:14:00.311900  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.311936  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:00.311946  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:00.312011  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:00.350476  438716 cri.go:89] found id: ""
	I0819 19:14:00.350505  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.350514  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:00.350520  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:00.350586  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:00.387404  438716 cri.go:89] found id: ""
	I0819 19:14:00.387438  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.387447  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:00.387457  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:00.387516  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:00.423493  438716 cri.go:89] found id: ""
	I0819 19:14:00.423521  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.423529  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:00.423535  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:00.423596  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:00.458593  438716 cri.go:89] found id: ""
	I0819 19:14:00.458630  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.458642  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:00.458651  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:00.458722  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:00.495645  438716 cri.go:89] found id: ""
	I0819 19:14:00.495695  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.495709  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:00.495717  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:00.495782  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:00.531464  438716 cri.go:89] found id: ""
	I0819 19:14:00.531498  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.531508  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:00.531529  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:00.531543  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:13:57.401329  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:59.402701  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:59.192781  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:01.194411  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:00.419287  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:02.918450  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:00.584029  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:00.584078  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:00.597870  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:00.597908  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:00.746061  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:00.746085  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:00.746098  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:00.818001  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:00.818042  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:03.358509  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:03.371262  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:03.371345  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:03.408201  438716 cri.go:89] found id: ""
	I0819 19:14:03.408231  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.408241  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:03.408248  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:03.408306  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:03.445354  438716 cri.go:89] found id: ""
	I0819 19:14:03.445386  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.445396  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:03.445408  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:03.445470  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:03.481144  438716 cri.go:89] found id: ""
	I0819 19:14:03.481178  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.481188  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:03.481195  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:03.481260  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:03.529069  438716 cri.go:89] found id: ""
	I0819 19:14:03.529109  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.529141  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:03.529148  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:03.529216  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:03.590325  438716 cri.go:89] found id: ""
	I0819 19:14:03.590364  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.590377  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:03.590386  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:03.590456  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:03.634924  438716 cri.go:89] found id: ""
	I0819 19:14:03.634969  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.634981  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:03.634990  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:03.635062  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:03.684133  438716 cri.go:89] found id: ""
	I0819 19:14:03.684164  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.684176  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:03.684184  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:03.684253  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:03.722285  438716 cri.go:89] found id: ""
	I0819 19:14:03.722312  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.722321  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:03.722330  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:03.722372  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:03.735937  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:03.735965  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:03.814906  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:03.814931  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:03.814948  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:03.896323  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:03.896363  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:03.943002  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:03.943037  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:01.901154  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:03.902972  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:05.903388  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:03.694686  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:06.193228  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:04.919332  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:07.419221  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:06.496886  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:06.510719  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:06.510790  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:06.544692  438716 cri.go:89] found id: ""
	I0819 19:14:06.544724  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.544737  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:06.544747  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:06.544818  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:06.578935  438716 cri.go:89] found id: ""
	I0819 19:14:06.578962  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.578971  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:06.578979  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:06.579033  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:06.614488  438716 cri.go:89] found id: ""
	I0819 19:14:06.614516  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.614525  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:06.614532  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:06.614583  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:06.648579  438716 cri.go:89] found id: ""
	I0819 19:14:06.648612  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.648623  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:06.648630  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:06.648685  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:06.685168  438716 cri.go:89] found id: ""
	I0819 19:14:06.685198  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.685208  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:06.685217  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:06.685280  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:06.720391  438716 cri.go:89] found id: ""
	I0819 19:14:06.720424  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.720433  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:06.720440  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:06.720491  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:06.758183  438716 cri.go:89] found id: ""
	I0819 19:14:06.758217  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.758228  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:06.758237  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:06.758307  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:06.800182  438716 cri.go:89] found id: ""
	I0819 19:14:06.800215  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.800224  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:06.800234  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:06.800247  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:06.852735  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:06.852777  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:06.867214  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:06.867249  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:06.938942  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:06.938967  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:06.938980  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:07.023950  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:07.023992  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:09.568889  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:09.588481  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:09.588545  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:09.630790  438716 cri.go:89] found id: ""
	I0819 19:14:09.630825  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.630839  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:09.630848  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:09.630926  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:09.673258  438716 cri.go:89] found id: ""
	I0819 19:14:09.673291  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.673302  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:09.673311  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:09.673374  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:09.709500  438716 cri.go:89] found id: ""
	I0819 19:14:09.709530  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.709541  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:09.709549  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:09.709617  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:09.743110  438716 cri.go:89] found id: ""
	I0819 19:14:09.743139  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.743150  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:09.743164  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:09.743238  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:09.776717  438716 cri.go:89] found id: ""
	I0819 19:14:09.776746  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.776754  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:09.776761  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:09.776820  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:09.811381  438716 cri.go:89] found id: ""
	I0819 19:14:09.811409  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.811417  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:09.811423  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:09.811474  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:09.843699  438716 cri.go:89] found id: ""
	I0819 19:14:09.843730  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.843741  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:09.843750  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:09.843822  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:09.882972  438716 cri.go:89] found id: ""
	I0819 19:14:09.883005  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.883018  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:09.883033  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:09.883050  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:09.973077  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:09.973114  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:10.014505  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:10.014556  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:10.069779  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:10.069819  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:10.084337  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:10.084367  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:10.164870  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:08.402464  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:10.900684  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:08.193980  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:10.194818  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:09.918852  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:12.419687  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:12.665929  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:12.679881  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:12.679960  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:12.718305  438716 cri.go:89] found id: ""
	I0819 19:14:12.718332  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.718341  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:12.718348  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:12.718398  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:12.759084  438716 cri.go:89] found id: ""
	I0819 19:14:12.759112  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.759127  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:12.759135  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:12.759205  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:12.793193  438716 cri.go:89] found id: ""
	I0819 19:14:12.793228  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.793238  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:12.793245  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:12.793299  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:12.828283  438716 cri.go:89] found id: ""
	I0819 19:14:12.828310  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.828322  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:12.828329  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:12.828379  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:12.861971  438716 cri.go:89] found id: ""
	I0819 19:14:12.862004  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.862016  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:12.862025  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:12.862092  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:12.898173  438716 cri.go:89] found id: ""
	I0819 19:14:12.898203  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.898214  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:12.898223  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:12.898287  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:12.940203  438716 cri.go:89] found id: ""
	I0819 19:14:12.940234  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.940246  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:12.940254  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:12.940309  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:12.978092  438716 cri.go:89] found id: ""
	I0819 19:14:12.978123  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.978134  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:12.978147  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:12.978172  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:12.992082  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:12.992117  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:13.073609  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:13.073636  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:13.073649  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:13.153060  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:13.153105  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:13.196535  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:13.196581  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:12.903116  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:15.401183  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:12.693872  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:14.694252  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:17.193116  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:14.919563  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:17.418946  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:15.750298  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:15.763913  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:15.763996  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:15.804515  438716 cri.go:89] found id: ""
	I0819 19:14:15.804542  438716 logs.go:276] 0 containers: []
	W0819 19:14:15.804551  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:15.804558  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:15.804624  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:15.847077  438716 cri.go:89] found id: ""
	I0819 19:14:15.847112  438716 logs.go:276] 0 containers: []
	W0819 19:14:15.847125  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:15.847133  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:15.847200  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:15.882316  438716 cri.go:89] found id: ""
	I0819 19:14:15.882348  438716 logs.go:276] 0 containers: []
	W0819 19:14:15.882358  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:15.882365  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:15.882417  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:15.919084  438716 cri.go:89] found id: ""
	I0819 19:14:15.919114  438716 logs.go:276] 0 containers: []
	W0819 19:14:15.919125  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:15.919132  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:15.919202  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:15.953139  438716 cri.go:89] found id: ""
	I0819 19:14:15.953175  438716 logs.go:276] 0 containers: []
	W0819 19:14:15.953188  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:15.953209  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:15.953276  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:15.993231  438716 cri.go:89] found id: ""
	I0819 19:14:15.993259  438716 logs.go:276] 0 containers: []
	W0819 19:14:15.993268  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:15.993286  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:15.993337  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:16.030382  438716 cri.go:89] found id: ""
	I0819 19:14:16.030412  438716 logs.go:276] 0 containers: []
	W0819 19:14:16.030422  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:16.030428  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:16.030482  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:16.065834  438716 cri.go:89] found id: ""
	I0819 19:14:16.065861  438716 logs.go:276] 0 containers: []
	W0819 19:14:16.065872  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:16.065885  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:16.065901  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:16.117943  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:16.117983  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:16.132010  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:16.132041  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:16.202398  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:16.202416  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:16.202429  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:16.286609  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:16.286653  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:18.830502  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:18.844022  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:18.844107  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:18.880539  438716 cri.go:89] found id: ""
	I0819 19:14:18.880576  438716 logs.go:276] 0 containers: []
	W0819 19:14:18.880588  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:18.880595  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:18.880657  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:18.918426  438716 cri.go:89] found id: ""
	I0819 19:14:18.918454  438716 logs.go:276] 0 containers: []
	W0819 19:14:18.918463  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:18.918470  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:18.918531  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:18.954534  438716 cri.go:89] found id: ""
	I0819 19:14:18.954566  438716 logs.go:276] 0 containers: []
	W0819 19:14:18.954578  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:18.954587  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:18.954651  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:18.993820  438716 cri.go:89] found id: ""
	I0819 19:14:18.993852  438716 logs.go:276] 0 containers: []
	W0819 19:14:18.993864  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:18.993885  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:18.993967  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:19.026947  438716 cri.go:89] found id: ""
	I0819 19:14:19.026982  438716 logs.go:276] 0 containers: []
	W0819 19:14:19.026995  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:19.027005  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:19.027072  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:19.062097  438716 cri.go:89] found id: ""
	I0819 19:14:19.062130  438716 logs.go:276] 0 containers: []
	W0819 19:14:19.062142  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:19.062150  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:19.062207  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:19.099522  438716 cri.go:89] found id: ""
	I0819 19:14:19.099549  438716 logs.go:276] 0 containers: []
	W0819 19:14:19.099559  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:19.099567  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:19.099630  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:19.134766  438716 cri.go:89] found id: ""
	I0819 19:14:19.134803  438716 logs.go:276] 0 containers: []
	W0819 19:14:19.134815  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:19.134850  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:19.134867  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:19.176428  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:19.176458  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:19.231448  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:19.231484  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:19.245631  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:19.245687  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:19.318679  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:19.318703  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:19.318717  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:17.401916  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:19.402628  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:19.195224  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:21.693528  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:19.918727  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:21.918863  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:23.919050  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:21.898430  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:21.913840  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:21.913911  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:21.955682  438716 cri.go:89] found id: ""
	I0819 19:14:21.955720  438716 logs.go:276] 0 containers: []
	W0819 19:14:21.955732  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:21.955743  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:21.955820  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:21.994798  438716 cri.go:89] found id: ""
	I0819 19:14:21.994836  438716 logs.go:276] 0 containers: []
	W0819 19:14:21.994845  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:21.994852  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:21.994904  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:22.029155  438716 cri.go:89] found id: ""
	I0819 19:14:22.029191  438716 logs.go:276] 0 containers: []
	W0819 19:14:22.029202  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:22.029210  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:22.029281  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:22.072489  438716 cri.go:89] found id: ""
	I0819 19:14:22.072534  438716 logs.go:276] 0 containers: []
	W0819 19:14:22.072546  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:22.072559  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:22.072621  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:22.109160  438716 cri.go:89] found id: ""
	I0819 19:14:22.109192  438716 logs.go:276] 0 containers: []
	W0819 19:14:22.109203  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:22.109211  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:22.109281  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:22.146161  438716 cri.go:89] found id: ""
	I0819 19:14:22.146194  438716 logs.go:276] 0 containers: []
	W0819 19:14:22.146206  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:22.146215  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:22.146276  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:22.183005  438716 cri.go:89] found id: ""
	I0819 19:14:22.183033  438716 logs.go:276] 0 containers: []
	W0819 19:14:22.183046  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:22.183054  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:22.183108  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:22.220745  438716 cri.go:89] found id: ""
	I0819 19:14:22.220772  438716 logs.go:276] 0 containers: []
	W0819 19:14:22.220784  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:22.220798  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:22.220817  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:22.297377  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:22.297403  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:22.297416  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:22.373503  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:22.373542  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:22.414922  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:22.414956  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:22.477902  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:22.477944  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:24.993405  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:25.007305  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:25.007379  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:25.041157  438716 cri.go:89] found id: ""
	I0819 19:14:25.041191  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.041203  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:25.041211  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:25.041278  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:25.078572  438716 cri.go:89] found id: ""
	I0819 19:14:25.078605  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.078617  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:25.078625  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:25.078695  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:25.114571  438716 cri.go:89] found id: ""
	I0819 19:14:25.114603  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.114615  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:25.114624  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:25.114690  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:25.154341  438716 cri.go:89] found id: ""
	I0819 19:14:25.154366  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.154375  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:25.154381  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:25.154434  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:25.192592  438716 cri.go:89] found id: ""
	I0819 19:14:25.192620  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.192631  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:25.192640  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:25.192705  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:25.227813  438716 cri.go:89] found id: ""
	I0819 19:14:25.227847  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.227860  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:25.227869  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:25.227933  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:25.264321  438716 cri.go:89] found id: ""
	I0819 19:14:25.264349  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.264357  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:25.264364  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:25.264427  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:25.298562  438716 cri.go:89] found id: ""
	I0819 19:14:25.298596  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.298608  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:25.298621  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:25.298638  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:25.352659  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:25.352695  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:25.366638  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:25.366665  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:25.432964  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:25.432992  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:25.433010  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:25.511487  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:25.511549  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:21.902660  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:24.401454  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:26.402255  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:24.193406  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:26.194758  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:25.919090  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:28.420031  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:28.057003  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:28.070849  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:28.070914  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:28.107817  438716 cri.go:89] found id: ""
	I0819 19:14:28.107852  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.107865  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:28.107875  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:28.107948  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:28.141816  438716 cri.go:89] found id: ""
	I0819 19:14:28.141862  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.141874  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:28.141887  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:28.141958  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:28.179854  438716 cri.go:89] found id: ""
	I0819 19:14:28.179885  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.179893  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:28.179905  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:28.179972  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:28.217335  438716 cri.go:89] found id: ""
	I0819 19:14:28.217364  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.217372  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:28.217380  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:28.217438  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:28.254161  438716 cri.go:89] found id: ""
	I0819 19:14:28.254193  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.254204  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:28.254213  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:28.254276  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:28.288658  438716 cri.go:89] found id: ""
	I0819 19:14:28.288682  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.288691  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:28.288698  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:28.288749  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:28.321957  438716 cri.go:89] found id: ""
	I0819 19:14:28.321987  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.321996  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:28.322004  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:28.322057  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:28.355032  438716 cri.go:89] found id: ""
	I0819 19:14:28.355068  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.355080  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:28.355094  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:28.355111  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:28.406220  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:28.406253  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:28.420877  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:28.420907  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:28.502576  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:28.502598  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:28.502614  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:28.582717  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:28.582769  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:28.904716  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:31.401098  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:28.195001  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:30.693605  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:30.917957  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:32.918239  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:31.121960  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:31.135502  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:31.135568  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:31.170423  438716 cri.go:89] found id: ""
	I0819 19:14:31.170451  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.170461  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:31.170467  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:31.170532  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:31.207328  438716 cri.go:89] found id: ""
	I0819 19:14:31.207356  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.207364  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:31.207370  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:31.207430  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:31.245655  438716 cri.go:89] found id: ""
	I0819 19:14:31.245687  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.245698  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:31.245707  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:31.245773  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:31.282174  438716 cri.go:89] found id: ""
	I0819 19:14:31.282208  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.282221  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:31.282230  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:31.282303  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:31.316779  438716 cri.go:89] found id: ""
	I0819 19:14:31.316810  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.316818  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:31.316826  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:31.316879  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:31.356849  438716 cri.go:89] found id: ""
	I0819 19:14:31.356884  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.356894  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:31.356900  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:31.356963  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:31.395102  438716 cri.go:89] found id: ""
	I0819 19:14:31.395135  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.395143  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:31.395150  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:31.395205  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:31.433018  438716 cri.go:89] found id: ""
	I0819 19:14:31.433045  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.433076  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:31.433091  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:31.433108  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:31.446294  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:31.446319  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:31.518158  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:31.518180  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:31.518196  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:31.600568  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:31.600611  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:31.642356  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:31.642386  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:34.195665  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:34.210300  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:34.210370  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:34.248715  438716 cri.go:89] found id: ""
	I0819 19:14:34.248753  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.248767  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:34.248775  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:34.248849  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:34.285305  438716 cri.go:89] found id: ""
	I0819 19:14:34.285334  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.285347  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:34.285355  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:34.285438  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:34.326114  438716 cri.go:89] found id: ""
	I0819 19:14:34.326148  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.326160  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:34.326168  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:34.326235  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:34.360587  438716 cri.go:89] found id: ""
	I0819 19:14:34.360616  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.360628  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:34.360638  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:34.360715  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:34.397452  438716 cri.go:89] found id: ""
	I0819 19:14:34.397483  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.397491  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:34.397498  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:34.397556  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:34.433651  438716 cri.go:89] found id: ""
	I0819 19:14:34.433683  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.433694  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:34.433702  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:34.433771  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:34.468758  438716 cri.go:89] found id: ""
	I0819 19:14:34.468787  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.468796  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:34.468802  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:34.468856  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:34.505787  438716 cri.go:89] found id: ""
	I0819 19:14:34.505816  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.505828  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:34.505842  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:34.505859  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:34.519430  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:34.519463  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:34.592785  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:34.592810  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:34.592827  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:34.671215  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:34.671254  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:34.711248  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:34.711277  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:33.403429  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:35.901124  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:33.194319  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:35.694280  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:34.918372  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:37.418982  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:37.265131  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:37.279035  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:37.279127  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:37.325556  438716 cri.go:89] found id: ""
	I0819 19:14:37.325589  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.325601  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:37.325610  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:37.325676  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:37.360514  438716 cri.go:89] found id: ""
	I0819 19:14:37.360541  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.360553  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:37.360561  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:37.360616  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:37.394428  438716 cri.go:89] found id: ""
	I0819 19:14:37.394456  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.394465  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:37.394472  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:37.394531  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:37.430221  438716 cri.go:89] found id: ""
	I0819 19:14:37.430249  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.430257  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:37.430264  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:37.430324  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:37.466598  438716 cri.go:89] found id: ""
	I0819 19:14:37.466630  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.466641  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:37.466649  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:37.466719  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:37.510455  438716 cri.go:89] found id: ""
	I0819 19:14:37.510484  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.510492  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:37.510499  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:37.510563  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:37.546122  438716 cri.go:89] found id: ""
	I0819 19:14:37.546157  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.546169  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:37.546178  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:37.546247  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:37.579425  438716 cri.go:89] found id: ""
	I0819 19:14:37.579452  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.579463  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:37.579475  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:37.579491  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:37.592673  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:37.592704  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:37.674026  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:37.674048  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:37.674065  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:37.752206  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:37.752244  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:37.791281  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:37.791321  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:40.345520  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:40.358771  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:40.358835  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:40.394515  438716 cri.go:89] found id: ""
	I0819 19:14:40.394549  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.394565  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:40.394575  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:40.394637  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:40.430971  438716 cri.go:89] found id: ""
	I0819 19:14:40.431007  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.431018  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:40.431027  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:40.431094  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:40.471417  438716 cri.go:89] found id: ""
	I0819 19:14:40.471443  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.471452  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:40.471458  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:40.471511  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:40.508641  438716 cri.go:89] found id: ""
	I0819 19:14:40.508670  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.508678  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:40.508684  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:40.508749  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:37.903083  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:40.402562  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:37.695031  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:40.193724  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:39.921480  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:42.420201  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:40.542418  438716 cri.go:89] found id: ""
	I0819 19:14:40.542456  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.542465  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:40.542472  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:40.542533  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:40.577367  438716 cri.go:89] found id: ""
	I0819 19:14:40.577399  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.577408  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:40.577414  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:40.577476  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:40.611111  438716 cri.go:89] found id: ""
	I0819 19:14:40.611138  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.611147  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:40.611155  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:40.611222  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:40.650769  438716 cri.go:89] found id: ""
	I0819 19:14:40.650797  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.650805  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:40.650814  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:40.650827  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:40.688085  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:40.688111  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:40.740187  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:40.740225  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:40.754774  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:40.754803  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:40.828689  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:40.828712  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:40.828728  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:43.419171  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:43.432127  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:43.432201  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:43.468751  438716 cri.go:89] found id: ""
	I0819 19:14:43.468778  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.468787  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:43.468803  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:43.468870  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:43.503290  438716 cri.go:89] found id: ""
	I0819 19:14:43.503319  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.503328  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:43.503334  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:43.503390  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:43.536382  438716 cri.go:89] found id: ""
	I0819 19:14:43.536416  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.536435  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:43.536443  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:43.536494  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:43.571570  438716 cri.go:89] found id: ""
	I0819 19:14:43.571602  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.571611  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:43.571617  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:43.571682  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:43.610421  438716 cri.go:89] found id: ""
	I0819 19:14:43.610455  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.610465  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:43.610473  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:43.610524  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:43.647173  438716 cri.go:89] found id: ""
	I0819 19:14:43.647200  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.647209  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:43.647215  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:43.647266  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:43.684493  438716 cri.go:89] found id: ""
	I0819 19:14:43.684525  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.684535  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:43.684541  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:43.684609  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:43.718781  438716 cri.go:89] found id: ""
	I0819 19:14:43.718811  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.718822  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:43.718834  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:43.718858  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:43.732546  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:43.732578  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:43.819640  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:43.819665  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:43.819700  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:43.900246  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:43.900286  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:43.941751  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:43.941783  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:42.901387  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:44.901876  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:42.693950  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:45.193132  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:44.918631  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:47.417977  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:46.498232  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:46.511167  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:46.511237  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:46.545493  438716 cri.go:89] found id: ""
	I0819 19:14:46.545528  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.545541  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:46.545549  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:46.545607  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:46.580599  438716 cri.go:89] found id: ""
	I0819 19:14:46.580626  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.580634  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:46.580640  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:46.580760  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:46.614515  438716 cri.go:89] found id: ""
	I0819 19:14:46.614551  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.614561  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:46.614570  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:46.614637  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:46.647767  438716 cri.go:89] found id: ""
	I0819 19:14:46.647803  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.647816  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:46.647825  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:46.647893  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:46.681660  438716 cri.go:89] found id: ""
	I0819 19:14:46.681695  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.681707  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:46.681717  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:46.681788  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:46.718828  438716 cri.go:89] found id: ""
	I0819 19:14:46.718858  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.718868  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:46.718875  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:46.718929  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:46.760524  438716 cri.go:89] found id: ""
	I0819 19:14:46.760553  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.760561  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:46.760569  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:46.760634  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:46.799014  438716 cri.go:89] found id: ""
	I0819 19:14:46.799042  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.799054  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:46.799067  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:46.799135  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:46.850769  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:46.850812  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:46.865647  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:46.865698  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:46.942197  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:46.942228  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:46.942244  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:47.019295  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:47.019337  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:49.562713  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:49.575406  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:49.575484  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:49.610067  438716 cri.go:89] found id: ""
	I0819 19:14:49.610105  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.610115  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:49.610121  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:49.610182  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:49.646164  438716 cri.go:89] found id: ""
	I0819 19:14:49.646205  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.646230  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:49.646238  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:49.646317  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:49.680268  438716 cri.go:89] found id: ""
	I0819 19:14:49.680303  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.680314  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:49.680322  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:49.680387  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:49.714952  438716 cri.go:89] found id: ""
	I0819 19:14:49.714981  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.714992  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:49.715001  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:49.715067  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:49.749483  438716 cri.go:89] found id: ""
	I0819 19:14:49.749516  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.749528  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:49.749537  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:49.749616  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:49.794506  438716 cri.go:89] found id: ""
	I0819 19:14:49.794538  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.794550  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:49.794558  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:49.794628  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:49.847284  438716 cri.go:89] found id: ""
	I0819 19:14:49.847313  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.847324  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:49.847334  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:49.847398  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:49.903800  438716 cri.go:89] found id: ""
	I0819 19:14:49.903829  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.903839  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:49.903850  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:49.903867  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:49.972836  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:49.972866  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:49.972885  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:50.049939  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:50.049976  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:50.086514  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:50.086550  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:50.140681  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:50.140718  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:46.903667  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:49.402220  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:51.402281  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:47.693723  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:49.694755  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:52.193220  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:49.919931  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:52.419880  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:52.656573  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:52.670043  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:52.670124  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:52.704514  438716 cri.go:89] found id: ""
	I0819 19:14:52.704541  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.704551  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:52.704558  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:52.704621  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:52.738329  438716 cri.go:89] found id: ""
	I0819 19:14:52.738357  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.738365  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:52.738371  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:52.738423  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:52.774886  438716 cri.go:89] found id: ""
	I0819 19:14:52.774917  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.774926  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:52.774933  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:52.774986  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:52.810262  438716 cri.go:89] found id: ""
	I0819 19:14:52.810288  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.810296  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:52.810303  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:52.810363  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:52.848429  438716 cri.go:89] found id: ""
	I0819 19:14:52.848455  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.848463  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:52.848474  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:52.848539  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:52.886135  438716 cri.go:89] found id: ""
	I0819 19:14:52.886163  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.886179  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:52.886185  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:52.886241  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:52.923288  438716 cri.go:89] found id: ""
	I0819 19:14:52.923314  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.923325  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:52.923333  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:52.923397  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:52.957273  438716 cri.go:89] found id: ""
	I0819 19:14:52.957303  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.957315  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:52.957328  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:52.957345  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:52.970687  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:52.970714  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:53.045081  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:53.045108  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:53.045125  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:53.122233  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:53.122279  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:53.161525  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:53.161554  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:53.901584  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:55.902739  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:54.194220  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:56.197070  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:54.917358  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:56.918562  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:58.919041  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:55.714177  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:55.733726  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:55.733809  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:55.781435  438716 cri.go:89] found id: ""
	I0819 19:14:55.781472  438716 logs.go:276] 0 containers: []
	W0819 19:14:55.781485  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:55.781493  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:55.781560  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:55.846316  438716 cri.go:89] found id: ""
	I0819 19:14:55.846351  438716 logs.go:276] 0 containers: []
	W0819 19:14:55.846362  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:55.846370  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:55.846439  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:55.881587  438716 cri.go:89] found id: ""
	I0819 19:14:55.881623  438716 logs.go:276] 0 containers: []
	W0819 19:14:55.881635  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:55.881644  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:55.881719  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:55.919332  438716 cri.go:89] found id: ""
	I0819 19:14:55.919374  438716 logs.go:276] 0 containers: []
	W0819 19:14:55.919382  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:55.919389  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:55.919441  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:55.954704  438716 cri.go:89] found id: ""
	I0819 19:14:55.954739  438716 logs.go:276] 0 containers: []
	W0819 19:14:55.954752  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:55.954761  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:55.954836  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:55.989289  438716 cri.go:89] found id: ""
	I0819 19:14:55.989321  438716 logs.go:276] 0 containers: []
	W0819 19:14:55.989332  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:55.989340  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:55.989406  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:56.025771  438716 cri.go:89] found id: ""
	I0819 19:14:56.025800  438716 logs.go:276] 0 containers: []
	W0819 19:14:56.025809  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:56.025816  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:56.025883  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:56.065631  438716 cri.go:89] found id: ""
	I0819 19:14:56.065673  438716 logs.go:276] 0 containers: []
	W0819 19:14:56.065686  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:56.065699  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:56.065722  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:56.119482  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:56.119523  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:56.133885  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:56.133915  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:56.207012  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:56.207033  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:56.207045  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:56.288158  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:56.288195  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:58.829677  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:58.844085  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:58.844158  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:58.880900  438716 cri.go:89] found id: ""
	I0819 19:14:58.880934  438716 logs.go:276] 0 containers: []
	W0819 19:14:58.880945  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:58.880951  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:58.881016  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:58.918833  438716 cri.go:89] found id: ""
	I0819 19:14:58.918862  438716 logs.go:276] 0 containers: []
	W0819 19:14:58.918872  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:58.918881  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:58.918939  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:58.956577  438716 cri.go:89] found id: ""
	I0819 19:14:58.956612  438716 logs.go:276] 0 containers: []
	W0819 19:14:58.956623  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:58.956634  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:58.956705  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:58.993884  438716 cri.go:89] found id: ""
	I0819 19:14:58.993914  438716 logs.go:276] 0 containers: []
	W0819 19:14:58.993923  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:58.993930  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:58.993988  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:59.031366  438716 cri.go:89] found id: ""
	I0819 19:14:59.031389  438716 logs.go:276] 0 containers: []
	W0819 19:14:59.031398  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:59.031405  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:59.031464  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:59.072014  438716 cri.go:89] found id: ""
	I0819 19:14:59.072047  438716 logs.go:276] 0 containers: []
	W0819 19:14:59.072058  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:59.072065  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:59.072129  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:59.108713  438716 cri.go:89] found id: ""
	I0819 19:14:59.108744  438716 logs.go:276] 0 containers: []
	W0819 19:14:59.108756  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:59.108765  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:59.108866  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:59.147599  438716 cri.go:89] found id: ""
	I0819 19:14:59.147634  438716 logs.go:276] 0 containers: []
	W0819 19:14:59.147647  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:59.147659  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:59.147695  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:59.224745  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:59.224781  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:59.264586  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:59.264616  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:59.317065  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:59.317104  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:59.331230  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:59.331264  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:59.398370  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:58.401471  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:00.402623  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:58.694096  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:01.193262  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:01.418063  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:03.418302  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:01.899123  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:01.912743  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:01.912824  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:01.949717  438716 cri.go:89] found id: ""
	I0819 19:15:01.949748  438716 logs.go:276] 0 containers: []
	W0819 19:15:01.949756  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:01.949763  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:01.949819  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:01.992776  438716 cri.go:89] found id: ""
	I0819 19:15:01.992802  438716 logs.go:276] 0 containers: []
	W0819 19:15:01.992812  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:01.992819  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:01.992884  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:02.030551  438716 cri.go:89] found id: ""
	I0819 19:15:02.030579  438716 logs.go:276] 0 containers: []
	W0819 19:15:02.030592  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:02.030600  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:02.030672  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:02.069927  438716 cri.go:89] found id: ""
	I0819 19:15:02.069955  438716 logs.go:276] 0 containers: []
	W0819 19:15:02.069964  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:02.069971  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:02.070031  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:02.106584  438716 cri.go:89] found id: ""
	I0819 19:15:02.106609  438716 logs.go:276] 0 containers: []
	W0819 19:15:02.106619  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:02.106629  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:02.106695  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:02.145007  438716 cri.go:89] found id: ""
	I0819 19:15:02.145035  438716 logs.go:276] 0 containers: []
	W0819 19:15:02.145044  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:02.145051  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:02.145113  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:02.180693  438716 cri.go:89] found id: ""
	I0819 19:15:02.180730  438716 logs.go:276] 0 containers: []
	W0819 19:15:02.180741  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:02.180748  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:02.180800  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:02.215563  438716 cri.go:89] found id: ""
	I0819 19:15:02.215597  438716 logs.go:276] 0 containers: []
	W0819 19:15:02.215609  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:02.215623  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:02.215641  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:02.285658  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:02.285692  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:02.285711  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:02.363620  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:02.363660  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:02.414240  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:02.414274  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:02.467336  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:02.467380  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:04.981935  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:04.995537  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:04.995611  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:05.032700  438716 cri.go:89] found id: ""
	I0819 19:15:05.032735  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.032748  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:05.032756  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:05.032827  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:05.069132  438716 cri.go:89] found id: ""
	I0819 19:15:05.069162  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.069173  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:05.069181  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:05.069247  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:05.105320  438716 cri.go:89] found id: ""
	I0819 19:15:05.105346  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.105355  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:05.105361  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:05.105421  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:05.142311  438716 cri.go:89] found id: ""
	I0819 19:15:05.142343  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.142354  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:05.142362  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:05.142412  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:05.177398  438716 cri.go:89] found id: ""
	I0819 19:15:05.177426  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.177437  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:05.177450  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:05.177506  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:05.212749  438716 cri.go:89] found id: ""
	I0819 19:15:05.212780  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.212789  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:05.212796  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:05.212854  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:05.246325  438716 cri.go:89] found id: ""
	I0819 19:15:05.246356  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.246364  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:05.246371  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:05.246420  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:05.287429  438716 cri.go:89] found id: ""
	I0819 19:15:05.287456  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.287466  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:05.287476  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:05.287489  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:05.338742  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:05.338787  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:05.352948  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:05.352978  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:05.421478  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:05.421502  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:05.421529  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:05.497772  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:05.497809  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:02.902202  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:05.403518  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:03.193491  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:05.194340  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:05.419361  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:07.918522  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:08.040403  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:08.053761  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:08.053827  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:08.087047  438716 cri.go:89] found id: ""
	I0819 19:15:08.087073  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.087082  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:08.087089  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:08.087140  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:08.122012  438716 cri.go:89] found id: ""
	I0819 19:15:08.122048  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.122059  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:08.122068  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:08.122134  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:08.155319  438716 cri.go:89] found id: ""
	I0819 19:15:08.155349  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.155360  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:08.155368  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:08.155447  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:08.196003  438716 cri.go:89] found id: ""
	I0819 19:15:08.196027  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.196035  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:08.196041  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:08.196091  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:08.230798  438716 cri.go:89] found id: ""
	I0819 19:15:08.230826  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.230836  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:08.230845  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:08.230910  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:08.267522  438716 cri.go:89] found id: ""
	I0819 19:15:08.267554  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.267562  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:08.267569  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:08.267621  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:08.304775  438716 cri.go:89] found id: ""
	I0819 19:15:08.304801  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.304809  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:08.304815  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:08.304866  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:08.344694  438716 cri.go:89] found id: ""
	I0819 19:15:08.344720  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.344734  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:08.344744  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:08.344757  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:08.383581  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:08.383619  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:08.433868  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:08.433905  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:08.447627  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:08.447657  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:08.518846  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:08.518869  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:08.518887  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:07.901746  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:09.902647  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:07.693351  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:10.193893  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:12.194400  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:09.919436  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:12.418215  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:11.104449  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:11.118149  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:11.118228  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:11.157917  438716 cri.go:89] found id: ""
	I0819 19:15:11.157951  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.157963  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:11.157971  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:11.158040  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:11.196685  438716 cri.go:89] found id: ""
	I0819 19:15:11.196711  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.196721  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:11.196729  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:11.196788  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:11.231089  438716 cri.go:89] found id: ""
	I0819 19:15:11.231124  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.231135  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:11.231144  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:11.231223  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:11.267001  438716 cri.go:89] found id: ""
	I0819 19:15:11.267032  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.267041  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:11.267048  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:11.267113  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:11.302178  438716 cri.go:89] found id: ""
	I0819 19:15:11.302210  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.302223  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:11.302232  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:11.302292  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:11.336335  438716 cri.go:89] found id: ""
	I0819 19:15:11.336368  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.336442  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:11.336458  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:11.336525  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:11.370891  438716 cri.go:89] found id: ""
	I0819 19:15:11.370926  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.370937  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:11.370945  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:11.371007  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:11.407439  438716 cri.go:89] found id: ""
	I0819 19:15:11.407466  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.407473  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:11.407482  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:11.407497  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:11.458692  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:11.458735  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:11.473104  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:11.473133  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:11.542004  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:11.542031  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:11.542050  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:11.619972  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:11.620014  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:14.159220  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:14.173135  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:14.173204  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:14.210347  438716 cri.go:89] found id: ""
	I0819 19:15:14.210377  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.210389  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:14.210398  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:14.210468  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:14.247143  438716 cri.go:89] found id: ""
	I0819 19:15:14.247169  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.247180  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:14.247187  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:14.247260  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:14.284949  438716 cri.go:89] found id: ""
	I0819 19:15:14.284981  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.284995  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:14.285003  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:14.285071  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:14.326801  438716 cri.go:89] found id: ""
	I0819 19:15:14.326826  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.326834  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:14.326842  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:14.326903  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:14.362730  438716 cri.go:89] found id: ""
	I0819 19:15:14.362764  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.362775  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:14.362783  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:14.362852  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:14.403406  438716 cri.go:89] found id: ""
	I0819 19:15:14.403437  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.403448  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:14.403456  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:14.403514  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:14.440641  438716 cri.go:89] found id: ""
	I0819 19:15:14.440670  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.440678  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:14.440685  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:14.440737  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:14.479477  438716 cri.go:89] found id: ""
	I0819 19:15:14.479511  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.479521  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:14.479530  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:14.479544  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:14.530573  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:14.530620  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:14.545329  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:14.545368  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:14.619632  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:14.619652  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:14.619680  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:14.694923  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:14.694956  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:12.401350  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:14.402845  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:14.693534  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:16.693737  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:14.420872  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:16.918227  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:18.919244  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:17.237830  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:17.250579  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:17.250645  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:17.284706  438716 cri.go:89] found id: ""
	I0819 19:15:17.284738  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.284750  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:17.284759  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:17.284832  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:17.320313  438716 cri.go:89] found id: ""
	I0819 19:15:17.320342  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.320350  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:17.320356  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:17.320419  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:17.355974  438716 cri.go:89] found id: ""
	I0819 19:15:17.356008  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.356018  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:17.356027  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:17.356093  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:17.390759  438716 cri.go:89] found id: ""
	I0819 19:15:17.390786  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.390795  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:17.390803  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:17.390861  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:17.431951  438716 cri.go:89] found id: ""
	I0819 19:15:17.431982  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.431993  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:17.432001  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:17.432068  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:17.467183  438716 cri.go:89] found id: ""
	I0819 19:15:17.467215  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.467227  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:17.467236  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:17.467306  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:17.502678  438716 cri.go:89] found id: ""
	I0819 19:15:17.502709  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.502721  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:17.502730  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:17.502801  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:17.537597  438716 cri.go:89] found id: ""
	I0819 19:15:17.537629  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.537643  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:17.537656  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:17.537672  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:17.620076  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:17.620117  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:17.659979  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:17.660009  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:17.710963  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:17.711006  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:17.725556  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:17.725590  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:17.796176  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:20.297246  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:20.311395  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:20.311476  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:20.352279  438716 cri.go:89] found id: ""
	I0819 19:15:20.352317  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.352328  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:20.352338  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:20.352401  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:20.390335  438716 cri.go:89] found id: ""
	I0819 19:15:20.390368  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.390377  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:20.390384  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:20.390450  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:20.430264  438716 cri.go:89] found id: ""
	I0819 19:15:20.430300  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.430312  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:20.430320  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:20.430386  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:20.469670  438716 cri.go:89] found id: ""
	I0819 19:15:20.469703  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.469715  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:20.469723  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:20.469790  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:20.503233  438716 cri.go:89] found id: ""
	I0819 19:15:20.503263  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.503274  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:20.503283  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:20.503371  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:16.902246  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:19.402407  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:18.693921  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:21.193124  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:21.418463  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:23.418730  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:20.538180  438716 cri.go:89] found id: ""
	I0819 19:15:20.538211  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.538223  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:20.538231  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:20.538302  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:20.573301  438716 cri.go:89] found id: ""
	I0819 19:15:20.573329  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.573337  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:20.573352  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:20.573411  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:20.606962  438716 cri.go:89] found id: ""
	I0819 19:15:20.606995  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.607007  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:20.607019  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:20.607035  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:20.658392  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:20.658428  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:20.672063  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:20.672092  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:20.747987  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:20.748010  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:20.748035  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:20.829367  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:20.829415  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:23.378885  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:23.393711  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:23.393778  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:23.430629  438716 cri.go:89] found id: ""
	I0819 19:15:23.430655  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.430665  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:23.430675  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:23.430727  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:23.467509  438716 cri.go:89] found id: ""
	I0819 19:15:23.467541  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.467552  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:23.467560  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:23.467634  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:23.505313  438716 cri.go:89] found id: ""
	I0819 19:15:23.505351  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.505359  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:23.505366  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:23.505416  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:23.543393  438716 cri.go:89] found id: ""
	I0819 19:15:23.543428  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.543441  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:23.543450  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:23.543514  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:23.578265  438716 cri.go:89] found id: ""
	I0819 19:15:23.578293  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.578301  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:23.578308  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:23.578376  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:23.613951  438716 cri.go:89] found id: ""
	I0819 19:15:23.613981  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.613989  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:23.613996  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:23.614061  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:23.647387  438716 cri.go:89] found id: ""
	I0819 19:15:23.647418  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.647426  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:23.647433  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:23.647501  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:23.682482  438716 cri.go:89] found id: ""
	I0819 19:15:23.682510  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.682519  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:23.682530  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:23.682547  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:23.696601  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:23.696629  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:23.766762  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:23.766788  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:23.766804  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:23.850947  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:23.850988  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:23.891113  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:23.891146  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:21.902926  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:24.401874  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:23.193192  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:25.193347  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:25.919555  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:28.419920  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:26.444086  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:26.457774  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:26.457844  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:26.494525  438716 cri.go:89] found id: ""
	I0819 19:15:26.494552  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.494560  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:26.494567  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:26.494618  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:26.535317  438716 cri.go:89] found id: ""
	I0819 19:15:26.535348  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.535359  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:26.535368  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:26.535437  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:26.570853  438716 cri.go:89] found id: ""
	I0819 19:15:26.570886  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.570896  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:26.570920  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:26.570987  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:26.610739  438716 cri.go:89] found id: ""
	I0819 19:15:26.610773  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.610785  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:26.610794  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:26.610885  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:26.651274  438716 cri.go:89] found id: ""
	I0819 19:15:26.651303  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.651311  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:26.651318  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:26.651367  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:26.689963  438716 cri.go:89] found id: ""
	I0819 19:15:26.689993  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.690005  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:26.690013  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:26.690083  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:26.729433  438716 cri.go:89] found id: ""
	I0819 19:15:26.729465  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.729475  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:26.729483  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:26.729548  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:26.768386  438716 cri.go:89] found id: ""
	I0819 19:15:26.768418  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.768427  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:26.768436  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:26.768449  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:26.821526  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:26.821564  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:26.835714  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:26.835763  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:26.907981  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:26.908007  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:26.908023  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:26.991969  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:26.992008  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:29.529743  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:29.544812  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:29.544883  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:29.581455  438716 cri.go:89] found id: ""
	I0819 19:15:29.581486  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.581496  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:29.581503  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:29.581559  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:29.634542  438716 cri.go:89] found id: ""
	I0819 19:15:29.634576  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.634587  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:29.634596  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:29.634663  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:29.670388  438716 cri.go:89] found id: ""
	I0819 19:15:29.670422  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.670439  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:29.670449  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:29.670511  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:29.712267  438716 cri.go:89] found id: ""
	I0819 19:15:29.712293  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.712304  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:29.712313  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:29.712376  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:29.752392  438716 cri.go:89] found id: ""
	I0819 19:15:29.752423  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.752432  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:29.752438  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:29.752500  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:29.791734  438716 cri.go:89] found id: ""
	I0819 19:15:29.791763  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.791772  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:29.791778  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:29.791830  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:29.832882  438716 cri.go:89] found id: ""
	I0819 19:15:29.832910  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.832921  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:29.832929  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:29.832986  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:29.872035  438716 cri.go:89] found id: ""
	I0819 19:15:29.872068  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.872076  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:29.872086  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:29.872098  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:29.926551  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:29.926588  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:29.940500  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:29.940537  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:30.010327  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:30.010348  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:30.010368  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:30.090864  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:30.090910  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:26.902881  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:29.401449  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:27.692753  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:29.693161  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:32.193256  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:30.421066  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:32.918642  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:32.636291  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:32.649264  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:32.649334  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:32.683746  438716 cri.go:89] found id: ""
	I0819 19:15:32.683774  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.683785  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:32.683794  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:32.683867  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:32.723805  438716 cri.go:89] found id: ""
	I0819 19:15:32.723838  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.723850  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:32.723858  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:32.723917  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:32.758119  438716 cri.go:89] found id: ""
	I0819 19:15:32.758148  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.758157  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:32.758164  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:32.758215  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:32.792726  438716 cri.go:89] found id: ""
	I0819 19:15:32.792754  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.792768  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:32.792775  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:32.792823  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:32.829180  438716 cri.go:89] found id: ""
	I0819 19:15:32.829208  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.829217  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:32.829224  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:32.829274  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:32.869045  438716 cri.go:89] found id: ""
	I0819 19:15:32.869081  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.869093  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:32.869102  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:32.869172  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:32.904780  438716 cri.go:89] found id: ""
	I0819 19:15:32.904803  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.904811  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:32.904818  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:32.904870  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:32.940846  438716 cri.go:89] found id: ""
	I0819 19:15:32.940876  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.940886  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:32.940900  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:32.940924  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:33.008569  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:33.008592  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:33.008606  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:33.092605  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:33.092657  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:33.133016  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:33.133045  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:33.188335  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:33.188376  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:31.901719  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:34.401060  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:36.401983  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:34.193690  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:36.694042  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:34.918948  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:37.418186  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:35.704043  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:35.717647  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:35.717708  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:35.752337  438716 cri.go:89] found id: ""
	I0819 19:15:35.752364  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.752372  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:35.752378  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:35.752431  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:35.787233  438716 cri.go:89] found id: ""
	I0819 19:15:35.787261  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.787269  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:35.787275  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:35.787334  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:35.819641  438716 cri.go:89] found id: ""
	I0819 19:15:35.819667  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.819697  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:35.819705  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:35.819775  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:35.856133  438716 cri.go:89] found id: ""
	I0819 19:15:35.856160  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.856169  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:35.856176  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:35.856240  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:35.889390  438716 cri.go:89] found id: ""
	I0819 19:15:35.889422  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.889432  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:35.889438  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:35.889501  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:35.927477  438716 cri.go:89] found id: ""
	I0819 19:15:35.927519  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.927531  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:35.927539  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:35.927600  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:35.961787  438716 cri.go:89] found id: ""
	I0819 19:15:35.961825  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.961837  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:35.961845  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:35.961912  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:35.998350  438716 cri.go:89] found id: ""
	I0819 19:15:35.998384  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.998396  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:35.998407  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:35.998419  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:36.054352  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:36.054394  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:36.078278  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:36.078311  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:36.166388  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:36.166416  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:36.166433  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:36.247222  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:36.247269  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:38.786510  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:38.800306  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:38.800364  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:38.834555  438716 cri.go:89] found id: ""
	I0819 19:15:38.834583  438716 logs.go:276] 0 containers: []
	W0819 19:15:38.834591  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:38.834598  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:38.834648  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:38.869078  438716 cri.go:89] found id: ""
	I0819 19:15:38.869105  438716 logs.go:276] 0 containers: []
	W0819 19:15:38.869114  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:38.869120  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:38.869174  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:38.903702  438716 cri.go:89] found id: ""
	I0819 19:15:38.903728  438716 logs.go:276] 0 containers: []
	W0819 19:15:38.903736  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:38.903743  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:38.903795  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:38.938326  438716 cri.go:89] found id: ""
	I0819 19:15:38.938352  438716 logs.go:276] 0 containers: []
	W0819 19:15:38.938360  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:38.938367  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:38.938422  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:38.976032  438716 cri.go:89] found id: ""
	I0819 19:15:38.976063  438716 logs.go:276] 0 containers: []
	W0819 19:15:38.976075  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:38.976084  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:38.976149  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:39.009957  438716 cri.go:89] found id: ""
	I0819 19:15:39.009991  438716 logs.go:276] 0 containers: []
	W0819 19:15:39.010002  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:39.010011  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:39.010077  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:39.046381  438716 cri.go:89] found id: ""
	I0819 19:15:39.046408  438716 logs.go:276] 0 containers: []
	W0819 19:15:39.046416  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:39.046422  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:39.046474  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:39.083022  438716 cri.go:89] found id: ""
	I0819 19:15:39.083050  438716 logs.go:276] 0 containers: []
	W0819 19:15:39.083058  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:39.083067  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:39.083079  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:39.160731  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:39.160768  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:39.204846  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:39.204879  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:39.259248  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:39.259287  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:39.273764  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:39.273796  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:39.344477  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:38.402275  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:40.901494  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:39.194367  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:41.692933  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:39.419291  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:41.919708  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:43.919984  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:41.845258  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:41.861691  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:41.861754  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:41.908235  438716 cri.go:89] found id: ""
	I0819 19:15:41.908269  438716 logs.go:276] 0 containers: []
	W0819 19:15:41.908281  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:41.908289  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:41.908357  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:41.965631  438716 cri.go:89] found id: ""
	I0819 19:15:41.965657  438716 logs.go:276] 0 containers: []
	W0819 19:15:41.965667  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:41.965673  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:41.965732  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:42.004540  438716 cri.go:89] found id: ""
	I0819 19:15:42.004569  438716 logs.go:276] 0 containers: []
	W0819 19:15:42.004578  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:42.004585  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:42.004650  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:42.042189  438716 cri.go:89] found id: ""
	I0819 19:15:42.042215  438716 logs.go:276] 0 containers: []
	W0819 19:15:42.042224  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:42.042231  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:42.042299  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:42.079313  438716 cri.go:89] found id: ""
	I0819 19:15:42.079349  438716 logs.go:276] 0 containers: []
	W0819 19:15:42.079361  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:42.079370  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:42.079450  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:42.116130  438716 cri.go:89] found id: ""
	I0819 19:15:42.116164  438716 logs.go:276] 0 containers: []
	W0819 19:15:42.116176  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:42.116184  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:42.116253  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:42.154886  438716 cri.go:89] found id: ""
	I0819 19:15:42.154919  438716 logs.go:276] 0 containers: []
	W0819 19:15:42.154928  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:42.154935  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:42.154987  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:42.191204  438716 cri.go:89] found id: ""
	I0819 19:15:42.191237  438716 logs.go:276] 0 containers: []
	W0819 19:15:42.191248  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:42.191258  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:42.191275  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:42.244395  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:42.244434  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:42.258029  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:42.258066  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:42.323461  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:42.323481  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:42.323498  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:42.401932  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:42.401969  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:44.943615  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:44.958243  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:44.958315  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:44.995181  438716 cri.go:89] found id: ""
	I0819 19:15:44.995217  438716 logs.go:276] 0 containers: []
	W0819 19:15:44.995236  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:44.995244  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:44.995309  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:45.030705  438716 cri.go:89] found id: ""
	I0819 19:15:45.030743  438716 logs.go:276] 0 containers: []
	W0819 19:15:45.030752  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:45.030759  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:45.030814  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:45.068186  438716 cri.go:89] found id: ""
	I0819 19:15:45.068215  438716 logs.go:276] 0 containers: []
	W0819 19:15:45.068224  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:45.068231  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:45.068314  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:45.105415  438716 cri.go:89] found id: ""
	I0819 19:15:45.105443  438716 logs.go:276] 0 containers: []
	W0819 19:15:45.105452  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:45.105458  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:45.105517  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:45.143628  438716 cri.go:89] found id: ""
	I0819 19:15:45.143662  438716 logs.go:276] 0 containers: []
	W0819 19:15:45.143694  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:45.143704  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:45.143771  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:45.184896  438716 cri.go:89] found id: ""
	I0819 19:15:45.184922  438716 logs.go:276] 0 containers: []
	W0819 19:15:45.184930  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:45.184937  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:45.185000  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:45.222599  438716 cri.go:89] found id: ""
	I0819 19:15:45.222631  438716 logs.go:276] 0 containers: []
	W0819 19:15:45.222639  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:45.222645  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:45.222700  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:45.260310  438716 cri.go:89] found id: ""
	I0819 19:15:45.260341  438716 logs.go:276] 0 containers: []
	W0819 19:15:45.260352  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:45.260361  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:45.260379  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:45.273687  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:45.273718  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:45.351367  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:45.351390  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:45.351407  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:45.428751  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:45.428787  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:45.468830  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:45.468869  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:42.902576  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:45.402812  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:43.693205  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:46.192804  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:46.419903  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:48.918620  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:48.023654  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:48.037206  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:48.037294  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:48.071647  438716 cri.go:89] found id: ""
	I0819 19:15:48.071686  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.071695  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:48.071704  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:48.071765  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:48.106542  438716 cri.go:89] found id: ""
	I0819 19:15:48.106575  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.106586  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:48.106596  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:48.106662  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:48.151917  438716 cri.go:89] found id: ""
	I0819 19:15:48.151949  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.151959  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:48.151966  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:48.152022  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:48.190095  438716 cri.go:89] found id: ""
	I0819 19:15:48.190125  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.190137  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:48.190146  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:48.190211  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:48.227193  438716 cri.go:89] found id: ""
	I0819 19:15:48.227228  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.227240  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:48.227248  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:48.227317  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:48.261353  438716 cri.go:89] found id: ""
	I0819 19:15:48.261386  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.261396  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:48.261403  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:48.261455  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:48.295749  438716 cri.go:89] found id: ""
	I0819 19:15:48.295782  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.295794  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:48.295803  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:48.295874  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:48.338350  438716 cri.go:89] found id: ""
	I0819 19:15:48.338383  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.338394  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:48.338404  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:48.338420  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:48.420705  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:48.420749  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:48.464114  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:48.464153  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:48.519461  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:48.519505  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:48.534324  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:48.534357  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:48.603580  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:47.900813  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:49.902363  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:48.194425  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:50.693598  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:51.419909  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:53.918494  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:51.104343  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:51.117552  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:51.117629  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:51.150630  438716 cri.go:89] found id: ""
	I0819 19:15:51.150665  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.150677  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:51.150691  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:51.150765  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:51.184316  438716 cri.go:89] found id: ""
	I0819 19:15:51.184346  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.184356  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:51.184362  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:51.184410  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:51.221252  438716 cri.go:89] found id: ""
	I0819 19:15:51.221277  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.221286  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:51.221292  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:51.221349  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:51.255727  438716 cri.go:89] found id: ""
	I0819 19:15:51.255755  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.255763  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:51.255769  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:51.255823  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:51.290615  438716 cri.go:89] found id: ""
	I0819 19:15:51.290651  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.290660  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:51.290667  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:51.290721  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:51.326895  438716 cri.go:89] found id: ""
	I0819 19:15:51.326922  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.326930  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:51.326937  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:51.326987  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:51.365516  438716 cri.go:89] found id: ""
	I0819 19:15:51.365547  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.365558  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:51.365566  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:51.365632  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:51.399002  438716 cri.go:89] found id: ""
	I0819 19:15:51.399030  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.399038  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:51.399048  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:51.399059  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:51.453481  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:51.453524  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:51.467246  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:51.467277  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:51.548547  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:51.548578  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:51.548595  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:51.635627  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:51.635670  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:54.175003  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:54.190462  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:54.190537  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:54.232140  438716 cri.go:89] found id: ""
	I0819 19:15:54.232168  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.232178  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:54.232186  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:54.232254  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:54.267700  438716 cri.go:89] found id: ""
	I0819 19:15:54.267732  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.267742  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:54.267748  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:54.267807  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:54.306272  438716 cri.go:89] found id: ""
	I0819 19:15:54.306300  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.306308  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:54.306315  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:54.306368  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:54.341503  438716 cri.go:89] found id: ""
	I0819 19:15:54.341536  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.341549  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:54.341556  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:54.341609  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:54.375535  438716 cri.go:89] found id: ""
	I0819 19:15:54.375570  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.375582  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:54.375591  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:54.375661  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:54.409611  438716 cri.go:89] found id: ""
	I0819 19:15:54.409641  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.409653  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:54.409662  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:54.409731  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:54.444318  438716 cri.go:89] found id: ""
	I0819 19:15:54.444346  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.444358  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:54.444366  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:54.444425  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:54.480746  438716 cri.go:89] found id: ""
	I0819 19:15:54.480777  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.480789  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:54.480802  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:54.480817  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:54.534209  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:54.534245  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:54.549557  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:54.549598  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:54.625086  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:54.625111  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:54.625136  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:54.705549  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:54.705589  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
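	(The block above is one iteration of a retry loop: minikube probes for a kube-apiserver process, asks the CRI runtime for each expected control-plane container, finds none, and then gathers kubelet, dmesg, describe-nodes, CRI-O and container-status logs; the describe-nodes step fails every time because nothing is listening on localhost:8443. A minimal sketch of the same probe, runnable by hand inside the guest — the pgrep pattern and crictl flags are copied from the log lines above, while the final curl health check is an added assumption, not something this loop runs:
	
	  # probe for a running apiserver process, as the ssh_runner lines above do
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
	  # ask the CRI runtime for each expected control-plane container
	  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	    echo "== ${name} =="
	    sudo crictl ps -a --quiet --name="${name}"
	  done
	  # assumed extra check: is anything answering on the apiserver port?
	  curl -ksf https://localhost:8443/healthz || echo "localhost:8443 refused"
	)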
	I0819 19:15:52.401150  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:54.402049  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:56.402545  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:52.693826  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:54.694875  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:57.193741  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:56.418166  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:58.418955  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:57.257440  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:57.276724  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:57.276812  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:57.319032  438716 cri.go:89] found id: ""
	I0819 19:15:57.319062  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.319073  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:57.319081  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:57.319163  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:57.357093  438716 cri.go:89] found id: ""
	I0819 19:15:57.357129  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.357140  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:57.357152  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:57.357222  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:57.393978  438716 cri.go:89] found id: ""
	I0819 19:15:57.394013  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.394025  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:57.394033  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:57.394102  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:57.428731  438716 cri.go:89] found id: ""
	I0819 19:15:57.428760  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.428768  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:57.428775  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:57.428824  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:57.467772  438716 cri.go:89] found id: ""
	I0819 19:15:57.467810  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.467822  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:57.467832  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:57.467904  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:57.502398  438716 cri.go:89] found id: ""
	I0819 19:15:57.502434  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.502444  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:57.502450  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:57.502503  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:57.536729  438716 cri.go:89] found id: ""
	I0819 19:15:57.536760  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.536771  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:57.536779  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:57.536845  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:57.574738  438716 cri.go:89] found id: ""
	I0819 19:15:57.574762  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.574770  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:57.574780  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:57.574793  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:57.630063  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:57.630113  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:57.643083  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:57.643111  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:57.725081  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:57.725104  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:57.725118  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:57.805065  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:57.805105  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:00.344557  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:00.357940  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:00.358005  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:00.399319  438716 cri.go:89] found id: ""
	I0819 19:16:00.399355  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.399368  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:00.399377  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:00.399446  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:00.444223  438716 cri.go:89] found id: ""
	I0819 19:16:00.444254  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.444264  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:00.444271  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:00.444323  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:00.479903  438716 cri.go:89] found id: ""
	I0819 19:16:00.479932  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.479942  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:00.479948  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:00.480003  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:00.515923  438716 cri.go:89] found id: ""
	I0819 19:16:00.515954  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.515966  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:00.515974  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:00.516043  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:58.901349  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:00.902114  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:59.194660  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:01.693174  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:00.419210  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:02.918814  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:00.551319  438716 cri.go:89] found id: ""
	I0819 19:16:00.551348  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.551360  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:00.551370  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:00.551434  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:00.587847  438716 cri.go:89] found id: ""
	I0819 19:16:00.587882  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.587892  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:00.587901  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:00.587976  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:00.624769  438716 cri.go:89] found id: ""
	I0819 19:16:00.624800  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.624812  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:00.624820  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:00.624894  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:00.659300  438716 cri.go:89] found id: ""
	I0819 19:16:00.659330  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.659342  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:00.659355  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:00.659371  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:00.739073  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:00.739113  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:00.779087  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:00.779116  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:00.831864  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:00.831914  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:00.845832  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:00.845863  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:00.920622  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:03.420751  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:03.434599  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:03.434664  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:03.469288  438716 cri.go:89] found id: ""
	I0819 19:16:03.469326  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.469349  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:03.469372  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:03.469445  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:03.507885  438716 cri.go:89] found id: ""
	I0819 19:16:03.507911  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.507927  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:03.507934  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:03.507987  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:03.543805  438716 cri.go:89] found id: ""
	I0819 19:16:03.543837  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.543847  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:03.543854  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:03.543928  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:03.584060  438716 cri.go:89] found id: ""
	I0819 19:16:03.584093  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.584105  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:03.584114  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:03.584202  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:03.619724  438716 cri.go:89] found id: ""
	I0819 19:16:03.619758  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.619769  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:03.619776  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:03.619854  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:03.657180  438716 cri.go:89] found id: ""
	I0819 19:16:03.657213  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.657225  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:03.657234  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:03.657303  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:03.695099  438716 cri.go:89] found id: ""
	I0819 19:16:03.695125  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.695134  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:03.695139  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:03.695193  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:03.730263  438716 cri.go:89] found id: ""
	I0819 19:16:03.730291  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.730302  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:03.730314  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:03.730331  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:03.780776  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:03.780816  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:03.795381  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:03.795419  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:03.869995  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:03.870016  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:03.870029  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:03.949654  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:03.949691  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:03.402500  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:05.902412  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:03.694220  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:06.193280  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:04.919284  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:07.418061  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
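	(Interleaved with that loop, three other test processes — 438001, 438245 and 438295 — are each polling a metrics-server pod that never reports Ready; the pod_ready lines record the same check on every pass. A hedged equivalent one could run by hand against one of those clusters, with the pod name taken from the log above and the jsonpath query an assumed stand-in for what pod_ready.go actually inspects:
	
	  kubectl -n kube-system get pod metrics-server-6867b74b74-vxwrs \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
	  # or block until the condition flips, with an assumed timeout
	  kubectl -n kube-system wait pod/metrics-server-6867b74b74-vxwrs \
	    --for=condition=Ready --timeout=60s
	)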
	I0819 19:16:06.493589  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:06.506758  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:06.506834  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:06.545325  438716 cri.go:89] found id: ""
	I0819 19:16:06.545357  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.545370  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:06.545378  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:06.545443  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:06.581708  438716 cri.go:89] found id: ""
	I0819 19:16:06.581741  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.581753  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:06.581761  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:06.581828  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:06.626543  438716 cri.go:89] found id: ""
	I0819 19:16:06.626588  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.626600  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:06.626609  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:06.626676  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:06.662466  438716 cri.go:89] found id: ""
	I0819 19:16:06.662499  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.662509  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:06.662518  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:06.662585  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:06.701584  438716 cri.go:89] found id: ""
	I0819 19:16:06.701619  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.701628  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:06.701635  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:06.701688  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:06.736245  438716 cri.go:89] found id: ""
	I0819 19:16:06.736280  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.736292  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:06.736300  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:06.736392  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:06.774411  438716 cri.go:89] found id: ""
	I0819 19:16:06.774439  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.774447  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:06.774454  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:06.774510  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:06.809560  438716 cri.go:89] found id: ""
	I0819 19:16:06.809597  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.809609  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:06.809624  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:06.809648  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:06.884841  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:06.884862  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:06.884878  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:06.971467  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:06.971507  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:07.010737  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:07.010767  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:07.063807  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:07.063846  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:09.578451  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:09.591643  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:09.591737  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:09.625607  438716 cri.go:89] found id: ""
	I0819 19:16:09.625639  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.625650  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:09.625659  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:09.625727  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:09.669145  438716 cri.go:89] found id: ""
	I0819 19:16:09.669177  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.669185  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:09.669191  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:09.669254  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:09.707035  438716 cri.go:89] found id: ""
	I0819 19:16:09.707064  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.707073  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:09.707080  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:09.707142  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:09.742089  438716 cri.go:89] found id: ""
	I0819 19:16:09.742116  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.742125  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:09.742132  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:09.742193  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:09.782736  438716 cri.go:89] found id: ""
	I0819 19:16:09.782774  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.782785  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:09.782794  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:09.782860  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:09.818003  438716 cri.go:89] found id: ""
	I0819 19:16:09.818031  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.818040  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:09.818047  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:09.818110  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:09.852716  438716 cri.go:89] found id: ""
	I0819 19:16:09.852748  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.852757  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:09.852764  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:09.852828  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:09.887176  438716 cri.go:89] found id: ""
	I0819 19:16:09.887206  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.887218  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:09.887230  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:09.887247  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:09.901547  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:09.901573  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:09.969153  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:09.969190  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:09.969205  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:10.053777  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:10.053820  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:10.100888  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:10.100916  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:08.401650  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:10.402279  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:08.194305  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:10.693097  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:09.418856  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:11.918836  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:12.655112  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:12.667824  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:12.667897  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:12.702337  438716 cri.go:89] found id: ""
	I0819 19:16:12.702364  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.702373  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:12.702379  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:12.702432  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:12.736628  438716 cri.go:89] found id: ""
	I0819 19:16:12.736655  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.736663  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:12.736669  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:12.736720  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:12.773598  438716 cri.go:89] found id: ""
	I0819 19:16:12.773628  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.773636  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:12.773643  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:12.773695  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:12.806584  438716 cri.go:89] found id: ""
	I0819 19:16:12.806620  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.806632  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:12.806640  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:12.806723  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:12.840535  438716 cri.go:89] found id: ""
	I0819 19:16:12.840561  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.840569  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:12.840575  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:12.840639  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:12.877680  438716 cri.go:89] found id: ""
	I0819 19:16:12.877712  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.877721  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:12.877728  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:12.877779  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:12.912226  438716 cri.go:89] found id: ""
	I0819 19:16:12.912253  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.912264  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:12.912272  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:12.912342  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:12.953463  438716 cri.go:89] found id: ""
	I0819 19:16:12.953493  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.953504  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:12.953524  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:12.953542  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:13.007648  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:13.007691  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:13.022452  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:13.022494  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:13.092411  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:13.092439  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:13.092455  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:13.168711  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:13.168750  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:12.903478  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:15.402551  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:12.693162  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:14.698051  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:17.193988  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:14.417821  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:16.418541  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:18.918478  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:15.711501  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:15.724841  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:15.724921  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:15.760120  438716 cri.go:89] found id: ""
	I0819 19:16:15.760149  438716 logs.go:276] 0 containers: []
	W0819 19:16:15.760158  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:15.760166  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:15.760234  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:15.794959  438716 cri.go:89] found id: ""
	I0819 19:16:15.794988  438716 logs.go:276] 0 containers: []
	W0819 19:16:15.794996  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:15.795002  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:15.795054  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:15.842776  438716 cri.go:89] found id: ""
	I0819 19:16:15.842804  438716 logs.go:276] 0 containers: []
	W0819 19:16:15.842814  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:15.842820  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:15.842874  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:15.882134  438716 cri.go:89] found id: ""
	I0819 19:16:15.882167  438716 logs.go:276] 0 containers: []
	W0819 19:16:15.882178  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:15.882187  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:15.882251  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:15.919296  438716 cri.go:89] found id: ""
	I0819 19:16:15.919325  438716 logs.go:276] 0 containers: []
	W0819 19:16:15.919336  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:15.919345  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:15.919409  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:15.956401  438716 cri.go:89] found id: ""
	I0819 19:16:15.956429  438716 logs.go:276] 0 containers: []
	W0819 19:16:15.956437  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:15.956444  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:15.956507  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:15.994271  438716 cri.go:89] found id: ""
	I0819 19:16:15.994304  438716 logs.go:276] 0 containers: []
	W0819 19:16:15.994314  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:15.994320  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:15.994378  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:16.033685  438716 cri.go:89] found id: ""
	I0819 19:16:16.033714  438716 logs.go:276] 0 containers: []
	W0819 19:16:16.033724  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:16.033736  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:16.033754  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:16.083929  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:16.083964  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:16.107309  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:16.107342  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:16.193657  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:16.193681  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:16.193697  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:16.276974  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:16.277016  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:18.818532  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:18.831586  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:18.831655  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:18.866663  438716 cri.go:89] found id: ""
	I0819 19:16:18.866689  438716 logs.go:276] 0 containers: []
	W0819 19:16:18.866700  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:18.866709  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:18.866769  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:18.900711  438716 cri.go:89] found id: ""
	I0819 19:16:18.900746  438716 logs.go:276] 0 containers: []
	W0819 19:16:18.900757  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:18.900765  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:18.900849  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:18.935156  438716 cri.go:89] found id: ""
	I0819 19:16:18.935179  438716 logs.go:276] 0 containers: []
	W0819 19:16:18.935186  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:18.935193  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:18.935246  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:18.973853  438716 cri.go:89] found id: ""
	I0819 19:16:18.973889  438716 logs.go:276] 0 containers: []
	W0819 19:16:18.973902  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:18.973911  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:18.973978  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:19.014212  438716 cri.go:89] found id: ""
	I0819 19:16:19.014241  438716 logs.go:276] 0 containers: []
	W0819 19:16:19.014250  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:19.014255  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:19.014317  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:19.056089  438716 cri.go:89] found id: ""
	I0819 19:16:19.056125  438716 logs.go:276] 0 containers: []
	W0819 19:16:19.056137  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:19.056146  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:19.056211  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:19.091372  438716 cri.go:89] found id: ""
	I0819 19:16:19.091399  438716 logs.go:276] 0 containers: []
	W0819 19:16:19.091411  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:19.091420  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:19.091478  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:19.129737  438716 cri.go:89] found id: ""
	I0819 19:16:19.129767  438716 logs.go:276] 0 containers: []
	W0819 19:16:19.129777  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:19.129787  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:19.129800  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:19.207325  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:19.207360  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:19.247780  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:19.247816  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:19.302496  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:19.302543  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:19.317706  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:19.317739  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:19.395029  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:17.901762  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:19.901818  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:19.195079  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:21.693863  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:21.418534  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:23.420217  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:21.895538  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:21.910595  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:21.910658  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:21.948363  438716 cri.go:89] found id: ""
	I0819 19:16:21.948398  438716 logs.go:276] 0 containers: []
	W0819 19:16:21.948410  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:21.948419  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:21.948492  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:21.983391  438716 cri.go:89] found id: ""
	I0819 19:16:21.983428  438716 logs.go:276] 0 containers: []
	W0819 19:16:21.983440  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:21.983449  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:21.983520  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:22.022383  438716 cri.go:89] found id: ""
	I0819 19:16:22.022415  438716 logs.go:276] 0 containers: []
	W0819 19:16:22.022427  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:22.022436  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:22.022493  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:22.060676  438716 cri.go:89] found id: ""
	I0819 19:16:22.060707  438716 logs.go:276] 0 containers: []
	W0819 19:16:22.060716  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:22.060725  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:22.060778  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:22.095188  438716 cri.go:89] found id: ""
	I0819 19:16:22.095218  438716 logs.go:276] 0 containers: []
	W0819 19:16:22.095227  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:22.095234  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:22.095300  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:22.131164  438716 cri.go:89] found id: ""
	I0819 19:16:22.131192  438716 logs.go:276] 0 containers: []
	W0819 19:16:22.131200  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:22.131209  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:22.131275  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:22.166539  438716 cri.go:89] found id: ""
	I0819 19:16:22.166566  438716 logs.go:276] 0 containers: []
	W0819 19:16:22.166573  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:22.166580  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:22.166643  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:22.205604  438716 cri.go:89] found id: ""
	I0819 19:16:22.205631  438716 logs.go:276] 0 containers: []
	W0819 19:16:22.205640  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:22.205649  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:22.205662  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:22.265650  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:22.265689  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:22.280401  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:22.280443  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:22.356818  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:22.356851  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:22.356872  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:22.437678  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:22.437719  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:24.979655  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:24.993462  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:24.993526  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:25.029955  438716 cri.go:89] found id: ""
	I0819 19:16:25.029983  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.029992  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:25.029999  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:25.030049  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:25.068478  438716 cri.go:89] found id: ""
	I0819 19:16:25.068507  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.068518  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:25.068527  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:25.068594  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:25.105209  438716 cri.go:89] found id: ""
	I0819 19:16:25.105238  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.105247  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:25.105256  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:25.105327  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:25.143166  438716 cri.go:89] found id: ""
	I0819 19:16:25.143203  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.143218  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:25.143225  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:25.143279  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:25.177993  438716 cri.go:89] found id: ""
	I0819 19:16:25.178023  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.178035  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:25.178044  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:25.178129  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:25.216473  438716 cri.go:89] found id: ""
	I0819 19:16:25.216501  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.216523  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:25.216540  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:25.216603  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:25.251454  438716 cri.go:89] found id: ""
	I0819 19:16:25.251486  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.251495  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:25.251501  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:25.251555  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:25.287145  438716 cri.go:89] found id: ""
	I0819 19:16:25.287179  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.287188  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:25.287198  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:25.287210  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:25.371571  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:25.371619  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:25.418247  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:25.418277  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:25.472209  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:25.472248  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:25.486286  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:25.486315  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 19:16:21.902887  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:23.904358  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:26.403026  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:24.193797  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:26.194535  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:25.919371  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:28.418267  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	W0819 19:16:25.554470  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:28.055382  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:28.068750  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:28.068827  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:28.101856  438716 cri.go:89] found id: ""
	I0819 19:16:28.101891  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.101903  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:28.101912  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:28.101977  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:28.136402  438716 cri.go:89] found id: ""
	I0819 19:16:28.136437  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.136449  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:28.136460  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:28.136528  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:28.171766  438716 cri.go:89] found id: ""
	I0819 19:16:28.171795  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.171803  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:28.171809  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:28.171864  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:28.206228  438716 cri.go:89] found id: ""
	I0819 19:16:28.206256  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.206264  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:28.206272  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:28.206337  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:28.248877  438716 cri.go:89] found id: ""
	I0819 19:16:28.248912  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.248923  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:28.248931  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:28.249002  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:28.290160  438716 cri.go:89] found id: ""
	I0819 19:16:28.290201  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.290212  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:28.290221  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:28.290287  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:28.340413  438716 cri.go:89] found id: ""
	I0819 19:16:28.340445  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.340454  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:28.340461  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:28.340513  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:28.385486  438716 cri.go:89] found id: ""
	I0819 19:16:28.385513  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.385521  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:28.385532  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:28.385544  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:28.441987  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:28.442029  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:28.456509  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:28.456538  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:28.527941  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:28.527976  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:28.527993  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:28.612696  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:28.612738  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:28.901312  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:30.901640  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:28.693578  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:30.693686  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:30.418811  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:32.919696  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:31.154773  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:31.168718  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:31.168789  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:31.205365  438716 cri.go:89] found id: ""
	I0819 19:16:31.205399  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.205411  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:31.205419  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:31.205496  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:31.238829  438716 cri.go:89] found id: ""
	I0819 19:16:31.238871  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.238879  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:31.238886  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:31.238936  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:31.273229  438716 cri.go:89] found id: ""
	I0819 19:16:31.273259  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.273304  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:31.273313  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:31.273377  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:31.309559  438716 cri.go:89] found id: ""
	I0819 19:16:31.309601  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.309613  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:31.309622  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:31.309689  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:31.344939  438716 cri.go:89] found id: ""
	I0819 19:16:31.344971  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.344981  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:31.344987  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:31.345043  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:31.382423  438716 cri.go:89] found id: ""
	I0819 19:16:31.382455  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.382468  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:31.382474  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:31.382525  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:31.420148  438716 cri.go:89] found id: ""
	I0819 19:16:31.420174  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.420184  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:31.420192  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:31.420262  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:31.455691  438716 cri.go:89] found id: ""
	I0819 19:16:31.455720  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.455730  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:31.455740  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:31.455753  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:31.509501  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:31.509549  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:31.523650  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:31.523693  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:31.591535  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:31.591557  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:31.591574  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:31.674038  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:31.674077  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:34.216506  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:34.232782  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:34.232875  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:34.286103  438716 cri.go:89] found id: ""
	I0819 19:16:34.286136  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.286147  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:34.286156  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:34.286221  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:34.324193  438716 cri.go:89] found id: ""
	I0819 19:16:34.324220  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.324229  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:34.324235  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:34.324292  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:34.382777  438716 cri.go:89] found id: ""
	I0819 19:16:34.382804  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.382814  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:34.382822  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:34.382887  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:34.420714  438716 cri.go:89] found id: ""
	I0819 19:16:34.420743  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.420753  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:34.420771  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:34.420840  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:34.455338  438716 cri.go:89] found id: ""
	I0819 19:16:34.455369  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.455381  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:34.455391  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:34.455467  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:34.489528  438716 cri.go:89] found id: ""
	I0819 19:16:34.489566  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.489575  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:34.489581  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:34.489634  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:34.523830  438716 cri.go:89] found id: ""
	I0819 19:16:34.523857  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.523866  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:34.523873  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:34.523940  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:34.559023  438716 cri.go:89] found id: ""
	I0819 19:16:34.559052  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.559063  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:34.559077  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:34.559092  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:34.639116  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:34.639159  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:34.675990  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:34.676017  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:34.730900  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:34.730935  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:34.744938  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:34.744964  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:34.816267  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:32.902138  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:35.401865  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:32.696537  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:35.192648  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:35.687633  438245 pod_ready.go:82] duration metric: took 4m0.000667446s for pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace to be "Ready" ...
	E0819 19:16:35.687688  438245 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0819 19:16:35.687715  438245 pod_ready.go:39] duration metric: took 4m13.552784118s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:16:35.687770  438245 kubeadm.go:597] duration metric: took 4m20.936149722s to restartPrimaryControlPlane
	W0819 19:16:35.687875  438245 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 19:16:35.687929  438245 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 19:16:35.419327  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:37.420007  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:37.317314  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:37.331915  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:37.331982  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:37.370233  438716 cri.go:89] found id: ""
	I0819 19:16:37.370261  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.370269  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:37.370276  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:37.370343  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:37.409042  438716 cri.go:89] found id: ""
	I0819 19:16:37.409071  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.409082  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:37.409090  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:37.409161  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:37.445903  438716 cri.go:89] found id: ""
	I0819 19:16:37.445932  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.445941  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:37.445948  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:37.445999  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:37.484275  438716 cri.go:89] found id: ""
	I0819 19:16:37.484318  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.484328  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:37.484334  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:37.484393  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:37.528131  438716 cri.go:89] found id: ""
	I0819 19:16:37.528161  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.528174  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:37.528180  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:37.528243  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:37.563374  438716 cri.go:89] found id: ""
	I0819 19:16:37.563406  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.563414  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:37.563421  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:37.563473  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:37.597234  438716 cri.go:89] found id: ""
	I0819 19:16:37.597260  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.597267  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:37.597274  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:37.597329  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:37.634809  438716 cri.go:89] found id: ""
	I0819 19:16:37.634845  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.634854  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:37.634864  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:37.634879  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:37.704354  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:37.704380  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:37.704396  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:37.788606  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:37.788646  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:37.830486  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:37.830513  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:37.890642  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:37.890681  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:40.405473  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:40.420019  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:40.420094  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:40.458558  438716 cri.go:89] found id: ""
	I0819 19:16:40.458586  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.458598  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:40.458606  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:40.458671  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:40.500353  438716 cri.go:89] found id: ""
	I0819 19:16:40.500379  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.500388  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:40.500394  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:40.500445  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:37.901881  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:39.902097  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:39.918877  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:41.919112  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:43.920092  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:40.534281  438716 cri.go:89] found id: ""
	I0819 19:16:40.534307  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.534316  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:40.534322  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:40.534379  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:40.569537  438716 cri.go:89] found id: ""
	I0819 19:16:40.569568  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.569578  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:40.569587  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:40.569654  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:40.603066  438716 cri.go:89] found id: ""
	I0819 19:16:40.603097  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.603110  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:40.603118  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:40.603171  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:40.637598  438716 cri.go:89] found id: ""
	I0819 19:16:40.637628  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.637637  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:40.637643  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:40.637704  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:40.673583  438716 cri.go:89] found id: ""
	I0819 19:16:40.673616  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.673629  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:40.673637  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:40.673692  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:40.708324  438716 cri.go:89] found id: ""
	I0819 19:16:40.708354  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.708363  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:40.708373  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:40.708387  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:40.789743  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:40.789782  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:40.830849  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:40.830884  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:40.882662  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:40.882700  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:40.896843  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:40.896869  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:40.969491  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:43.470579  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:43.483791  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:43.483876  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:43.523764  438716 cri.go:89] found id: ""
	I0819 19:16:43.523797  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.523809  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:43.523817  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:43.523882  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:43.557925  438716 cri.go:89] found id: ""
	I0819 19:16:43.557953  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.557960  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:43.557966  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:43.558017  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:43.591324  438716 cri.go:89] found id: ""
	I0819 19:16:43.591355  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.591364  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:43.591370  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:43.591421  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:43.625798  438716 cri.go:89] found id: ""
	I0819 19:16:43.625826  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.625834  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:43.625840  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:43.625898  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:43.659787  438716 cri.go:89] found id: ""
	I0819 19:16:43.659815  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.659823  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:43.659830  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:43.659882  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:43.692982  438716 cri.go:89] found id: ""
	I0819 19:16:43.693008  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.693017  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:43.693024  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:43.693075  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:43.726059  438716 cri.go:89] found id: ""
	I0819 19:16:43.726092  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.726104  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:43.726113  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:43.726187  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:43.760906  438716 cri.go:89] found id: ""
	I0819 19:16:43.760947  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.760958  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:43.760971  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:43.760994  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:43.812249  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:43.812285  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:43.826538  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:43.826566  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:43.894904  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:43.894926  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:43.894941  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:43.975746  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:43.975796  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:41.902398  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:43.902728  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:46.401834  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:46.419345  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:48.918688  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:46.515329  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:46.529088  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:46.529170  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:46.564525  438716 cri.go:89] found id: ""
	I0819 19:16:46.564557  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.564570  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:46.564578  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:46.564647  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:46.598457  438716 cri.go:89] found id: ""
	I0819 19:16:46.598485  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.598494  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:46.598499  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:46.598549  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:46.631767  438716 cri.go:89] found id: ""
	I0819 19:16:46.631798  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.631807  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:46.631814  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:46.631867  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:46.664978  438716 cri.go:89] found id: ""
	I0819 19:16:46.665013  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.665026  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:46.665034  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:46.665094  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:46.701024  438716 cri.go:89] found id: ""
	I0819 19:16:46.701052  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.701061  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:46.701067  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:46.701132  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:46.735834  438716 cri.go:89] found id: ""
	I0819 19:16:46.735874  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.735886  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:46.735894  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:46.735978  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:46.773392  438716 cri.go:89] found id: ""
	I0819 19:16:46.773426  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.773437  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:46.773445  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:46.773498  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:46.819800  438716 cri.go:89] found id: ""
	I0819 19:16:46.819829  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.819841  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:46.819869  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:46.819889  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:46.860633  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:46.860669  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:46.911895  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:46.911936  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:46.927388  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:46.927422  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:46.998601  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:46.998628  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:46.998645  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:49.585303  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:49.598962  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:49.599032  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:49.631891  438716 cri.go:89] found id: ""
	I0819 19:16:49.631920  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.631931  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:49.631940  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:49.631998  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:49.671731  438716 cri.go:89] found id: ""
	I0819 19:16:49.671761  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.671777  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:49.671786  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:49.671846  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:49.707517  438716 cri.go:89] found id: ""
	I0819 19:16:49.707556  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.707568  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:49.707578  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:49.707651  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:49.744255  438716 cri.go:89] found id: ""
	I0819 19:16:49.744289  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.744299  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:49.744305  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:49.744357  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:49.779224  438716 cri.go:89] found id: ""
	I0819 19:16:49.779252  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.779259  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:49.779266  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:49.779322  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:49.815641  438716 cri.go:89] found id: ""
	I0819 19:16:49.815689  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.815701  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:49.815711  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:49.815769  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:49.851861  438716 cri.go:89] found id: ""
	I0819 19:16:49.851894  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.851906  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:49.851915  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:49.851984  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:49.888140  438716 cri.go:89] found id: ""
	I0819 19:16:49.888173  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.888186  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:49.888199  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:49.888215  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:49.940389  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:49.940430  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:49.954519  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:49.954553  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:50.028462  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:50.028486  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:50.028502  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:50.108319  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:50.108362  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:48.901902  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:50.902702  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:50.919079  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:52.919271  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:52.647146  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:52.660468  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:52.660558  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:52.697665  438716 cri.go:89] found id: ""
	I0819 19:16:52.697703  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.697719  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:52.697727  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:52.697786  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:52.739169  438716 cri.go:89] found id: ""
	I0819 19:16:52.739203  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.739214  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:52.739222  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:52.739289  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:52.776580  438716 cri.go:89] found id: ""
	I0819 19:16:52.776610  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.776619  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:52.776630  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:52.776683  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:52.813443  438716 cri.go:89] found id: ""
	I0819 19:16:52.813475  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.813488  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:52.813497  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:52.813557  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:52.848035  438716 cri.go:89] found id: ""
	I0819 19:16:52.848064  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.848075  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:52.848082  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:52.848150  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:52.881814  438716 cri.go:89] found id: ""
	I0819 19:16:52.881841  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.881858  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:52.881867  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:52.881930  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:52.922179  438716 cri.go:89] found id: ""
	I0819 19:16:52.922202  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.922210  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:52.922216  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:52.922277  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:52.958110  438716 cri.go:89] found id: ""
	I0819 19:16:52.958136  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.958144  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:52.958153  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:52.958167  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:53.008553  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:53.008592  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:53.022826  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:53.022860  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:53.094940  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:53.094967  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:53.094982  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:53.173877  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:53.173920  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:53.403382  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:55.905504  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:55.419297  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:55.419331  438295 pod_ready.go:82] duration metric: took 4m0.007107243s for pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace to be "Ready" ...
	E0819 19:16:55.419345  438295 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0819 19:16:55.419355  438295 pod_ready.go:39] duration metric: took 4m4.316528467s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:16:55.419408  438295 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:16:55.419449  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:55.419499  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:55.466648  438295 cri.go:89] found id: "d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a"
	I0819 19:16:55.466679  438295 cri.go:89] found id: ""
	I0819 19:16:55.466690  438295 logs.go:276] 1 containers: [d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a]
	I0819 19:16:55.466758  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:16:55.471085  438295 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:55.471164  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:55.509883  438295 cri.go:89] found id: "a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672"
	I0819 19:16:55.509910  438295 cri.go:89] found id: ""
	I0819 19:16:55.509921  438295 logs.go:276] 1 containers: [a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672]
	I0819 19:16:55.509984  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:16:55.516866  438295 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:55.516954  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:55.560957  438295 cri.go:89] found id: "a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f"
	I0819 19:16:55.560988  438295 cri.go:89] found id: ""
	I0819 19:16:55.560999  438295 logs.go:276] 1 containers: [a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f]
	I0819 19:16:55.561065  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:16:55.565592  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:55.565662  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:55.610872  438295 cri.go:89] found id: "c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22"
	I0819 19:16:55.610905  438295 cri.go:89] found id: ""
	I0819 19:16:55.610914  438295 logs.go:276] 1 containers: [c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22]
	I0819 19:16:55.610976  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:16:55.615411  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:55.615486  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:55.652759  438295 cri.go:89] found id: "3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338"
	I0819 19:16:55.652792  438295 cri.go:89] found id: ""
	I0819 19:16:55.652807  438295 logs.go:276] 1 containers: [3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338]
	I0819 19:16:55.652873  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:16:55.657124  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:55.657190  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:55.699063  438295 cri.go:89] found id: "6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071"
	I0819 19:16:55.699085  438295 cri.go:89] found id: ""
	I0819 19:16:55.699093  438295 logs.go:276] 1 containers: [6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071]
	I0819 19:16:55.699145  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:16:55.703224  438295 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:55.703292  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:55.753166  438295 cri.go:89] found id: ""
	I0819 19:16:55.753198  438295 logs.go:276] 0 containers: []
	W0819 19:16:55.753210  438295 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:55.753218  438295 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 19:16:55.753286  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 19:16:55.803518  438295 cri.go:89] found id: "902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8"
	I0819 19:16:55.803551  438295 cri.go:89] found id: "44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6"
	I0819 19:16:55.803558  438295 cri.go:89] found id: ""
	I0819 19:16:55.803568  438295 logs.go:276] 2 containers: [902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8 44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6]
	I0819 19:16:55.803637  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:16:55.808063  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:16:55.812708  438295 logs.go:123] Gathering logs for container status ...
	I0819 19:16:55.812737  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:55.861697  438295 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:55.861736  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 19:16:55.911203  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.671901     936 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:16:55.911420  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672098     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	W0819 19:16:55.911603  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.672624     936 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:16:55.911834  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672667     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	I0819 19:16:55.949585  438295 logs.go:123] Gathering logs for kube-scheduler [c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22] ...
	I0819 19:16:55.949663  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22"
	I0819 19:16:55.995063  438295 logs.go:123] Gathering logs for kube-controller-manager [6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071] ...
	I0819 19:16:55.995100  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071"
	I0819 19:16:56.062320  438295 logs.go:123] Gathering logs for storage-provisioner [902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8] ...
	I0819 19:16:56.062376  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8"
	I0819 19:16:56.100112  438295 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:56.100152  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:56.589439  438295 logs.go:123] Gathering logs for kube-proxy [3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338] ...
	I0819 19:16:56.589486  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338"
	I0819 19:16:56.632096  438295 logs.go:123] Gathering logs for storage-provisioner [44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6] ...
	I0819 19:16:56.632132  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6"
	I0819 19:16:56.670952  438295 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:56.670984  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:56.685246  438295 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:56.685279  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 19:16:56.826418  438295 logs.go:123] Gathering logs for kube-apiserver [d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a] ...
	I0819 19:16:56.826456  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a"
	I0819 19:16:56.876901  438295 logs.go:123] Gathering logs for etcd [a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672] ...
	I0819 19:16:56.876944  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672"
	I0819 19:16:56.920390  438295 logs.go:123] Gathering logs for coredns [a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f] ...
	I0819 19:16:56.920423  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f"
	I0819 19:16:56.961691  438295 out.go:358] Setting ErrFile to fd 2...
	I0819 19:16:56.961718  438295 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 19:16:56.961793  438295 out.go:270] X Problems detected in kubelet:
	W0819 19:16:56.961805  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.671901     936 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:16:56.961824  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672098     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	W0819 19:16:56.961839  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.672624     936 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:16:56.961853  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672667     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	I0819 19:16:56.961884  438295 out.go:358] Setting ErrFile to fd 2...
	I0819 19:16:56.961893  438295 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:16:55.716096  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:55.734732  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:55.734817  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:55.780484  438716 cri.go:89] found id: ""
	I0819 19:16:55.780514  438716 logs.go:276] 0 containers: []
	W0819 19:16:55.780525  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:55.780534  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:55.780607  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:55.821755  438716 cri.go:89] found id: ""
	I0819 19:16:55.821778  438716 logs.go:276] 0 containers: []
	W0819 19:16:55.821786  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:55.821792  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:55.821855  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:55.861032  438716 cri.go:89] found id: ""
	I0819 19:16:55.861066  438716 logs.go:276] 0 containers: []
	W0819 19:16:55.861077  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:55.861086  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:55.861159  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:55.909978  438716 cri.go:89] found id: ""
	I0819 19:16:55.910004  438716 logs.go:276] 0 containers: []
	W0819 19:16:55.910015  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:55.910024  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:55.910087  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:55.956603  438716 cri.go:89] found id: ""
	I0819 19:16:55.956634  438716 logs.go:276] 0 containers: []
	W0819 19:16:55.956645  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:55.956653  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:55.956722  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:55.999176  438716 cri.go:89] found id: ""
	I0819 19:16:55.999203  438716 logs.go:276] 0 containers: []
	W0819 19:16:55.999216  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:55.999225  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:55.999286  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:56.035141  438716 cri.go:89] found id: ""
	I0819 19:16:56.035172  438716 logs.go:276] 0 containers: []
	W0819 19:16:56.035183  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:56.035192  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:56.035255  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:56.076152  438716 cri.go:89] found id: ""
	I0819 19:16:56.076185  438716 logs.go:276] 0 containers: []
	W0819 19:16:56.076197  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:56.076209  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:56.076226  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:56.136624  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:56.136671  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:56.151867  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:56.151902  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:56.231650  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:56.231696  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:56.231713  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:56.307203  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:56.307247  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:58.848295  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:58.861984  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:58.862172  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:58.900089  438716 cri.go:89] found id: ""
	I0819 19:16:58.900114  438716 logs.go:276] 0 containers: []
	W0819 19:16:58.900124  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:58.900132  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:58.900203  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:58.932528  438716 cri.go:89] found id: ""
	I0819 19:16:58.932551  438716 logs.go:276] 0 containers: []
	W0819 19:16:58.932559  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:58.932565  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:58.932618  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:58.967255  438716 cri.go:89] found id: ""
	I0819 19:16:58.967283  438716 logs.go:276] 0 containers: []
	W0819 19:16:58.967291  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:58.967298  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:58.967349  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:59.000887  438716 cri.go:89] found id: ""
	I0819 19:16:59.000923  438716 logs.go:276] 0 containers: []
	W0819 19:16:59.000934  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:59.000942  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:59.001009  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:59.041386  438716 cri.go:89] found id: ""
	I0819 19:16:59.041417  438716 logs.go:276] 0 containers: []
	W0819 19:16:59.041428  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:59.041436  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:59.041499  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:59.080036  438716 cri.go:89] found id: ""
	I0819 19:16:59.080078  438716 logs.go:276] 0 containers: []
	W0819 19:16:59.080090  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:59.080099  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:59.080168  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:59.113946  438716 cri.go:89] found id: ""
	I0819 19:16:59.113982  438716 logs.go:276] 0 containers: []
	W0819 19:16:59.113995  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:59.114004  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:59.114066  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:59.155413  438716 cri.go:89] found id: ""
	I0819 19:16:59.155437  438716 logs.go:276] 0 containers: []
	W0819 19:16:59.155446  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:59.155456  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:59.155477  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:59.223795  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:59.223815  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:59.223828  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:59.304516  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:59.304554  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:59.344975  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:59.345005  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:59.397751  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:59.397789  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:58.402453  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:00.901494  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:02.043611  438245 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.355651212s)
	I0819 19:17:02.043735  438245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:17:02.066981  438245 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 19:17:02.083179  438245 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:17:02.100807  438245 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:17:02.100829  438245 kubeadm.go:157] found existing configuration files:
	
	I0819 19:17:02.100877  438245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0819 19:17:02.116462  438245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:17:02.116534  438245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:17:02.127313  438245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0819 19:17:02.147096  438245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:17:02.147170  438245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:17:02.159262  438245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0819 19:17:02.168825  438245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:17:02.168918  438245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:17:02.179354  438245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0819 19:17:02.188982  438245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:17:02.189051  438245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 19:17:02.199291  438245 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
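	The preceding block (the ssh_runner lines that grep each /etc/kubernetes/*.conf for the expected control-plane endpoint, remove files that do not match, and then re-run kubeadm init) amounts to a stale-kubeconfig cleanup. The lines below are an illustrative sketch of that sequence, reconstructed only from the commands shown in the log; the endpoint, file list, and exact behavior are assumptions taken from this run, not minikube's actual source.

	    endpoint="https://control-plane.minikube.internal:8444"   # the log above uses 8443 for the v1.20.0 cluster
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	        # Keep the file only if it already references the expected endpoint;
	        # otherwise remove it so the following 'kubeadm init' regenerates it.
	        if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f" 2>/dev/null; then
	            sudo rm -f "/etc/kubernetes/$f"
	        fi
	    done
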
	I0819 19:17:01.914433  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:17:01.927468  438716 kubeadm.go:597] duration metric: took 4m3.453401239s to restartPrimaryControlPlane
	W0819 19:17:01.927564  438716 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 19:17:01.927600  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 19:17:02.647971  438716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:17:02.665946  438716 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 19:17:02.676665  438716 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:17:02.686818  438716 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:17:02.686840  438716 kubeadm.go:157] found existing configuration files:
	
	I0819 19:17:02.686885  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 19:17:02.697160  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:17:02.697228  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:17:02.707774  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 19:17:02.717251  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:17:02.717310  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:17:02.727481  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 19:17:02.738085  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:17:02.738141  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:17:02.749286  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 19:17:02.759965  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:17:02.760025  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 19:17:02.770753  438716 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 19:17:02.835857  438716 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 19:17:02.835940  438716 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 19:17:02.983775  438716 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 19:17:02.983974  438716 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 19:17:02.984149  438716 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 19:17:03.173404  438716 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 19:17:03.175412  438716 out.go:235]   - Generating certificates and keys ...
	I0819 19:17:03.175520  438716 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 19:17:03.175659  438716 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 19:17:03.175805  438716 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 19:17:03.175913  438716 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 19:17:03.176021  438716 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 19:17:03.176125  438716 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 19:17:03.176626  438716 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 19:17:03.177624  438716 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 19:17:03.178399  438716 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 19:17:03.179325  438716 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 19:17:03.179599  438716 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 19:17:03.179702  438716 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 19:17:03.416467  438716 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 19:17:03.505378  438716 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 19:17:03.588959  438716 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 19:17:03.680602  438716 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 19:17:03.697717  438716 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 19:17:03.700436  438716 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 19:17:03.700579  438716 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 19:17:03.858804  438716 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 19:17:03.861395  438716 out.go:235]   - Booting up control plane ...
	I0819 19:17:03.861520  438716 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 19:17:03.877387  438716 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 19:17:03.878611  438716 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 19:17:03.882842  438716 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 19:17:03.887436  438716 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 19:17:02.902839  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:05.402376  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:02.248409  438245 kubeadm.go:310] W0819 19:17:02.217617    2563 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 19:17:02.250447  438245 kubeadm.go:310] W0819 19:17:02.219827    2563 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 19:17:02.377127  438245 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 19:17:06.962848  438295 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:17:06.984774  438295 api_server.go:72] duration metric: took 4m23.117653428s to wait for apiserver process to appear ...
	I0819 19:17:06.984811  438295 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:17:06.984865  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:17:06.984939  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:17:07.025158  438295 cri.go:89] found id: "d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a"
	I0819 19:17:07.025201  438295 cri.go:89] found id: ""
	I0819 19:17:07.025213  438295 logs.go:276] 1 containers: [d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a]
	I0819 19:17:07.025287  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:07.032365  438295 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:17:07.032446  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:17:07.073368  438295 cri.go:89] found id: "a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672"
	I0819 19:17:07.073394  438295 cri.go:89] found id: ""
	I0819 19:17:07.073403  438295 logs.go:276] 1 containers: [a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672]
	I0819 19:17:07.073463  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:07.078781  438295 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:17:07.078891  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:17:07.123263  438295 cri.go:89] found id: "a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f"
	I0819 19:17:07.123293  438295 cri.go:89] found id: ""
	I0819 19:17:07.123303  438295 logs.go:276] 1 containers: [a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f]
	I0819 19:17:07.123365  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:07.128485  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:17:07.128579  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:17:07.167105  438295 cri.go:89] found id: "c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22"
	I0819 19:17:07.167137  438295 cri.go:89] found id: ""
	I0819 19:17:07.167148  438295 logs.go:276] 1 containers: [c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22]
	I0819 19:17:07.167215  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:07.171571  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:17:07.171641  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:17:07.215524  438295 cri.go:89] found id: "3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338"
	I0819 19:17:07.215547  438295 cri.go:89] found id: ""
	I0819 19:17:07.215555  438295 logs.go:276] 1 containers: [3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338]
	I0819 19:17:07.215621  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:07.221604  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:17:07.221676  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:17:07.263106  438295 cri.go:89] found id: "6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071"
	I0819 19:17:07.263140  438295 cri.go:89] found id: ""
	I0819 19:17:07.263149  438295 logs.go:276] 1 containers: [6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071]
	I0819 19:17:07.263209  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:07.267703  438295 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:17:07.267770  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:17:07.316006  438295 cri.go:89] found id: ""
	I0819 19:17:07.316042  438295 logs.go:276] 0 containers: []
	W0819 19:17:07.316054  438295 logs.go:278] No container was found matching "kindnet"
	I0819 19:17:07.316062  438295 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 19:17:07.316132  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 19:17:07.361100  438295 cri.go:89] found id: "902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8"
	I0819 19:17:07.361123  438295 cri.go:89] found id: "44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6"
	I0819 19:17:07.361126  438295 cri.go:89] found id: ""
	I0819 19:17:07.361133  438295 logs.go:276] 2 containers: [902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8 44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6]
	I0819 19:17:07.361190  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:07.366949  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:07.372724  438295 logs.go:123] Gathering logs for kubelet ...
	I0819 19:17:07.372748  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 19:17:07.413540  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.671901     936 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:17:07.413722  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672098     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	W0819 19:17:07.413858  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.672624     936 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:17:07.414017  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672667     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	I0819 19:17:07.452061  438295 logs.go:123] Gathering logs for coredns [a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f] ...
	I0819 19:17:07.452104  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f"
	I0819 19:17:07.490598  438295 logs.go:123] Gathering logs for kube-scheduler [c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22] ...
	I0819 19:17:07.490636  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22"
	I0819 19:17:07.530454  438295 logs.go:123] Gathering logs for kube-proxy [3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338] ...
	I0819 19:17:07.530486  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338"
	I0819 19:17:07.581488  438295 logs.go:123] Gathering logs for storage-provisioner [902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8] ...
	I0819 19:17:07.581528  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8"
	I0819 19:17:07.621752  438295 logs.go:123] Gathering logs for storage-provisioner [44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6] ...
	I0819 19:17:07.621787  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6"
	I0819 19:17:07.661330  438295 logs.go:123] Gathering logs for container status ...
	I0819 19:17:07.661365  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:17:07.709227  438295 logs.go:123] Gathering logs for dmesg ...
	I0819 19:17:07.709261  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:17:07.724634  438295 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:17:07.724670  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 19:17:07.850212  438295 logs.go:123] Gathering logs for kube-apiserver [d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a] ...
	I0819 19:17:07.850247  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a"
	I0819 19:17:07.894464  438295 logs.go:123] Gathering logs for etcd [a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672] ...
	I0819 19:17:07.894507  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672"
	I0819 19:17:07.943807  438295 logs.go:123] Gathering logs for kube-controller-manager [6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071] ...
	I0819 19:17:07.943841  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071"
	I0819 19:17:08.007428  438295 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:17:08.007463  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:17:08.487397  438295 out.go:358] Setting ErrFile to fd 2...
	I0819 19:17:08.487435  438295 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 19:17:08.487518  438295 out.go:270] X Problems detected in kubelet:
	W0819 19:17:08.487534  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.671901     936 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:17:08.487546  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672098     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	W0819 19:17:08.487560  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.672624     936 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:17:08.487574  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672667     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	I0819 19:17:08.487584  438295 out.go:358] Setting ErrFile to fd 2...
	I0819 19:17:08.487598  438295 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:17:10.237580  438245 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 19:17:10.237675  438245 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 19:17:10.237792  438245 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 19:17:10.237934  438245 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 19:17:10.238088  438245 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 19:17:10.238194  438245 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 19:17:10.239873  438245 out.go:235]   - Generating certificates and keys ...
	I0819 19:17:10.239957  438245 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 19:17:10.240051  438245 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 19:17:10.240187  438245 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 19:17:10.240294  438245 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 19:17:10.240410  438245 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 19:17:10.240495  438245 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 19:17:10.240598  438245 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 19:17:10.240680  438245 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 19:17:10.240747  438245 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 19:17:10.240843  438245 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 19:17:10.240886  438245 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 19:17:10.240958  438245 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 19:17:10.241024  438245 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 19:17:10.241094  438245 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 19:17:10.241159  438245 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 19:17:10.241248  438245 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 19:17:10.241328  438245 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 19:17:10.241431  438245 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 19:17:10.241535  438245 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 19:17:10.243764  438245 out.go:235]   - Booting up control plane ...
	I0819 19:17:10.243859  438245 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 19:17:10.243934  438245 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 19:17:10.243994  438245 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 19:17:10.244131  438245 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 19:17:10.244263  438245 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 19:17:10.244301  438245 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 19:17:10.244458  438245 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 19:17:10.244611  438245 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 19:17:10.244685  438245 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.412341ms
	I0819 19:17:10.244770  438245 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 19:17:10.244850  438245 kubeadm.go:310] [api-check] The API server is healthy after 5.002047877s
	I0819 19:17:10.244953  438245 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 19:17:10.245093  438245 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 19:17:10.245199  438245 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 19:17:10.245400  438245 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-982795 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 19:17:10.245465  438245 kubeadm.go:310] [bootstrap-token] Using token: trsfx5.kx2phd1605yhia2w
	I0819 19:17:10.247722  438245 out.go:235]   - Configuring RBAC rules ...
	I0819 19:17:10.247861  438245 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 19:17:10.247955  438245 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 19:17:10.248144  438245 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 19:17:10.248264  438245 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 19:17:10.248379  438245 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 19:17:10.248468  438245 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 19:17:10.248567  438245 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 19:17:10.248612  438245 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 19:17:10.248654  438245 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 19:17:10.248660  438245 kubeadm.go:310] 
	I0819 19:17:10.248708  438245 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 19:17:10.248713  438245 kubeadm.go:310] 
	I0819 19:17:10.248779  438245 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 19:17:10.248786  438245 kubeadm.go:310] 
	I0819 19:17:10.248806  438245 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 19:17:10.248866  438245 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 19:17:10.248910  438245 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 19:17:10.248916  438245 kubeadm.go:310] 
	I0819 19:17:10.248966  438245 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 19:17:10.248972  438245 kubeadm.go:310] 
	I0819 19:17:10.249014  438245 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 19:17:10.249024  438245 kubeadm.go:310] 
	I0819 19:17:10.249069  438245 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 19:17:10.249136  438245 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 19:17:10.249209  438245 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 19:17:10.249221  438245 kubeadm.go:310] 
	I0819 19:17:10.249319  438245 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 19:17:10.249386  438245 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 19:17:10.249392  438245 kubeadm.go:310] 
	I0819 19:17:10.249464  438245 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token trsfx5.kx2phd1605yhia2w \
	I0819 19:17:10.249553  438245 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3fcbd90565c5acbc36a47b2db682cb22dce9b172c9bf3af21e506ebb67608039 \
	I0819 19:17:10.249575  438245 kubeadm.go:310] 	--control-plane 
	I0819 19:17:10.249581  438245 kubeadm.go:310] 
	I0819 19:17:10.249658  438245 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 19:17:10.249664  438245 kubeadm.go:310] 
	I0819 19:17:10.249734  438245 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token trsfx5.kx2phd1605yhia2w \
	I0819 19:17:10.249833  438245 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3fcbd90565c5acbc36a47b2db682cb22dce9b172c9bf3af21e506ebb67608039 
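The bootstrap token shown in the join commands above is short-lived (24h by default), so in practice the join line is regenerated on the control plane rather than copied from old output. A minimal sketch using the standard kubeadm/openssl commands for that (illustrative only, not part of this run):

    # print a fresh worker join command (creates a new bootstrap token)
    sudo kubeadm token create --print-join-command

    # recompute the value used for --discovery-token-ca-cert-hash
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'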
	I0819 19:17:10.249849  438245 cni.go:84] Creating CNI manager for ""
	I0819 19:17:10.249857  438245 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:17:10.252133  438245 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 19:17:07.403590  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:09.901861  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:10.253419  438245 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 19:17:10.264266  438245 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 19:17:10.289509  438245 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 19:17:10.289661  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-982795 minikube.k8s.io/updated_at=2024_08_19T19_17_10_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411 minikube.k8s.io/name=default-k8s-diff-port-982795 minikube.k8s.io/primary=true
	I0819 19:17:10.289663  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:10.322738  438245 ops.go:34] apiserver oom_adj: -16
	I0819 19:17:10.519946  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:11.020736  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:11.520925  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:12.020276  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:12.520277  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:13.020787  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:13.520048  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:14.020893  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:14.520869  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:14.642214  438245 kubeadm.go:1113] duration metric: took 4.352638211s to wait for elevateKubeSystemPrivileges
	I0819 19:17:14.642251  438245 kubeadm.go:394] duration metric: took 4m59.943476935s to StartCluster
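The repeated "kubectl get sa default" calls above are minikube polling until the default ServiceAccount exists (it is created by kube-controller-manager), at which point the minikube-rbac ClusterRoleBinding created at 19:17:10 can be relied on. A rough shell equivalent of that wait, reusing the kubeconfig and binary paths from the log (the retry count and sleep are assumptions):

    # poll for the default ServiceAccount before relying on RBAC bindings
    for i in $(seq 1 60); do
      sudo /var/lib/minikube/binaries/v1.31.0/kubectl \
        --kubeconfig=/var/lib/minikube/kubeconfig \
        get sa default >/dev/null 2>&1 && break
      sleep 0.5
    done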
	I0819 19:17:14.642295  438245 settings.go:142] acquiring lock: {Name:mk396fcf49a1d0e69583cf37ff3c819e37118163 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:17:14.642382  438245 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 19:17:14.644103  438245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/kubeconfig: {Name:mk8e7b4e1bb7da665111d2acd83eb48882c66853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:17:14.644408  438245 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.48 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 19:17:14.644550  438245 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 19:17:14.644641  438245 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-982795"
	I0819 19:17:14.644665  438245 config.go:182] Loaded profile config "default-k8s-diff-port-982795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:17:14.644687  438245 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-982795"
	W0819 19:17:14.644701  438245 addons.go:243] addon storage-provisioner should already be in state true
	I0819 19:17:14.644712  438245 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-982795"
	I0819 19:17:14.644735  438245 host.go:66] Checking if "default-k8s-diff-port-982795" exists ...
	I0819 19:17:14.644757  438245 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-982795"
	W0819 19:17:14.644770  438245 addons.go:243] addon metrics-server should already be in state true
	I0819 19:17:14.644678  438245 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-982795"
	I0819 19:17:14.644852  438245 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-982795"
	I0819 19:17:14.644797  438245 host.go:66] Checking if "default-k8s-diff-port-982795" exists ...
	I0819 19:17:14.645125  438245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:17:14.645176  438245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:17:14.645272  438245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:17:14.645291  438245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:17:14.645355  438245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:17:14.645401  438245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:17:14.646083  438245 out.go:177] * Verifying Kubernetes components...
	I0819 19:17:14.647579  438245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:17:14.662756  438245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42581
	I0819 19:17:14.663407  438245 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:17:14.664088  438245 main.go:141] libmachine: Using API Version  1
	I0819 19:17:14.664117  438245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:17:14.664528  438245 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:17:14.665189  438245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:17:14.665222  438245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:17:14.665665  438245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43637
	I0819 19:17:14.665842  438245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44021
	I0819 19:17:14.666204  438245 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:17:14.666321  438245 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:17:14.666761  438245 main.go:141] libmachine: Using API Version  1
	I0819 19:17:14.666783  438245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:17:14.666955  438245 main.go:141] libmachine: Using API Version  1
	I0819 19:17:14.666979  438245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:17:14.667173  438245 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:17:14.667363  438245 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:17:14.667592  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetState
	I0819 19:17:14.667786  438245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:17:14.667818  438245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:17:14.671231  438245 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-982795"
	W0819 19:17:14.671249  438245 addons.go:243] addon default-storageclass should already be in state true
	I0819 19:17:14.671273  438245 host.go:66] Checking if "default-k8s-diff-port-982795" exists ...
	I0819 19:17:14.671507  438245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:17:14.671533  438245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:17:14.682996  438245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36593
	I0819 19:17:14.683560  438245 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:17:14.684268  438245 main.go:141] libmachine: Using API Version  1
	I0819 19:17:14.684292  438245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:17:14.684686  438245 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:17:14.684899  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetState
	I0819 19:17:14.686943  438245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44459
	I0819 19:17:14.687384  438245 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:17:14.687309  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:17:14.687874  438245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46587
	I0819 19:17:14.687965  438245 main.go:141] libmachine: Using API Version  1
	I0819 19:17:14.687980  438245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:17:14.688367  438245 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:17:14.688420  438245 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:17:14.688623  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetState
	I0819 19:17:14.689039  438245 main.go:141] libmachine: Using API Version  1
	I0819 19:17:14.689362  438245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:17:14.689690  438245 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:17:14.690179  438245 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:17:14.690626  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:17:14.690789  438245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:17:14.690823  438245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:17:14.690938  438245 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:17:14.690958  438245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 19:17:14.690979  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:17:14.692114  438245 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 19:17:11.902284  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:13.903205  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:16.402298  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:14.693147  438245 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 19:17:14.693163  438245 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 19:17:14.693182  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:17:14.694601  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:17:14.695302  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:17:14.695333  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:17:14.695541  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:17:14.695760  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:17:14.696133  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:17:14.696303  438245 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa Username:docker}
	I0819 19:17:14.696554  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:17:14.696979  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:17:14.697003  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:17:14.697110  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:17:14.697274  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:17:14.697445  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:17:14.697578  438245 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa Username:docker}
	I0819 19:17:14.708592  438245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38807
	I0819 19:17:14.709140  438245 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:17:14.709716  438245 main.go:141] libmachine: Using API Version  1
	I0819 19:17:14.709737  438245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:17:14.710049  438245 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:17:14.710269  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetState
	I0819 19:17:14.711887  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:17:14.712147  438245 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 19:17:14.712162  438245 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 19:17:14.712179  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:17:14.715593  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:17:14.716040  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:17:14.716062  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:17:14.716384  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:17:14.716561  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:17:14.716710  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:17:14.716938  438245 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa Username:docker}
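Each sshutil client above targets the same VM; a manual connection using the key path, user and address logged here would look roughly like this (the trailing command is only an example):

    ssh -i /home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa \
        docker@192.168.61.48 'ls /etc/kubernetes/addons/'
    # or, the supported shortcut:
    minikube -p default-k8s-diff-port-982795 ssh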
	I0819 19:17:14.874857  438245 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:17:14.903798  438245 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-982795" to be "Ready" ...
	I0819 19:17:14.919842  438245 node_ready.go:49] node "default-k8s-diff-port-982795" has status "Ready":"True"
	I0819 19:17:14.919866  438245 node_ready.go:38] duration metric: took 16.039402ms for node "default-k8s-diff-port-982795" to be "Ready" ...
	I0819 19:17:14.919877  438245 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:17:14.932785  438245 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-845gx" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:15.019664  438245 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 19:17:15.019718  438245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 19:17:15.030317  438245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 19:17:15.056177  438245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:17:15.074202  438245 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 19:17:15.074235  438245 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 19:17:15.127037  438245 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 19:17:15.127071  438245 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 19:17:15.217951  438245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 19:17:15.351034  438245 main.go:141] libmachine: Making call to close driver server
	I0819 19:17:15.351067  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .Close
	I0819 19:17:15.351398  438245 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:17:15.351417  438245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:17:15.351429  438245 main.go:141] libmachine: Making call to close driver server
	I0819 19:17:15.351441  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .Close
	I0819 19:17:15.351678  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | Closing plugin on server side
	I0819 19:17:15.351728  438245 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:17:15.351750  438245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:17:15.357999  438245 main.go:141] libmachine: Making call to close driver server
	I0819 19:17:15.358023  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .Close
	I0819 19:17:15.358291  438245 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:17:15.358316  438245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:17:16.196638  438245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.140417152s)
	I0819 19:17:16.196694  438245 main.go:141] libmachine: Making call to close driver server
	I0819 19:17:16.196707  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .Close
	I0819 19:17:16.197022  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | Closing plugin on server side
	I0819 19:17:16.197112  438245 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:17:16.197137  438245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:17:16.197157  438245 main.go:141] libmachine: Making call to close driver server
	I0819 19:17:16.197167  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .Close
	I0819 19:17:16.197449  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | Closing plugin on server side
	I0819 19:17:16.197493  438245 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:17:16.197505  438245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:17:16.638069  438245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.42006496s)
	I0819 19:17:16.638141  438245 main.go:141] libmachine: Making call to close driver server
	I0819 19:17:16.638159  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .Close
	I0819 19:17:16.638488  438245 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:17:16.638518  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | Closing plugin on server side
	I0819 19:17:16.638529  438245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:17:16.638564  438245 main.go:141] libmachine: Making call to close driver server
	I0819 19:17:16.638574  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .Close
	I0819 19:17:16.638861  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | Closing plugin on server side
	I0819 19:17:16.638896  438245 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:17:16.638904  438245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:17:16.638915  438245 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-982795"
	I0819 19:17:16.641476  438245 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0819 19:17:16.642733  438245 addons.go:510] duration metric: took 1.998196502s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
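The metrics-server addon is applied here from the four manifests above, yet later lines show its pod Pending with containers not ready, which is consistent with the fake.domain image selected at 19:17:14 not being pullable. Checks one could run against this profile (labels and the APIService name assume the stock metrics-server manifests shipped with the addon):

    kubectl --context default-k8s-diff-port-982795 -n kube-system get deploy,pods -l k8s-app=metrics-server
    kubectl --context default-k8s-diff-port-982795 get apiservice v1beta1.metrics.k8s.io
    kubectl --context default-k8s-diff-port-982795 -n kube-system describe pod -l k8s-app=metrics-server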
	I0819 19:17:16.954631  438245 pod_ready.go:103] pod "coredns-6f6b679f8f-845gx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:18.489333  438295 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0819 19:17:18.494609  438295 api_server.go:279] https://192.168.72.96:8443/healthz returned 200:
	ok
	I0819 19:17:18.495587  438295 api_server.go:141] control plane version: v1.31.0
	I0819 19:17:18.495613  438295 api_server.go:131] duration metric: took 11.510793296s to wait for apiserver health ...
	I0819 19:17:18.495624  438295 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:17:18.495656  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:17:18.495735  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:17:18.540446  438295 cri.go:89] found id: "d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a"
	I0819 19:17:18.540477  438295 cri.go:89] found id: ""
	I0819 19:17:18.540487  438295 logs.go:276] 1 containers: [d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a]
	I0819 19:17:18.540555  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:18.551443  438295 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:17:18.551527  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:17:18.592388  438295 cri.go:89] found id: "a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672"
	I0819 19:17:18.592416  438295 cri.go:89] found id: ""
	I0819 19:17:18.592427  438295 logs.go:276] 1 containers: [a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672]
	I0819 19:17:18.592495  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:18.597534  438295 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:17:18.597615  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:17:18.637782  438295 cri.go:89] found id: "a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f"
	I0819 19:17:18.637804  438295 cri.go:89] found id: ""
	I0819 19:17:18.637812  438295 logs.go:276] 1 containers: [a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f]
	I0819 19:17:18.637861  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:18.642557  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:17:18.642618  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:17:18.679573  438295 cri.go:89] found id: "c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22"
	I0819 19:17:18.679597  438295 cri.go:89] found id: ""
	I0819 19:17:18.679605  438295 logs.go:276] 1 containers: [c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22]
	I0819 19:17:18.679657  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:18.684160  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:17:18.684230  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:17:18.726848  438295 cri.go:89] found id: "3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338"
	I0819 19:17:18.726881  438295 cri.go:89] found id: ""
	I0819 19:17:18.726889  438295 logs.go:276] 1 containers: [3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338]
	I0819 19:17:18.726943  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:18.731422  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:17:18.731484  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:17:18.773623  438295 cri.go:89] found id: "6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071"
	I0819 19:17:18.773649  438295 cri.go:89] found id: ""
	I0819 19:17:18.773658  438295 logs.go:276] 1 containers: [6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071]
	I0819 19:17:18.773709  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:18.779609  438295 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:17:18.779687  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:17:18.822876  438295 cri.go:89] found id: ""
	I0819 19:17:18.822911  438295 logs.go:276] 0 containers: []
	W0819 19:17:18.822922  438295 logs.go:278] No container was found matching "kindnet"
	I0819 19:17:18.822931  438295 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 19:17:18.822998  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 19:17:18.868653  438295 cri.go:89] found id: "902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8"
	I0819 19:17:18.868685  438295 cri.go:89] found id: "44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6"
	I0819 19:17:18.868691  438295 cri.go:89] found id: ""
	I0819 19:17:18.868701  438295 logs.go:276] 2 containers: [902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8 44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6]
	I0819 19:17:18.868776  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:18.873136  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:18.877397  438295 logs.go:123] Gathering logs for kube-proxy [3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338] ...
	I0819 19:17:18.877425  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338"
	I0819 19:17:18.918085  438295 logs.go:123] Gathering logs for kube-controller-manager [6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071] ...
	I0819 19:17:18.918118  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071"
	I0819 19:17:18.973344  438295 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:17:18.973378  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:17:18.901539  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:20.902550  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:19.440295  438245 pod_ready.go:103] pod "coredns-6f6b679f8f-845gx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:21.939652  438245 pod_ready.go:103] pod "coredns-6f6b679f8f-845gx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:19.443625  438295 logs.go:123] Gathering logs for container status ...
	I0819 19:17:19.443689  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:17:19.492650  438295 logs.go:123] Gathering logs for dmesg ...
	I0819 19:17:19.492696  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:17:19.507957  438295 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:17:19.507996  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 19:17:19.617295  438295 logs.go:123] Gathering logs for coredns [a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f] ...
	I0819 19:17:19.617341  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f"
	I0819 19:17:19.669869  438295 logs.go:123] Gathering logs for kube-scheduler [c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22] ...
	I0819 19:17:19.669930  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22"
	I0819 19:17:19.706649  438295 logs.go:123] Gathering logs for storage-provisioner [44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6] ...
	I0819 19:17:19.706681  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6"
	I0819 19:17:19.746742  438295 logs.go:123] Gathering logs for kubelet ...
	I0819 19:17:19.746780  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 19:17:19.796224  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.671901     936 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:17:19.796442  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672098     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	W0819 19:17:19.796622  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.672624     936 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:17:19.796845  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672667     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	I0819 19:17:19.836283  438295 logs.go:123] Gathering logs for kube-apiserver [d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a] ...
	I0819 19:17:19.836328  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a"
	I0819 19:17:19.889829  438295 logs.go:123] Gathering logs for etcd [a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672] ...
	I0819 19:17:19.889875  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672"
	I0819 19:17:19.938361  438295 logs.go:123] Gathering logs for storage-provisioner [902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8] ...
	I0819 19:17:19.938397  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8"
	I0819 19:17:19.978525  438295 out.go:358] Setting ErrFile to fd 2...
	I0819 19:17:19.978557  438295 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 19:17:19.978628  438295 out.go:270] X Problems detected in kubelet:
	W0819 19:17:19.978642  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.671901     936 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:17:19.978656  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672098     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	W0819 19:17:19.978669  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.672624     936 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:17:19.978680  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672667     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	I0819 19:17:19.978690  438295 out.go:358] Setting ErrFile to fd 2...
	I0819 19:17:19.978699  438295 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:17:23.941399  438245 pod_ready.go:93] pod "coredns-6f6b679f8f-845gx" in "kube-system" namespace has status "Ready":"True"
	I0819 19:17:23.941426  438245 pod_ready.go:82] duration metric: took 9.00859927s for pod "coredns-6f6b679f8f-845gx" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.941438  438245 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-tlxtt" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.946827  438245 pod_ready.go:93] pod "coredns-6f6b679f8f-tlxtt" in "kube-system" namespace has status "Ready":"True"
	I0819 19:17:23.946848  438245 pod_ready.go:82] duration metric: took 5.40058ms for pod "coredns-6f6b679f8f-tlxtt" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.946859  438245 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.956158  438245 pod_ready.go:93] pod "etcd-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"True"
	I0819 19:17:23.956181  438245 pod_ready.go:82] duration metric: took 9.312871ms for pod "etcd-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.956193  438245 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.962573  438245 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"True"
	I0819 19:17:23.962595  438245 pod_ready.go:82] duration metric: took 6.3934ms for pod "kube-apiserver-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.962607  438245 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.968186  438245 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"True"
	I0819 19:17:23.968206  438245 pod_ready.go:82] duration metric: took 5.591464ms for pod "kube-controller-manager-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.968214  438245 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2v4hk" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:24.337409  438245 pod_ready.go:93] pod "kube-proxy-2v4hk" in "kube-system" namespace has status "Ready":"True"
	I0819 19:17:24.337443  438245 pod_ready.go:82] duration metric: took 369.220318ms for pod "kube-proxy-2v4hk" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:24.337460  438245 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:24.737326  438245 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"True"
	I0819 19:17:24.737362  438245 pod_ready.go:82] duration metric: took 399.891804ms for pod "kube-scheduler-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:24.737375  438245 pod_ready.go:39] duration metric: took 9.817484404s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:17:24.737396  438245 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:17:24.737467  438245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:17:24.753681  438245 api_server.go:72] duration metric: took 10.109231411s to wait for apiserver process to appear ...
	I0819 19:17:24.753711  438245 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:17:24.753734  438245 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8444/healthz ...
	I0819 19:17:24.757976  438245 api_server.go:279] https://192.168.61.48:8444/healthz returned 200:
	ok
	I0819 19:17:24.758875  438245 api_server.go:141] control plane version: v1.31.0
	I0819 19:17:24.758899  438245 api_server.go:131] duration metric: took 5.179486ms to wait for apiserver health ...
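The healthz wait above is a plain HTTPS GET against the apiserver on the non-default port 8444; done by hand it would be roughly the following (the CA path is the conventional location under this job's MINIKUBE_HOME and is an assumption):

    curl --cacert /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt \
         https://192.168.61.48:8444/healthz
    # expected response body: ok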
	I0819 19:17:24.758908  438245 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:17:24.944008  438245 system_pods.go:59] 9 kube-system pods found
	I0819 19:17:24.944053  438245 system_pods.go:61] "coredns-6f6b679f8f-845gx" [95155dd2-d46c-4445-b735-26eae16aaff9] Running
	I0819 19:17:24.944058  438245 system_pods.go:61] "coredns-6f6b679f8f-tlxtt" [150ac4be-bef1-4f0a-ab16-f085284686cb] Running
	I0819 19:17:24.944062  438245 system_pods.go:61] "etcd-default-k8s-diff-port-982795" [eb29f445-6242-4b60-a8d5-7c684df17926] Running
	I0819 19:17:24.944066  438245 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-982795" [2add6270-bf14-43e7-834b-3e629f46efa3] Running
	I0819 19:17:24.944070  438245 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-982795" [6b636d4b-0efa-4cef-b0d4-d4539ddc5c90] Running
	I0819 19:17:24.944073  438245 system_pods.go:61] "kube-proxy-2v4hk" [042d5d54-6557-4d8e-8f4e-2d56e95882ce] Running
	I0819 19:17:24.944076  438245 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-982795" [6eff3815-26b3-4e95-a754-2dc65fd29126] Running
	I0819 19:17:24.944082  438245 system_pods.go:61] "metrics-server-6867b74b74-2dp5r" [04e0ce68-d9a2-426a-a0e9-47f6f7867efd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:17:24.944086  438245 system_pods.go:61] "storage-provisioner" [23fcea86-977e-4eb1-9e5a-23d6bdfb09c0] Running
	I0819 19:17:24.944094  438245 system_pods.go:74] duration metric: took 185.180015ms to wait for pod list to return data ...
	I0819 19:17:24.944104  438245 default_sa.go:34] waiting for default service account to be created ...
	I0819 19:17:25.137108  438245 default_sa.go:45] found service account: "default"
	I0819 19:17:25.137147  438245 default_sa.go:55] duration metric: took 193.033434ms for default service account to be created ...
	I0819 19:17:25.137160  438245 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 19:17:25.340115  438245 system_pods.go:86] 9 kube-system pods found
	I0819 19:17:25.340146  438245 system_pods.go:89] "coredns-6f6b679f8f-845gx" [95155dd2-d46c-4445-b735-26eae16aaff9] Running
	I0819 19:17:25.340155  438245 system_pods.go:89] "coredns-6f6b679f8f-tlxtt" [150ac4be-bef1-4f0a-ab16-f085284686cb] Running
	I0819 19:17:25.340161  438245 system_pods.go:89] "etcd-default-k8s-diff-port-982795" [eb29f445-6242-4b60-a8d5-7c684df17926] Running
	I0819 19:17:25.340167  438245 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-982795" [2add6270-bf14-43e7-834b-3e629f46efa3] Running
	I0819 19:17:25.340173  438245 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-982795" [6b636d4b-0efa-4cef-b0d4-d4539ddc5c90] Running
	I0819 19:17:25.340177  438245 system_pods.go:89] "kube-proxy-2v4hk" [042d5d54-6557-4d8e-8f4e-2d56e95882ce] Running
	I0819 19:17:25.340182  438245 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-982795" [6eff3815-26b3-4e95-a754-2dc65fd29126] Running
	I0819 19:17:25.340192  438245 system_pods.go:89] "metrics-server-6867b74b74-2dp5r" [04e0ce68-d9a2-426a-a0e9-47f6f7867efd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:17:25.340198  438245 system_pods.go:89] "storage-provisioner" [23fcea86-977e-4eb1-9e5a-23d6bdfb09c0] Running
	I0819 19:17:25.340211  438245 system_pods.go:126] duration metric: took 203.044324ms to wait for k8s-apps to be running ...
	I0819 19:17:25.340224  438245 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 19:17:25.340278  438245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:17:25.355190  438245 system_svc.go:56] duration metric: took 14.954269ms WaitForService to wait for kubelet
	I0819 19:17:25.355223  438245 kubeadm.go:582] duration metric: took 10.710777567s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 19:17:25.355252  438245 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:17:25.537425  438245 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:17:25.537459  438245 node_conditions.go:123] node cpu capacity is 2
	I0819 19:17:25.537472  438245 node_conditions.go:105] duration metric: took 182.213218ms to run NodePressure ...
	I0819 19:17:25.537491  438245 start.go:241] waiting for startup goroutines ...
	I0819 19:17:25.537501  438245 start.go:246] waiting for cluster config update ...
	I0819 19:17:25.537516  438245 start.go:255] writing updated cluster config ...
	I0819 19:17:25.537851  438245 ssh_runner.go:195] Run: rm -f paused
	I0819 19:17:25.589212  438245 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 19:17:25.591352  438245 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-982795" cluster and "default" namespace by default
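With the profile reported as done, a quick smoke test against the freshly written context (not part of the captured run) could be:

    kubectl --context default-k8s-diff-port-982795 get nodes -o wide
    kubectl --context default-k8s-diff-port-982795 -n kube-system get pods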
	I0819 19:17:22.902846  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:25.401911  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:29.988042  438295 system_pods.go:59] 8 kube-system pods found
	I0819 19:17:29.988074  438295 system_pods.go:61] "coredns-6f6b679f8f-7ww4z" [bbde00d4-6027-4d8d-b51e-bd68915da166] Running
	I0819 19:17:29.988080  438295 system_pods.go:61] "etcd-embed-certs-024748" [846ff0f0-5399-43fd-8e7b-1f64997cd291] Running
	I0819 19:17:29.988084  438295 system_pods.go:61] "kube-apiserver-embed-certs-024748" [3ff558d6-e82e-47a0-bb81-15244bee6470] Running
	I0819 19:17:29.988088  438295 system_pods.go:61] "kube-controller-manager-embed-certs-024748" [993b82ba-e8e7-4896-a06b-87c4f08d5985] Running
	I0819 19:17:29.988092  438295 system_pods.go:61] "kube-proxy-bmmbh" [1f77f152-f5f4-40f6-9632-1eaa36b9ea31] Running
	I0819 19:17:29.988095  438295 system_pods.go:61] "kube-scheduler-embed-certs-024748" [34684d4c-2479-45c5-883b-158cf9f974f5] Running
	I0819 19:17:29.988100  438295 system_pods.go:61] "metrics-server-6867b74b74-kxcwh" [15f86629-d916-4fdc-9ecf-9cb1b6c83f85] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:17:29.988104  438295 system_pods.go:61] "storage-provisioner" [7acb6ce1-21b6-4cdd-a5cb-76d694fc0a38] Running
	I0819 19:17:29.988113  438295 system_pods.go:74] duration metric: took 11.492481541s to wait for pod list to return data ...
	I0819 19:17:29.988120  438295 default_sa.go:34] waiting for default service account to be created ...
	I0819 19:17:29.991728  438295 default_sa.go:45] found service account: "default"
	I0819 19:17:29.991755  438295 default_sa.go:55] duration metric: took 3.62838ms for default service account to be created ...
	I0819 19:17:29.991764  438295 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 19:17:29.997212  438295 system_pods.go:86] 8 kube-system pods found
	I0819 19:17:29.997237  438295 system_pods.go:89] "coredns-6f6b679f8f-7ww4z" [bbde00d4-6027-4d8d-b51e-bd68915da166] Running
	I0819 19:17:29.997243  438295 system_pods.go:89] "etcd-embed-certs-024748" [846ff0f0-5399-43fd-8e7b-1f64997cd291] Running
	I0819 19:17:29.997247  438295 system_pods.go:89] "kube-apiserver-embed-certs-024748" [3ff558d6-e82e-47a0-bb81-15244bee6470] Running
	I0819 19:17:29.997252  438295 system_pods.go:89] "kube-controller-manager-embed-certs-024748" [993b82ba-e8e7-4896-a06b-87c4f08d5985] Running
	I0819 19:17:29.997256  438295 system_pods.go:89] "kube-proxy-bmmbh" [1f77f152-f5f4-40f6-9632-1eaa36b9ea31] Running
	I0819 19:17:29.997260  438295 system_pods.go:89] "kube-scheduler-embed-certs-024748" [34684d4c-2479-45c5-883b-158cf9f974f5] Running
	I0819 19:17:29.997267  438295 system_pods.go:89] "metrics-server-6867b74b74-kxcwh" [15f86629-d916-4fdc-9ecf-9cb1b6c83f85] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:17:29.997270  438295 system_pods.go:89] "storage-provisioner" [7acb6ce1-21b6-4cdd-a5cb-76d694fc0a38] Running
	I0819 19:17:29.997277  438295 system_pods.go:126] duration metric: took 5.507363ms to wait for k8s-apps to be running ...
	I0819 19:17:29.997283  438295 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 19:17:29.997329  438295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:17:30.015349  438295 system_svc.go:56] duration metric: took 18.05422ms WaitForService to wait for kubelet
	I0819 19:17:30.015385  438295 kubeadm.go:582] duration metric: took 4m46.148274918s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 19:17:30.015408  438295 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:17:30.019744  438295 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:17:30.019767  438295 node_conditions.go:123] node cpu capacity is 2
	I0819 19:17:30.019779  438295 node_conditions.go:105] duration metric: took 4.364435ms to run NodePressure ...
	I0819 19:17:30.019791  438295 start.go:241] waiting for startup goroutines ...
	I0819 19:17:30.019798  438295 start.go:246] waiting for cluster config update ...
	I0819 19:17:30.019809  438295 start.go:255] writing updated cluster config ...
	I0819 19:17:30.020080  438295 ssh_runner.go:195] Run: rm -f paused
	I0819 19:17:30.071945  438295 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 19:17:30.073912  438295 out.go:177] * Done! kubectl is now configured to use "embed-certs-024748" cluster and "default" namespace by default
	I0819 19:17:27.901471  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:29.901560  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:32.401214  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:34.402184  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:36.901979  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:38.902132  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:41.401103  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:43.889122  438716 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 19:17:43.889226  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:17:43.889441  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:17:43.402531  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:45.402739  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:48.889647  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:17:48.889896  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:17:47.902033  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:48.402784  438001 pod_ready.go:82] duration metric: took 4m0.007573449s for pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace to be "Ready" ...
	E0819 19:17:48.402807  438001 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0819 19:17:48.402814  438001 pod_ready.go:39] duration metric: took 4m5.043625176s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:17:48.402837  438001 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:17:48.402866  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:17:48.402916  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:17:48.465049  438001 cri.go:89] found id: "cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094"
	I0819 19:17:48.465072  438001 cri.go:89] found id: ""
	I0819 19:17:48.465081  438001 logs.go:276] 1 containers: [cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094]
	I0819 19:17:48.465157  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:48.469640  438001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:17:48.469708  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:17:48.506800  438001 cri.go:89] found id: "27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a"
	I0819 19:17:48.506825  438001 cri.go:89] found id: ""
	I0819 19:17:48.506836  438001 logs.go:276] 1 containers: [27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a]
	I0819 19:17:48.506900  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:48.511810  438001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:17:48.511899  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:17:48.558215  438001 cri.go:89] found id: "6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa"
	I0819 19:17:48.558240  438001 cri.go:89] found id: ""
	I0819 19:17:48.558250  438001 logs.go:276] 1 containers: [6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa]
	I0819 19:17:48.558308  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:48.562785  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:17:48.562844  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:17:48.602715  438001 cri.go:89] found id: "123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f"
	I0819 19:17:48.602738  438001 cri.go:89] found id: ""
	I0819 19:17:48.602748  438001 logs.go:276] 1 containers: [123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f]
	I0819 19:17:48.602815  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:48.607456  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:17:48.607512  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:17:48.648285  438001 cri.go:89] found id: "236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a"
	I0819 19:17:48.648314  438001 cri.go:89] found id: ""
	I0819 19:17:48.648324  438001 logs.go:276] 1 containers: [236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a]
	I0819 19:17:48.648374  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:48.653772  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:17:48.653830  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:17:48.697336  438001 cri.go:89] found id: "390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346"
	I0819 19:17:48.697365  438001 cri.go:89] found id: ""
	I0819 19:17:48.697376  438001 logs.go:276] 1 containers: [390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346]
	I0819 19:17:48.697438  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:48.701661  438001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:17:48.701726  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:17:48.737952  438001 cri.go:89] found id: ""
	I0819 19:17:48.737990  438001 logs.go:276] 0 containers: []
	W0819 19:17:48.738002  438001 logs.go:278] No container was found matching "kindnet"
	I0819 19:17:48.738010  438001 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 19:17:48.738076  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 19:17:48.780047  438001 cri.go:89] found id: "fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd"
	I0819 19:17:48.780076  438001 cri.go:89] found id: "482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6"
	I0819 19:17:48.780082  438001 cri.go:89] found id: ""
	I0819 19:17:48.780092  438001 logs.go:276] 2 containers: [fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd 482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6]
	I0819 19:17:48.780168  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:48.784558  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:48.788803  438001 logs.go:123] Gathering logs for kube-apiserver [cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094] ...
	I0819 19:17:48.788826  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094"
	I0819 19:17:48.843469  438001 logs.go:123] Gathering logs for kube-scheduler [123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f] ...
	I0819 19:17:48.843501  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f"
	I0819 19:17:48.884461  438001 logs.go:123] Gathering logs for kube-proxy [236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a] ...
	I0819 19:17:48.884495  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a"
	I0819 19:17:48.927064  438001 logs.go:123] Gathering logs for storage-provisioner [fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd] ...
	I0819 19:17:48.927093  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd"
	I0819 19:17:48.963812  438001 logs.go:123] Gathering logs for container status ...
	I0819 19:17:48.963845  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:17:49.017381  438001 logs.go:123] Gathering logs for kubelet ...
	I0819 19:17:49.017420  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:17:49.093572  438001 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:17:49.093614  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 19:17:49.236680  438001 logs.go:123] Gathering logs for coredns [6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa] ...
	I0819 19:17:49.236721  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa"
	I0819 19:17:49.274636  438001 logs.go:123] Gathering logs for kube-controller-manager [390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346] ...
	I0819 19:17:49.274677  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346"
	I0819 19:17:49.326208  438001 logs.go:123] Gathering logs for storage-provisioner [482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6] ...
	I0819 19:17:49.326242  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6"
	I0819 19:17:49.363589  438001 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:17:49.363628  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:17:49.841705  438001 logs.go:123] Gathering logs for dmesg ...
	I0819 19:17:49.841757  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:17:49.858466  438001 logs.go:123] Gathering logs for etcd [27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a] ...
	I0819 19:17:49.858504  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a"
	I0819 19:17:52.406197  438001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:17:52.422951  438001 api_server.go:72] duration metric: took 4m16.822246565s to wait for apiserver process to appear ...
	I0819 19:17:52.422981  438001 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:17:52.423019  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:17:52.423075  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:17:52.464305  438001 cri.go:89] found id: "cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094"
	I0819 19:17:52.464327  438001 cri.go:89] found id: ""
	I0819 19:17:52.464335  438001 logs.go:276] 1 containers: [cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094]
	I0819 19:17:52.464387  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:52.468824  438001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:17:52.468904  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:17:52.508907  438001 cri.go:89] found id: "27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a"
	I0819 19:17:52.508929  438001 cri.go:89] found id: ""
	I0819 19:17:52.508937  438001 logs.go:276] 1 containers: [27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a]
	I0819 19:17:52.508998  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:52.513206  438001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:17:52.513281  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:17:52.553908  438001 cri.go:89] found id: "6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa"
	I0819 19:17:52.553940  438001 cri.go:89] found id: ""
	I0819 19:17:52.553948  438001 logs.go:276] 1 containers: [6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa]
	I0819 19:17:52.554007  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:52.558420  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:17:52.558487  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:17:52.598450  438001 cri.go:89] found id: "123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f"
	I0819 19:17:52.598480  438001 cri.go:89] found id: ""
	I0819 19:17:52.598491  438001 logs.go:276] 1 containers: [123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f]
	I0819 19:17:52.598564  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:52.603421  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:17:52.603485  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:17:52.639017  438001 cri.go:89] found id: "236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a"
	I0819 19:17:52.639049  438001 cri.go:89] found id: ""
	I0819 19:17:52.639060  438001 logs.go:276] 1 containers: [236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a]
	I0819 19:17:52.639129  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:52.645313  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:17:52.645392  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:17:52.687266  438001 cri.go:89] found id: "390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346"
	I0819 19:17:52.687296  438001 cri.go:89] found id: ""
	I0819 19:17:52.687305  438001 logs.go:276] 1 containers: [390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346]
	I0819 19:17:52.687369  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:52.691770  438001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:17:52.691830  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:17:52.734067  438001 cri.go:89] found id: ""
	I0819 19:17:52.734098  438001 logs.go:276] 0 containers: []
	W0819 19:17:52.734107  438001 logs.go:278] No container was found matching "kindnet"
	I0819 19:17:52.734113  438001 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 19:17:52.734171  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 19:17:52.781039  438001 cri.go:89] found id: "fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd"
	I0819 19:17:52.781062  438001 cri.go:89] found id: "482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6"
	I0819 19:17:52.781066  438001 cri.go:89] found id: ""
	I0819 19:17:52.781074  438001 logs.go:276] 2 containers: [fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd 482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6]
	I0819 19:17:52.781135  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:52.785730  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:52.789946  438001 logs.go:123] Gathering logs for kube-scheduler [123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f] ...
	I0819 19:17:52.789978  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f"
	I0819 19:17:52.830509  438001 logs.go:123] Gathering logs for kube-controller-manager [390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346] ...
	I0819 19:17:52.830541  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346"
	I0819 19:17:52.892964  438001 logs.go:123] Gathering logs for container status ...
	I0819 19:17:52.893017  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:17:52.947999  438001 logs.go:123] Gathering logs for kubelet ...
	I0819 19:17:52.948028  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:17:53.019377  438001 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:17:53.019423  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 19:17:53.134032  438001 logs.go:123] Gathering logs for kube-apiserver [cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094] ...
	I0819 19:17:53.134069  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094"
	I0819 19:17:53.186159  438001 logs.go:123] Gathering logs for etcd [27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a] ...
	I0819 19:17:53.186193  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a"
	I0819 19:17:53.236918  438001 logs.go:123] Gathering logs for storage-provisioner [482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6] ...
	I0819 19:17:53.236949  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6"
	I0819 19:17:53.275211  438001 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:17:53.275242  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:17:53.710352  438001 logs.go:123] Gathering logs for dmesg ...
	I0819 19:17:53.710396  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:17:53.726691  438001 logs.go:123] Gathering logs for coredns [6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa] ...
	I0819 19:17:53.726731  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa"
	I0819 19:17:53.768322  438001 logs.go:123] Gathering logs for kube-proxy [236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a] ...
	I0819 19:17:53.768361  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a"
	I0819 19:17:53.808546  438001 logs.go:123] Gathering logs for storage-provisioner [fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd] ...
	I0819 19:17:53.808577  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd"
	I0819 19:17:56.362339  438001 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I0819 19:17:56.366636  438001 api_server.go:279] https://192.168.39.106:8443/healthz returned 200:
	ok
	I0819 19:17:56.367838  438001 api_server.go:141] control plane version: v1.31.0
	I0819 19:17:56.367867  438001 api_server.go:131] duration metric: took 3.944877317s to wait for apiserver health ...
	I0819 19:17:56.367891  438001 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:17:56.367925  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:17:56.367991  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:17:56.412151  438001 cri.go:89] found id: "cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094"
	I0819 19:17:56.412179  438001 cri.go:89] found id: ""
	I0819 19:17:56.412187  438001 logs.go:276] 1 containers: [cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094]
	I0819 19:17:56.412247  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:56.416620  438001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:17:56.416795  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:17:56.456888  438001 cri.go:89] found id: "27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a"
	I0819 19:17:56.456918  438001 cri.go:89] found id: ""
	I0819 19:17:56.456927  438001 logs.go:276] 1 containers: [27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a]
	I0819 19:17:56.456984  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:56.461563  438001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:17:56.461667  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:17:56.506990  438001 cri.go:89] found id: "6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa"
	I0819 19:17:56.507018  438001 cri.go:89] found id: ""
	I0819 19:17:56.507028  438001 logs.go:276] 1 containers: [6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa]
	I0819 19:17:56.507099  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:56.511547  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:17:56.511616  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:17:56.551734  438001 cri.go:89] found id: "123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f"
	I0819 19:17:56.551761  438001 cri.go:89] found id: ""
	I0819 19:17:56.551772  438001 logs.go:276] 1 containers: [123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f]
	I0819 19:17:56.551837  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:56.556963  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:17:56.557039  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:17:56.601862  438001 cri.go:89] found id: "236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a"
	I0819 19:17:56.601892  438001 cri.go:89] found id: ""
	I0819 19:17:56.601902  438001 logs.go:276] 1 containers: [236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a]
	I0819 19:17:56.601971  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:56.606618  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:17:56.606706  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:17:56.649476  438001 cri.go:89] found id: "390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346"
	I0819 19:17:56.649501  438001 cri.go:89] found id: ""
	I0819 19:17:56.649510  438001 logs.go:276] 1 containers: [390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346]
	I0819 19:17:56.649561  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:56.654009  438001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:17:56.654071  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:17:56.707479  438001 cri.go:89] found id: ""
	I0819 19:17:56.707506  438001 logs.go:276] 0 containers: []
	W0819 19:17:56.707518  438001 logs.go:278] No container was found matching "kindnet"
	I0819 19:17:56.707527  438001 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 19:17:56.707585  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 19:17:56.749937  438001 cri.go:89] found id: "fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd"
	I0819 19:17:56.749961  438001 cri.go:89] found id: "482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6"
	I0819 19:17:56.749966  438001 cri.go:89] found id: ""
	I0819 19:17:56.749973  438001 logs.go:276] 2 containers: [fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd 482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6]
	I0819 19:17:56.750026  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:56.754791  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:56.758672  438001 logs.go:123] Gathering logs for etcd [27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a] ...
	I0819 19:17:56.758700  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a"
	I0819 19:17:56.811420  438001 logs.go:123] Gathering logs for kube-controller-manager [390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346] ...
	I0819 19:17:56.811461  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346"
	I0819 19:17:56.871550  438001 logs.go:123] Gathering logs for storage-provisioner [482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6] ...
	I0819 19:17:56.871588  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6"
	I0819 19:17:56.918183  438001 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:17:56.918224  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:17:57.297614  438001 logs.go:123] Gathering logs for container status ...
	I0819 19:17:57.297653  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:17:57.339092  438001 logs.go:123] Gathering logs for dmesg ...
	I0819 19:17:57.339127  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:17:57.355787  438001 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:17:57.355820  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 19:17:57.486287  438001 logs.go:123] Gathering logs for kube-apiserver [cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094] ...
	I0819 19:17:57.486328  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094"
	I0819 19:17:57.535864  438001 logs.go:123] Gathering logs for coredns [6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa] ...
	I0819 19:17:57.535903  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa"
	I0819 19:17:57.577211  438001 logs.go:123] Gathering logs for kube-scheduler [123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f] ...
	I0819 19:17:57.577248  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f"
	I0819 19:17:57.615928  438001 logs.go:123] Gathering logs for kube-proxy [236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a] ...
	I0819 19:17:57.615962  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a"
	I0819 19:17:57.655413  438001 logs.go:123] Gathering logs for storage-provisioner [fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd] ...
	I0819 19:17:57.655445  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd"
	I0819 19:17:57.704470  438001 logs.go:123] Gathering logs for kubelet ...
	I0819 19:17:57.704502  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:18:00.281191  438001 system_pods.go:59] 8 kube-system pods found
	I0819 19:18:00.281223  438001 system_pods.go:61] "coredns-6f6b679f8f-22lbt" [c8a5cabd-41d4-41cb-91c1-2db1f3471db3] Running
	I0819 19:18:00.281228  438001 system_pods.go:61] "etcd-no-preload-278232" [36d555a1-33e4-4c6c-b24e-2fee4fd84f2b] Running
	I0819 19:18:00.281232  438001 system_pods.go:61] "kube-apiserver-no-preload-278232" [af7173e5-c4ac-4ece-b8b9-bb81cb6b9bfd] Running
	I0819 19:18:00.281235  438001 system_pods.go:61] "kube-controller-manager-no-preload-278232" [2463d97a-5221-40ce-8fd7-08151165d6f7] Running
	I0819 19:18:00.281238  438001 system_pods.go:61] "kube-proxy-rcf49" [85d5814a-1ba9-46be-ab11-17bf40c0f029] Running
	I0819 19:18:00.281241  438001 system_pods.go:61] "kube-scheduler-no-preload-278232" [3b327704-f70c-4d6f-a774-15427a305472] Running
	I0819 19:18:00.281247  438001 system_pods.go:61] "metrics-server-6867b74b74-vxwrs" [e8b74128-b393-4f0f-90fe-e05f20d54acd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:18:00.281252  438001 system_pods.go:61] "storage-provisioner" [24766475-1a5b-4f1a-9350-3e891b5272cc] Running
	I0819 19:18:00.281260  438001 system_pods.go:74] duration metric: took 3.913361626s to wait for pod list to return data ...
	I0819 19:18:00.281267  438001 default_sa.go:34] waiting for default service account to be created ...
	I0819 19:18:00.283873  438001 default_sa.go:45] found service account: "default"
	I0819 19:18:00.283898  438001 default_sa.go:55] duration metric: took 2.625775ms for default service account to be created ...
	I0819 19:18:00.283907  438001 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 19:18:00.288985  438001 system_pods.go:86] 8 kube-system pods found
	I0819 19:18:00.289012  438001 system_pods.go:89] "coredns-6f6b679f8f-22lbt" [c8a5cabd-41d4-41cb-91c1-2db1f3471db3] Running
	I0819 19:18:00.289018  438001 system_pods.go:89] "etcd-no-preload-278232" [36d555a1-33e4-4c6c-b24e-2fee4fd84f2b] Running
	I0819 19:18:00.289022  438001 system_pods.go:89] "kube-apiserver-no-preload-278232" [af7173e5-c4ac-4ece-b8b9-bb81cb6b9bfd] Running
	I0819 19:18:00.289028  438001 system_pods.go:89] "kube-controller-manager-no-preload-278232" [2463d97a-5221-40ce-8fd7-08151165d6f7] Running
	I0819 19:18:00.289033  438001 system_pods.go:89] "kube-proxy-rcf49" [85d5814a-1ba9-46be-ab11-17bf40c0f029] Running
	I0819 19:18:00.289038  438001 system_pods.go:89] "kube-scheduler-no-preload-278232" [3b327704-f70c-4d6f-a774-15427a305472] Running
	I0819 19:18:00.289047  438001 system_pods.go:89] "metrics-server-6867b74b74-vxwrs" [e8b74128-b393-4f0f-90fe-e05f20d54acd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:18:00.289056  438001 system_pods.go:89] "storage-provisioner" [24766475-1a5b-4f1a-9350-3e891b5272cc] Running
	I0819 19:18:00.289067  438001 system_pods.go:126] duration metric: took 5.154385ms to wait for k8s-apps to be running ...
	I0819 19:18:00.289081  438001 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 19:18:00.289132  438001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:18:00.307128  438001 system_svc.go:56] duration metric: took 18.036826ms WaitForService to wait for kubelet
	I0819 19:18:00.307160  438001 kubeadm.go:582] duration metric: took 4m24.706461383s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 19:18:00.307183  438001 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:18:00.309818  438001 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:18:00.309866  438001 node_conditions.go:123] node cpu capacity is 2
	I0819 19:18:00.309879  438001 node_conditions.go:105] duration metric: took 2.691554ms to run NodePressure ...
	I0819 19:18:00.309892  438001 start.go:241] waiting for startup goroutines ...
	I0819 19:18:00.309901  438001 start.go:246] waiting for cluster config update ...
	I0819 19:18:00.309918  438001 start.go:255] writing updated cluster config ...
	I0819 19:18:00.310268  438001 ssh_runner.go:195] Run: rm -f paused
	I0819 19:18:00.366211  438001 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 19:18:00.368280  438001 out.go:177] * Done! kubectl is now configured to use "no-preload-278232" cluster and "default" namespace by default
	I0819 19:17:58.890611  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:17:58.890832  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:18:18.891960  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:18:18.892243  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:18:58.894609  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:18:58.894854  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:18:58.894869  438716 kubeadm.go:310] 
	I0819 19:18:58.894912  438716 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 19:18:58.894967  438716 kubeadm.go:310] 		timed out waiting for the condition
	I0819 19:18:58.894981  438716 kubeadm.go:310] 
	I0819 19:18:58.895024  438716 kubeadm.go:310] 	This error is likely caused by:
	I0819 19:18:58.895072  438716 kubeadm.go:310] 		- The kubelet is not running
	I0819 19:18:58.895344  438716 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 19:18:58.895388  438716 kubeadm.go:310] 
	I0819 19:18:58.895518  438716 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 19:18:58.895613  438716 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 19:18:58.895668  438716 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 19:18:58.895695  438716 kubeadm.go:310] 
	I0819 19:18:58.895839  438716 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 19:18:58.895959  438716 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 19:18:58.895972  438716 kubeadm.go:310] 
	I0819 19:18:58.896072  438716 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 19:18:58.896154  438716 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 19:18:58.896220  438716 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 19:18:58.896284  438716 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 19:18:58.896314  438716 kubeadm.go:310] 
	I0819 19:18:58.896819  438716 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 19:18:58.896946  438716 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 19:18:58.897028  438716 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0819 19:18:58.897193  438716 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0819 19:18:58.897249  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 19:18:59.361073  438716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:18:59.375791  438716 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:18:59.387650  438716 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:18:59.387697  438716 kubeadm.go:157] found existing configuration files:
	
	I0819 19:18:59.387756  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 19:18:59.397345  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:18:59.397409  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:18:59.408060  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 19:18:59.417658  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:18:59.417731  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:18:59.427765  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 19:18:59.437636  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:18:59.437712  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:18:59.447506  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 19:18:59.457100  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:18:59.457165  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 19:18:59.467185  438716 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 19:18:59.540706  438716 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 19:18:59.541005  438716 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 19:18:59.694109  438716 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 19:18:59.694238  438716 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 19:18:59.694350  438716 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 19:18:59.874268  438716 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 19:18:59.876259  438716 out.go:235]   - Generating certificates and keys ...
	I0819 19:18:59.876362  438716 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 19:18:59.876441  438716 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 19:18:59.876569  438716 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 19:18:59.876654  438716 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 19:18:59.876751  438716 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 19:18:59.876824  438716 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 19:18:59.876900  438716 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 19:18:59.877076  438716 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 19:18:59.877571  438716 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 19:18:59.877997  438716 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 19:18:59.878139  438716 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 19:18:59.878241  438716 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 19:19:00.153380  438716 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 19:19:00.359863  438716 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 19:19:00.470797  438716 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 19:19:00.590041  438716 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 19:19:00.614332  438716 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 19:19:00.615415  438716 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 19:19:00.615473  438716 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 19:19:00.756167  438716 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 19:19:00.757737  438716 out.go:235]   - Booting up control plane ...
	I0819 19:19:00.757873  438716 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 19:19:00.761484  438716 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 19:19:00.762431  438716 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 19:19:00.763241  438716 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 19:19:00.766155  438716 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 19:19:40.770166  438716 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 19:19:40.770378  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:19:40.770543  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:19:45.771352  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:19:45.771587  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:19:55.772027  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:19:55.772243  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:20:15.773008  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:20:15.773238  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:20:55.771311  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:20:55.771517  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:20:55.771530  438716 kubeadm.go:310] 
	I0819 19:20:55.771578  438716 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 19:20:55.771750  438716 kubeadm.go:310] 		timed out waiting for the condition
	I0819 19:20:55.771784  438716 kubeadm.go:310] 
	I0819 19:20:55.771845  438716 kubeadm.go:310] 	This error is likely caused by:
	I0819 19:20:55.771891  438716 kubeadm.go:310] 		- The kubelet is not running
	I0819 19:20:55.772014  438716 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 19:20:55.772027  438716 kubeadm.go:310] 
	I0819 19:20:55.772125  438716 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 19:20:55.772162  438716 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 19:20:55.772188  438716 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 19:20:55.772196  438716 kubeadm.go:310] 
	I0819 19:20:55.772272  438716 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 19:20:55.772336  438716 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 19:20:55.772343  438716 kubeadm.go:310] 
	I0819 19:20:55.772439  438716 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 19:20:55.772520  438716 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 19:20:55.772581  438716 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 19:20:55.772637  438716 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 19:20:55.772645  438716 kubeadm.go:310] 
	I0819 19:20:55.773758  438716 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 19:20:55.773880  438716 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 19:20:55.773971  438716 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0819 19:20:55.774067  438716 kubeadm.go:394] duration metric: took 7m57.361589371s to StartCluster
	I0819 19:20:55.774157  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:20:55.774243  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:20:55.818428  438716 cri.go:89] found id: ""
	I0819 19:20:55.818460  438716 logs.go:276] 0 containers: []
	W0819 19:20:55.818468  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:20:55.818475  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:20:55.818535  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:20:55.857714  438716 cri.go:89] found id: ""
	I0819 19:20:55.857747  438716 logs.go:276] 0 containers: []
	W0819 19:20:55.857758  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:20:55.857766  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:20:55.857841  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:20:55.891917  438716 cri.go:89] found id: ""
	I0819 19:20:55.891948  438716 logs.go:276] 0 containers: []
	W0819 19:20:55.891967  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:20:55.891976  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:20:55.892046  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:20:55.930608  438716 cri.go:89] found id: ""
	I0819 19:20:55.930643  438716 logs.go:276] 0 containers: []
	W0819 19:20:55.930656  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:20:55.930665  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:20:55.930734  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:20:55.966563  438716 cri.go:89] found id: ""
	I0819 19:20:55.966591  438716 logs.go:276] 0 containers: []
	W0819 19:20:55.966600  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:20:55.966607  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:20:55.966670  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:20:56.010392  438716 cri.go:89] found id: ""
	I0819 19:20:56.010421  438716 logs.go:276] 0 containers: []
	W0819 19:20:56.010430  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:20:56.010436  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:20:56.010491  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:20:56.066940  438716 cri.go:89] found id: ""
	I0819 19:20:56.066973  438716 logs.go:276] 0 containers: []
	W0819 19:20:56.066985  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:20:56.066994  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:20:56.067062  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:20:56.118852  438716 cri.go:89] found id: ""
	I0819 19:20:56.118881  438716 logs.go:276] 0 containers: []
	W0819 19:20:56.118894  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:20:56.118909  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:20:56.118925  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:20:56.158224  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:20:56.158263  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:20:56.211882  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:20:56.211925  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:20:56.228082  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:20:56.228124  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:20:56.307857  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:20:56.307880  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:20:56.307893  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0819 19:20:56.414797  438716 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0819 19:20:56.414885  438716 out.go:270] * 
	W0819 19:20:56.415020  438716 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 19:20:56.415039  438716 out.go:270] * 
	W0819 19:20:56.416031  438716 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 19:20:56.419869  438716 out.go:201] 
	W0819 19:20:56.421262  438716 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 19:20:56.421319  438716 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0819 19:20:56.421351  438716 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0819 19:20:56.422942  438716 out.go:201] 
	
	
	==> CRI-O <==
	Aug 19 19:20:58 old-k8s-version-104669 crio[655]: time="2024-08-19 19:20:58.397198926Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095258397171706,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9360763f-c02d-4214-93f9-0fb160736f8e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:20:58 old-k8s-version-104669 crio[655]: time="2024-08-19 19:20:58.397768750Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2ed1d87e-e3ab-4363-ad30-a84dd6da87f4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:20:58 old-k8s-version-104669 crio[655]: time="2024-08-19 19:20:58.397834799Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2ed1d87e-e3ab-4363-ad30-a84dd6da87f4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:20:58 old-k8s-version-104669 crio[655]: time="2024-08-19 19:20:58.397867557Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2ed1d87e-e3ab-4363-ad30-a84dd6da87f4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:20:58 old-k8s-version-104669 crio[655]: time="2024-08-19 19:20:58.434225404Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6b84d44b-96f9-406b-86a1-db9149554934 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:20:58 old-k8s-version-104669 crio[655]: time="2024-08-19 19:20:58.434312596Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6b84d44b-96f9-406b-86a1-db9149554934 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:20:58 old-k8s-version-104669 crio[655]: time="2024-08-19 19:20:58.436135686Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=774bd812-d63a-46b1-ab5c-1b6dfd90d1b6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:20:58 old-k8s-version-104669 crio[655]: time="2024-08-19 19:20:58.436555031Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095258436534145,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=774bd812-d63a-46b1-ab5c-1b6dfd90d1b6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:20:58 old-k8s-version-104669 crio[655]: time="2024-08-19 19:20:58.437181987Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a1cb2fb9-6792-4944-ba16-efa66333544a name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:20:58 old-k8s-version-104669 crio[655]: time="2024-08-19 19:20:58.437279361Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a1cb2fb9-6792-4944-ba16-efa66333544a name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:20:58 old-k8s-version-104669 crio[655]: time="2024-08-19 19:20:58.437326423Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a1cb2fb9-6792-4944-ba16-efa66333544a name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:20:58 old-k8s-version-104669 crio[655]: time="2024-08-19 19:20:58.468899450Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=81f34885-2357-4b14-8a68-1ec6977c4267 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:20:58 old-k8s-version-104669 crio[655]: time="2024-08-19 19:20:58.468985937Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=81f34885-2357-4b14-8a68-1ec6977c4267 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:20:58 old-k8s-version-104669 crio[655]: time="2024-08-19 19:20:58.470579071Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=44a3911c-888c-45f2-a48e-8b5a5067494f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:20:58 old-k8s-version-104669 crio[655]: time="2024-08-19 19:20:58.470922517Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095258470904032,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=44a3911c-888c-45f2-a48e-8b5a5067494f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:20:58 old-k8s-version-104669 crio[655]: time="2024-08-19 19:20:58.471490998Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9ac10d24-377f-47eb-b8c5-e2fb1c87f469 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:20:58 old-k8s-version-104669 crio[655]: time="2024-08-19 19:20:58.471552845Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9ac10d24-377f-47eb-b8c5-e2fb1c87f469 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:20:58 old-k8s-version-104669 crio[655]: time="2024-08-19 19:20:58.471590758Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=9ac10d24-377f-47eb-b8c5-e2fb1c87f469 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:20:58 old-k8s-version-104669 crio[655]: time="2024-08-19 19:20:58.514012453Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3efd9d29-d021-48ee-9d54-dc3f4fa1ce92 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:20:58 old-k8s-version-104669 crio[655]: time="2024-08-19 19:20:58.514142442Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3efd9d29-d021-48ee-9d54-dc3f4fa1ce92 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:20:58 old-k8s-version-104669 crio[655]: time="2024-08-19 19:20:58.515385800Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=004bbae2-6801-4285-b912-211d7a28e546 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:20:58 old-k8s-version-104669 crio[655]: time="2024-08-19 19:20:58.515789668Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095258515767069,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=004bbae2-6801-4285-b912-211d7a28e546 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:20:58 old-k8s-version-104669 crio[655]: time="2024-08-19 19:20:58.516352161Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4621ef0e-9c4c-4a56-8be9-cdd215b4b384 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:20:58 old-k8s-version-104669 crio[655]: time="2024-08-19 19:20:58.516434423Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4621ef0e-9c4c-4a56-8be9-cdd215b4b384 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:20:58 old-k8s-version-104669 crio[655]: time="2024-08-19 19:20:58.516482563Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=4621ef0e-9c4c-4a56-8be9-cdd215b4b384 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug19 19:12] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050789] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041369] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.978049] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.658614] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.655874] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.305057] systemd-fstab-generator[577]: Ignoring "noauto" option for root device
	[  +0.056862] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065398] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.183560] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.167037] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.268786] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +6.546314] systemd-fstab-generator[904]: Ignoring "noauto" option for root device
	[  +0.062802] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.070433] systemd-fstab-generator[1029]: Ignoring "noauto" option for root device
	[Aug19 19:13] kauditd_printk_skb: 46 callbacks suppressed
	[Aug19 19:17] systemd-fstab-generator[5079]: Ignoring "noauto" option for root device
	[Aug19 19:18] systemd-fstab-generator[5368]: Ignoring "noauto" option for root device
	[  +0.069874] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:20:58 up 8 min,  0 users,  load average: 0.05, 0.13, 0.09
	Linux old-k8s-version-104669 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 19 19:20:55 old-k8s-version-104669 kubelet[5551]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Aug 19 19:20:55 old-k8s-version-104669 kubelet[5551]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc0009b74a0, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc0009949c0, 0x24, 0x0, ...)
	Aug 19 19:20:55 old-k8s-version-104669 kubelet[5551]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Aug 19 19:20:55 old-k8s-version-104669 kubelet[5551]: net.(*Dialer).DialContext(0xc0001730e0, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc0009949c0, 0x24, 0x0, 0x0, 0x0, ...)
	Aug 19 19:20:55 old-k8s-version-104669 kubelet[5551]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Aug 19 19:20:55 old-k8s-version-104669 kubelet[5551]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc0008597a0, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc0009949c0, 0x24, 0x60, 0x7f38eca57668, 0x118, ...)
	Aug 19 19:20:55 old-k8s-version-104669 kubelet[5551]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Aug 19 19:20:55 old-k8s-version-104669 kubelet[5551]: net/http.(*Transport).dial(0xc000609040, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc0009949c0, 0x24, 0x0, 0x0, 0x0, ...)
	Aug 19 19:20:55 old-k8s-version-104669 kubelet[5551]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Aug 19 19:20:55 old-k8s-version-104669 kubelet[5551]: net/http.(*Transport).dialConn(0xc000609040, 0x4f7fe00, 0xc000052030, 0x0, 0xc0009e8300, 0x5, 0xc0009949c0, 0x24, 0x0, 0xc00098d200, ...)
	Aug 19 19:20:55 old-k8s-version-104669 kubelet[5551]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Aug 19 19:20:55 old-k8s-version-104669 kubelet[5551]: net/http.(*Transport).dialConnFor(0xc000609040, 0xc000992580)
	Aug 19 19:20:55 old-k8s-version-104669 kubelet[5551]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Aug 19 19:20:55 old-k8s-version-104669 kubelet[5551]: created by net/http.(*Transport).queueForDial
	Aug 19 19:20:55 old-k8s-version-104669 kubelet[5551]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Aug 19 19:20:55 old-k8s-version-104669 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 19 19:20:55 old-k8s-version-104669 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Aug 19 19:20:55 old-k8s-version-104669 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Aug 19 19:20:55 old-k8s-version-104669 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 19 19:20:55 old-k8s-version-104669 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 19 19:20:56 old-k8s-version-104669 kubelet[5586]: I0819 19:20:56.087916    5586 server.go:416] Version: v1.20.0
	Aug 19 19:20:56 old-k8s-version-104669 kubelet[5586]: I0819 19:20:56.088299    5586 server.go:837] Client rotation is on, will bootstrap in background
	Aug 19 19:20:56 old-k8s-version-104669 kubelet[5586]: I0819 19:20:56.090720    5586 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 19 19:20:56 old-k8s-version-104669 kubelet[5586]: W0819 19:20:56.092519    5586 manager.go:159] Cannot detect current cgroup on cgroup v2
	Aug 19 19:20:56 old-k8s-version-104669 kubelet[5586]: I0819 19:20:56.092994    5586 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
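
The kubeadm output captured above ends with minikube's own suggestion to inspect 'journalctl -xeu kubelet' and to retry with a kubelet cgroup-driver override. As a rough sketch only, with the remaining flags copied from this profile's start command as recorded in the Audit table rather than verified independently, such a retry would look approximately like:

	# retry the old-k8s-version profile with the suggested kubelet cgroup-driver override
	out/minikube-linux-amd64 start -p old-k8s-version-104669 \
	  --memory=2200 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd
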
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-104669 -n old-k8s-version-104669
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-104669 -n old-k8s-version-104669: exit status 2 (234.172376ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-104669" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (749.65s)
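
For reference, the post-mortem above reads a single component's state by passing a Go template to 'minikube status'; exit status 2 here corresponds to the "Stopped" state reported by the harness. The same check, plus the kubelet inspection recommended in the kubeadm output, can be reproduced by hand roughly as follows, reusing the profile name from the log:

	# report only the API server state for this profile
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-104669
	# inspect kubelet health on the node, as suggested in the kubeadm output
	out/minikube-linux-amd64 -p old-k8s-version-104669 ssh "sudo systemctl status kubelet --no-pager"
	out/minikube-linux-amd64 -p old-k8s-version-104669 ssh "sudo journalctl -xeu kubelet -n 100"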

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.41s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-982795 -n default-k8s-diff-port-982795
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-19 19:26:26.128386853 +0000 UTC m=+6133.328313659
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
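The wait above polls for pods labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace of the default-k8s-diff-port-982795 cluster. A minimal manual equivalent, assuming the kubectl context carries the same name as the profile as elsewhere in this report, is:

	# list the dashboard pods the test is waiting for
	kubectl --context default-k8s-diff-port-982795 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# or block until they are Ready, mirroring the test's 9m0s timeout
	kubectl --context default-k8s-diff-port-982795 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m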
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-982795 -n default-k8s-diff-port-982795
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-982795 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-982795 logs -n 25: (2.200278566s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-571803                           | enable-default-cni-571803    | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | sudo cat                                               |                              |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-571803                           | enable-default-cni-571803    | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | sudo containerd config dump                            |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-571803                           | enable-default-cni-571803    | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | sudo systemctl status crio                             |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-571803                           | enable-default-cni-571803    | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | sudo systemctl cat crio                                |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-571803                           | enable-default-cni-571803    | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |         |                     |                     |
	|         | \;                                                     |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-571803                           | enable-default-cni-571803    | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | sudo crio config                                       |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-571803                           | enable-default-cni-571803    | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	| delete  | -p                                                     | disable-driver-mounts-737091 | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | disable-driver-mounts-737091                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-982795 | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:04 UTC |
	|         | default-k8s-diff-port-982795                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-278232             | no-preload-278232            | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC | 19 Aug 24 19:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-278232                                   | no-preload-278232            | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-982795  | default-k8s-diff-port-982795 | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC | 19 Aug 24 19:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-982795 | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC |                     |
	|         | default-k8s-diff-port-982795                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-024748            | embed-certs-024748           | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC | 19 Aug 24 19:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-024748                                  | embed-certs-024748           | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-104669        | old-k8s-version-104669       | jenkins | v1.33.1 | 19 Aug 24 19:06 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-278232                  | no-preload-278232            | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-278232                                   | no-preload-278232            | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC | 19 Aug 24 19:18 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-982795       | default-k8s-diff-port-982795 | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-024748                 | embed-certs-024748           | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-982795 | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC | 19 Aug 24 19:17 UTC |
	|         | default-k8s-diff-port-982795                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-024748                                  | embed-certs-024748           | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC | 19 Aug 24 19:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-104669                              | old-k8s-version-104669       | jenkins | v1.33.1 | 19 Aug 24 19:08 UTC | 19 Aug 24 19:08 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-104669             | old-k8s-version-104669       | jenkins | v1.33.1 | 19 Aug 24 19:08 UTC | 19 Aug 24 19:08 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-104669                              | old-k8s-version-104669       | jenkins | v1.33.1 | 19 Aug 24 19:08 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 19:08:30
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 19:08:30.532545  438716 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:08:30.532649  438716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:08:30.532657  438716 out.go:358] Setting ErrFile to fd 2...
	I0819 19:08:30.532661  438716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:08:30.532811  438716 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-372744/.minikube/bin
	I0819 19:08:30.533379  438716 out.go:352] Setting JSON to false
	I0819 19:08:30.534373  438716 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10253,"bootTime":1724084257,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 19:08:30.534451  438716 start.go:139] virtualization: kvm guest
	I0819 19:08:30.536658  438716 out.go:177] * [old-k8s-version-104669] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 19:08:30.537921  438716 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 19:08:30.537959  438716 notify.go:220] Checking for updates...
	I0819 19:08:30.540501  438716 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 19:08:30.541864  438716 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 19:08:30.543170  438716 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 19:08:30.544395  438716 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 19:08:30.545614  438716 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 19:08:30.547072  438716 config.go:182] Loaded profile config "old-k8s-version-104669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0819 19:08:30.547468  438716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:08:30.547570  438716 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:08:30.563059  438716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34139
	I0819 19:08:30.563506  438716 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:08:30.564068  438716 main.go:141] libmachine: Using API Version  1
	I0819 19:08:30.564091  438716 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:08:30.564474  438716 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:08:30.564719  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:08:30.566599  438716 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0819 19:08:30.568124  438716 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 19:08:30.568503  438716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:08:30.568541  438716 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:08:30.583805  438716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35313
	I0819 19:08:30.584314  438716 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:08:30.584805  438716 main.go:141] libmachine: Using API Version  1
	I0819 19:08:30.584827  438716 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:08:30.585131  438716 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:08:30.585320  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:08:30.621020  438716 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 19:08:30.622137  438716 start.go:297] selected driver: kvm2
	I0819 19:08:30.622158  438716 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-104669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-104669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:08:30.622252  438716 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 19:08:30.622998  438716 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 19:08:30.623082  438716 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19468-372744/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 19:08:30.638616  438716 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 19:08:30.638998  438716 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 19:08:30.639047  438716 cni.go:84] Creating CNI manager for ""
	I0819 19:08:30.639059  438716 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:08:30.639097  438716 start.go:340] cluster config:
	{Name:old-k8s-version-104669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-104669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:08:30.639243  438716 iso.go:125] acquiring lock: {Name:mk4c0ac1c3202b1a296739df622960e7a0bd8566 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 19:08:30.641823  438716 out.go:177] * Starting "old-k8s-version-104669" primary control-plane node in "old-k8s-version-104669" cluster
	I0819 19:08:30.915976  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:08:30.643167  438716 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 19:08:30.643197  438716 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0819 19:08:30.643205  438716 cache.go:56] Caching tarball of preloaded images
	I0819 19:08:30.643300  438716 preload.go:172] Found /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 19:08:30.643311  438716 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0819 19:08:30.643409  438716 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/config.json ...
	I0819 19:08:30.643583  438716 start.go:360] acquireMachinesLock for old-k8s-version-104669: {Name:mk24ba67a747357e9ce40f1e460d2bb0bc59cc75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 19:08:33.988031  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:08:40.067999  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:08:43.140051  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:08:49.219991  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:08:52.292013  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:08:58.371952  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:01.444061  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:07.523958  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:10.595977  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:16.675955  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:19.748037  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:25.828064  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:28.899972  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:34.980044  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:38.052066  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:44.131960  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:47.203926  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:53.283992  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:56.355952  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:02.435994  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:05.508042  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:11.587960  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:14.660027  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:20.740007  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:23.811991  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:29.891998  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:32.963959  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:39.043942  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:42.116029  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:48.195984  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:51.267954  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:57.347922  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:00.419952  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:06.499978  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:09.572013  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:15.652066  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:18.724012  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:24.804001  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:27.875961  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:33.956046  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:37.027998  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:43.108014  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:46.179987  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:49.184190  438245 start.go:364] duration metric: took 4m21.835882225s to acquireMachinesLock for "default-k8s-diff-port-982795"
	I0819 19:11:49.184280  438245 start.go:96] Skipping create...Using existing machine configuration
	I0819 19:11:49.184296  438245 fix.go:54] fixHost starting: 
	I0819 19:11:49.184628  438245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:11:49.184661  438245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:11:49.200544  438245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38241
	I0819 19:11:49.200994  438245 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:11:49.201530  438245 main.go:141] libmachine: Using API Version  1
	I0819 19:11:49.201560  438245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:11:49.201953  438245 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:11:49.202151  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:11:49.202296  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetState
	I0819 19:11:49.203841  438245 fix.go:112] recreateIfNeeded on default-k8s-diff-port-982795: state=Stopped err=<nil>
	I0819 19:11:49.203875  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	W0819 19:11:49.204042  438245 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 19:11:49.205721  438245 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-982795" ...
	I0819 19:11:49.181717  438001 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:11:49.181755  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetMachineName
	I0819 19:11:49.182097  438001 buildroot.go:166] provisioning hostname "no-preload-278232"
	I0819 19:11:49.182131  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetMachineName
	I0819 19:11:49.182392  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:11:49.184006  438001 machine.go:96] duration metric: took 4m37.423775019s to provisionDockerMachine
	I0819 19:11:49.184078  438001 fix.go:56] duration metric: took 4m37.445408913s for fixHost
	I0819 19:11:49.184091  438001 start.go:83] releasing machines lock for "no-preload-278232", held for 4m37.44544277s
	W0819 19:11:49.184116  438001 start.go:714] error starting host: provision: host is not running
	W0819 19:11:49.184274  438001 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0819 19:11:49.184288  438001 start.go:729] Will try again in 5 seconds ...
	I0819 19:11:49.206739  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .Start
	I0819 19:11:49.206892  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Ensuring networks are active...
	I0819 19:11:49.207586  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Ensuring network default is active
	I0819 19:11:49.207947  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Ensuring network mk-default-k8s-diff-port-982795 is active
	I0819 19:11:49.208368  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Getting domain xml...
	I0819 19:11:49.209114  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Creating domain...
	I0819 19:11:50.421290  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting to get IP...
	I0819 19:11:50.422082  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:50.422490  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:50.422562  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:50.422473  439403 retry.go:31] will retry after 273.434317ms: waiting for machine to come up
	I0819 19:11:50.698167  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:50.698598  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:50.698635  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:50.698569  439403 retry.go:31] will retry after 367.841325ms: waiting for machine to come up
	I0819 19:11:51.068401  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:51.068996  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:51.069019  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:51.068942  439403 retry.go:31] will retry after 460.053559ms: waiting for machine to come up
	I0819 19:11:51.530228  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:51.530700  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:51.530730  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:51.530636  439403 retry.go:31] will retry after 498.222116ms: waiting for machine to come up
	I0819 19:11:52.030322  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:52.030771  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:52.030808  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:52.030710  439403 retry.go:31] will retry after 750.75175ms: waiting for machine to come up
	I0819 19:11:54.186765  438001 start.go:360] acquireMachinesLock for no-preload-278232: {Name:mk24ba67a747357e9ce40f1e460d2bb0bc59cc75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 19:11:52.782638  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:52.783001  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:52.783027  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:52.782952  439403 retry.go:31] will retry after 576.883195ms: waiting for machine to come up
	I0819 19:11:53.361702  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:53.362105  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:53.362138  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:53.362035  439403 retry.go:31] will retry after 900.512446ms: waiting for machine to come up
	I0819 19:11:54.264656  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:54.265032  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:54.265052  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:54.264984  439403 retry.go:31] will retry after 1.339005367s: waiting for machine to come up
	I0819 19:11:55.605816  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:55.606348  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:55.606378  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:55.606304  439403 retry.go:31] will retry after 1.517824531s: waiting for machine to come up
	I0819 19:11:57.126027  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:57.126400  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:57.126426  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:57.126340  439403 retry.go:31] will retry after 2.220939365s: waiting for machine to come up
	I0819 19:11:59.348649  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:59.349041  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:59.349072  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:59.348987  439403 retry.go:31] will retry after 2.830298687s: waiting for machine to come up
	I0819 19:12:02.182934  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:02.183398  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:12:02.183422  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:12:02.183348  439403 retry.go:31] will retry after 2.302725829s: waiting for machine to come up
	I0819 19:12:04.487648  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:04.488074  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:12:04.488108  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:12:04.488016  439403 retry.go:31] will retry after 2.932250361s: waiting for machine to come up
	I0819 19:12:08.736669  438295 start.go:364] duration metric: took 4m39.596501254s to acquireMachinesLock for "embed-certs-024748"
	I0819 19:12:08.736755  438295 start.go:96] Skipping create...Using existing machine configuration
	I0819 19:12:08.736776  438295 fix.go:54] fixHost starting: 
	I0819 19:12:08.737277  438295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:08.737326  438295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:08.754873  438295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36829
	I0819 19:12:08.755301  438295 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:08.755839  438295 main.go:141] libmachine: Using API Version  1
	I0819 19:12:08.755866  438295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:08.756184  438295 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:08.756383  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:08.756525  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetState
	I0819 19:12:08.758092  438295 fix.go:112] recreateIfNeeded on embed-certs-024748: state=Stopped err=<nil>
	I0819 19:12:08.758134  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	W0819 19:12:08.758299  438295 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 19:12:08.760922  438295 out.go:177] * Restarting existing kvm2 VM for "embed-certs-024748" ...
	I0819 19:12:08.762335  438295 main.go:141] libmachine: (embed-certs-024748) Calling .Start
	I0819 19:12:08.762509  438295 main.go:141] libmachine: (embed-certs-024748) Ensuring networks are active...
	I0819 19:12:08.763274  438295 main.go:141] libmachine: (embed-certs-024748) Ensuring network default is active
	I0819 19:12:08.763647  438295 main.go:141] libmachine: (embed-certs-024748) Ensuring network mk-embed-certs-024748 is active
	I0819 19:12:08.764057  438295 main.go:141] libmachine: (embed-certs-024748) Getting domain xml...
	I0819 19:12:08.764765  438295 main.go:141] libmachine: (embed-certs-024748) Creating domain...
	I0819 19:12:07.424132  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.424589  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Found IP for machine: 192.168.61.48
	I0819 19:12:07.424615  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Reserving static IP address...
	I0819 19:12:07.424634  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has current primary IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.425178  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Reserved static IP address: 192.168.61.48
	I0819 19:12:07.425205  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for SSH to be available...
	I0819 19:12:07.425237  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-982795", mac: "52:54:00:d4:19:cd", ip: "192.168.61.48"} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:07.425283  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | skip adding static IP to network mk-default-k8s-diff-port-982795 - found existing host DHCP lease matching {name: "default-k8s-diff-port-982795", mac: "52:54:00:d4:19:cd", ip: "192.168.61.48"}
	I0819 19:12:07.425304  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | Getting to WaitForSSH function...
	I0819 19:12:07.427600  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.427969  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:07.428001  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.428179  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | Using SSH client type: external
	I0819 19:12:07.428245  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | Using SSH private key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa (-rw-------)
	I0819 19:12:07.428297  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.48 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 19:12:07.428321  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | About to run SSH command:
	I0819 19:12:07.428339  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | exit 0
	I0819 19:12:07.547727  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | SSH cmd err, output: <nil>: 
	I0819 19:12:07.548095  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetConfigRaw
	I0819 19:12:07.548741  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetIP
	I0819 19:12:07.551308  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.551700  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:07.551733  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.551967  438245 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795/config.json ...
	I0819 19:12:07.552164  438245 machine.go:93] provisionDockerMachine start ...
	I0819 19:12:07.552186  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:12:07.552427  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:07.554782  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.555062  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:07.555080  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.555219  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:07.555427  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:07.555586  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:07.555767  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:07.555912  438245 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:07.556152  438245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0819 19:12:07.556168  438245 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 19:12:07.655996  438245 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 19:12:07.656027  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetMachineName
	I0819 19:12:07.656301  438245 buildroot.go:166] provisioning hostname "default-k8s-diff-port-982795"
	I0819 19:12:07.656329  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetMachineName
	I0819 19:12:07.656530  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:07.658956  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.659311  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:07.659344  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.659439  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:07.659617  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:07.659813  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:07.659937  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:07.660112  438245 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:07.660291  438245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0819 19:12:07.660302  438245 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-982795 && echo "default-k8s-diff-port-982795" | sudo tee /etc/hostname
	I0819 19:12:07.773590  438245 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-982795
	
	I0819 19:12:07.773615  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:07.776994  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.777360  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:07.777399  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.777580  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:07.777860  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:07.778060  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:07.778273  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:07.778457  438245 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:07.778665  438245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0819 19:12:07.778687  438245 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-982795' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-982795/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-982795' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 19:12:07.884662  438245 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:12:07.884718  438245 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19468-372744/.minikube CaCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19468-372744/.minikube}
	I0819 19:12:07.884751  438245 buildroot.go:174] setting up certificates
	I0819 19:12:07.884768  438245 provision.go:84] configureAuth start
	I0819 19:12:07.884782  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetMachineName
	I0819 19:12:07.885101  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetIP
	I0819 19:12:07.887844  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.888262  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:07.888293  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.888439  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:07.890581  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.890977  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:07.891005  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.891136  438245 provision.go:143] copyHostCerts
	I0819 19:12:07.891219  438245 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem, removing ...
	I0819 19:12:07.891240  438245 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 19:12:07.891306  438245 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem (1082 bytes)
	I0819 19:12:07.891398  438245 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem, removing ...
	I0819 19:12:07.891406  438245 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 19:12:07.891430  438245 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem (1123 bytes)
	I0819 19:12:07.891487  438245 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem, removing ...
	I0819 19:12:07.891494  438245 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 19:12:07.891517  438245 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem (1675 bytes)
	I0819 19:12:07.891570  438245 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-982795 san=[127.0.0.1 192.168.61.48 default-k8s-diff-port-982795 localhost minikube]
	I0819 19:12:08.083963  438245 provision.go:177] copyRemoteCerts
	I0819 19:12:08.084024  438245 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 19:12:08.084086  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:08.086637  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.086961  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:08.087005  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.087144  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:08.087357  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:08.087507  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:08.087694  438245 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa Username:docker}
	I0819 19:12:08.166312  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 19:12:08.194124  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0819 19:12:08.221817  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 19:12:08.249674  438245 provision.go:87] duration metric: took 364.885827ms to configureAuth
	I0819 19:12:08.249709  438245 buildroot.go:189] setting minikube options for container-runtime
	I0819 19:12:08.249891  438245 config.go:182] Loaded profile config "default-k8s-diff-port-982795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:12:08.249983  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:08.253045  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.253438  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:08.253469  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.253647  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:08.253856  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:08.254071  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:08.254266  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:08.254481  438245 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:08.254700  438245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0819 19:12:08.254722  438245 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 19:12:08.508775  438245 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 19:12:08.508808  438245 machine.go:96] duration metric: took 956.629475ms to provisionDockerMachine
	I0819 19:12:08.508824  438245 start.go:293] postStartSetup for "default-k8s-diff-port-982795" (driver="kvm2")
	I0819 19:12:08.508838  438245 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 19:12:08.508868  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:12:08.509214  438245 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 19:12:08.509259  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:08.512004  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.512341  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:08.512378  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.512517  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:08.512688  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:08.512867  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:08.513059  438245 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa Username:docker}
	I0819 19:12:08.594287  438245 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 19:12:08.598742  438245 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 19:12:08.598774  438245 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/addons for local assets ...
	I0819 19:12:08.598849  438245 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/files for local assets ...
	I0819 19:12:08.598943  438245 filesync.go:149] local asset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> 3800092.pem in /etc/ssl/certs
	I0819 19:12:08.599029  438245 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 19:12:08.608416  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:12:08.633880  438245 start.go:296] duration metric: took 125.036785ms for postStartSetup
	I0819 19:12:08.633930  438245 fix.go:56] duration metric: took 19.449641939s for fixHost
	I0819 19:12:08.633955  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:08.636729  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.637006  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:08.637030  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.637248  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:08.637483  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:08.637672  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:08.637791  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:08.637954  438245 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:08.638170  438245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0819 19:12:08.638186  438245 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 19:12:08.736519  438245 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724094728.710064462
	
	I0819 19:12:08.736540  438245 fix.go:216] guest clock: 1724094728.710064462
	I0819 19:12:08.736548  438245 fix.go:229] Guest: 2024-08-19 19:12:08.710064462 +0000 UTC Remote: 2024-08-19 19:12:08.633934039 +0000 UTC m=+281.422189217 (delta=76.130423ms)
	I0819 19:12:08.736568  438245 fix.go:200] guest clock delta is within tolerance: 76.130423ms
	I0819 19:12:08.736580  438245 start.go:83] releasing machines lock for "default-k8s-diff-port-982795", held for 19.552337255s
	I0819 19:12:08.736604  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:12:08.736918  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetIP
	I0819 19:12:08.739570  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.740030  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:08.740057  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.740222  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:12:08.740762  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:12:08.740960  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:12:08.741037  438245 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 19:12:08.741100  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:08.741185  438245 ssh_runner.go:195] Run: cat /version.json
	I0819 19:12:08.741206  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:08.743899  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.744037  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.744282  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:08.744304  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.744439  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:08.744576  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:08.744599  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:08.744607  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.744689  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:08.744786  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:08.744858  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:08.744923  438245 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa Username:docker}
	I0819 19:12:08.744997  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:08.745143  438245 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa Username:docker}
	I0819 19:12:08.820672  438245 ssh_runner.go:195] Run: systemctl --version
	I0819 19:12:08.847046  438245 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 19:12:08.989725  438245 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 19:12:08.996607  438245 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 19:12:08.996680  438245 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 19:12:09.013017  438245 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 19:12:09.013067  438245 start.go:495] detecting cgroup driver to use...
	I0819 19:12:09.013144  438245 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 19:12:09.030338  438245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 19:12:09.044580  438245 docker.go:217] disabling cri-docker service (if available) ...
	I0819 19:12:09.044635  438245 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 19:12:09.058825  438245 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 19:12:09.073358  438245 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 19:12:09.194611  438245 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 19:12:09.333368  438245 docker.go:233] disabling docker service ...
	I0819 19:12:09.333446  438245 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 19:12:09.348775  438245 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 19:12:09.362911  438245 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 19:12:09.503015  438245 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 19:12:09.621246  438245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 19:12:09.638480  438245 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 19:12:09.659346  438245 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 19:12:09.659406  438245 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:09.672088  438245 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 19:12:09.672166  438245 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:09.683704  438245 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:09.694847  438245 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:09.706339  438245 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 19:12:09.718658  438245 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:09.730645  438245 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:09.750843  438245 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:09.762551  438245 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 19:12:09.772960  438245 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 19:12:09.773037  438245 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 19:12:09.788362  438245 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 19:12:09.798695  438245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:12:09.923389  438245 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 19:12:10.063317  438245 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 19:12:10.063413  438245 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 19:12:10.068449  438245 start.go:563] Will wait 60s for crictl version
	I0819 19:12:10.068540  438245 ssh_runner.go:195] Run: which crictl
	I0819 19:12:10.072807  438245 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 19:12:10.114058  438245 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 19:12:10.114151  438245 ssh_runner.go:195] Run: crio --version
	I0819 19:12:10.147919  438245 ssh_runner.go:195] Run: crio --version
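Note: the lines above show the runtime being prepared over SSH: /etc/crictl.yaml is pointed at the CRI-O socket, the pause image and cgroup driver are patched into /etc/crio/crio.conf.d/02-crio.conf, br_netfilter and IPv4 forwarding are enabled, and crio is restarted before its version is probed. A minimal sketch of that pattern, assuming a hypothetical run helper that executes one shell command on the guest over SSH (this is an illustration of the steps in the log, not minikube's actual crio.go):

package provision

import "fmt"

// configureCRIO replays the configuration steps visible in the log above.
// run is a hypothetical helper that runs a shell command on the guest.
func configureCRIO(run func(cmd string) error) error {
    steps := []string{
        // point crictl at the CRI-O socket
        `sudo sh -c 'printf "runtime-endpoint: unix:///var/run/crio/crio.sock\n" > /etc/crictl.yaml'`,
        // pin the pause image and switch CRI-O to the cgroupfs driver
        `sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf`,
        `sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
        // make bridged traffic visible to iptables and allow forwarding
        "sudo modprobe br_netfilter",
        `sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`,
        // pick up the new configuration
        "sudo systemctl daemon-reload",
        "sudo systemctl restart crio",
    }
    for _, s := range steps {
        if err := run(s); err != nil {
            return fmt.Errorf("step %q failed: %w", s, err)
        }
    }
    return nil
}
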
	I0819 19:12:10.180009  438245 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 19:12:10.181218  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetIP
	I0819 19:12:10.184626  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:10.185015  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:10.185049  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:10.185243  438245 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0819 19:12:10.189653  438245 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:12:10.203439  438245 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-982795 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-982795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.48 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 19:12:10.203608  438245 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 19:12:10.203668  438245 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:12:10.241427  438245 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 19:12:10.241511  438245 ssh_runner.go:195] Run: which lz4
	I0819 19:12:10.245734  438245 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 19:12:10.250082  438245 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 19:12:10.250112  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 19:12:11.694285  438245 crio.go:462] duration metric: took 1.448590086s to copy over tarball
	I0819 19:12:11.694371  438245 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 19:12:10.028225  438295 main.go:141] libmachine: (embed-certs-024748) Waiting to get IP...
	I0819 19:12:10.029208  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:10.029696  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:10.029752  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:10.029666  439540 retry.go:31] will retry after 276.66184ms: waiting for machine to come up
	I0819 19:12:10.308339  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:10.308762  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:10.308804  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:10.308710  439540 retry.go:31] will retry after 279.376198ms: waiting for machine to come up
	I0819 19:12:10.590326  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:10.591084  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:10.591117  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:10.590861  439540 retry.go:31] will retry after 364.735563ms: waiting for machine to come up
	I0819 19:12:10.957592  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:10.958075  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:10.958100  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:10.958033  439540 retry.go:31] will retry after 384.275284ms: waiting for machine to come up
	I0819 19:12:11.343631  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:11.344169  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:11.344192  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:11.344125  439540 retry.go:31] will retry after 572.182522ms: waiting for machine to come up
	I0819 19:12:11.917660  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:11.918150  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:11.918179  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:11.918093  439540 retry.go:31] will retry after 767.807058ms: waiting for machine to come up
	I0819 19:12:12.687256  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:12.687782  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:12.687815  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:12.687728  439540 retry.go:31] will retry after 715.897037ms: waiting for machine to come up
	I0819 19:12:13.406041  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:13.406653  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:13.406690  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:13.406577  439540 retry.go:31] will retry after 1.301579737s: waiting for machine to come up
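Note: the interleaved embed-certs-024748 lines show libmachine polling libvirt for the VM's DHCP lease and retrying with a growing delay until an address appears (retry.go:31). A rough sketch of that wait loop, assuming a hypothetical lookupIP that asks the hypervisor for the domain's current lease:

package provision

import (
    "fmt"
    "time"
)

// waitForIP polls until the VM has an address, growing the delay between
// attempts much like the retry.go lines above. lookupIP is hypothetical.
func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
    deadline := time.Now().Add(timeout)
    delay := 250 * time.Millisecond
    for time.Now().Before(deadline) {
        if ip, err := lookupIP(); err == nil && ip != "" {
            return ip, nil
        }
        time.Sleep(delay)
        if delay < 5*time.Second {
            delay = delay * 3 / 2 // back off, capped at 5s
        }
    }
    return "", fmt.Errorf("machine did not get an IP within %s", timeout)
}
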
	I0819 19:12:13.847779  438245 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.153373496s)
	I0819 19:12:13.847810  438245 crio.go:469] duration metric: took 2.153488101s to extract the tarball
	I0819 19:12:13.847817  438245 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 19:12:13.885520  438245 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:12:13.929775  438245 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 19:12:13.929809  438245 cache_images.go:84] Images are preloaded, skipping loading
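Note: before copying the 389 MB preload tarball, minikube asks CRI-O which images already exist; after extraction the same check succeeds and image loading is skipped. A small sketch of that decision, assuming the usual `crictl images --output json` shape with an "images" array carrying "repoTags" (the field names are an assumption about crictl's output, not taken from this log):

package provision

import "encoding/json"

// hasImage reports whether the crictl JSON listing contains the given tag.
// crictlJSON is the raw output of `sudo crictl images --output json`.
func hasImage(crictlJSON []byte, tag string) (bool, error) {
    var out struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }
    if err := json.Unmarshal(crictlJSON, &out); err != nil {
        return false, err
    }
    for _, img := range out.Images {
        for _, t := range img.RepoTags {
            if t == tag {
                return true, nil
            }
        }
    }
    return false, nil
}

If the check comes back false (as at 19:12:10.241427 for registry.k8s.io/kube-apiserver:v1.31.0), the tarball is scp'd to /preloaded.tar.lz4 and unpacked with tar -I lz4 -C /var -xf, exactly as the log shows.
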
	I0819 19:12:13.929838  438245 kubeadm.go:934] updating node { 192.168.61.48 8444 v1.31.0 crio true true} ...
	I0819 19:12:13.930019  438245 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-982795 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.48
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-982795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 19:12:13.930113  438245 ssh_runner.go:195] Run: crio config
	I0819 19:12:13.977098  438245 cni.go:84] Creating CNI manager for ""
	I0819 19:12:13.977123  438245 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:12:13.977136  438245 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 19:12:13.977176  438245 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.48 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-982795 NodeName:default-k8s-diff-port-982795 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 19:12:13.977382  438245 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.48
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-982795"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
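Note: the kubeadm config above is generated per profile: the advertise address, bind port (8444 here), node name and node-ip from the kubeadm options line are substituted into an otherwise fixed document. A minimal illustration of that substitution with text/template (the template fragment, struct and function names below are illustrative placeholders, not minikube's actual template):

package provision

import (
    "os"
    "text/template"
)

type kubeadmParams struct {
    NodeName string
    NodeIP   string
    BindPort int
}

// initCfg renders only the InitConfiguration portion as an example.
var initCfg = template.Must(template.New("init").Parse(`apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`))

// writeInitConfig writes the rendered fragment to path.
func writeInitConfig(path string, p kubeadmParams) error {
    f, err := os.Create(path)
    if err != nil {
        return err
    }
    defer f.Close()
    return initCfg.Execute(f, p)
}
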
	
	I0819 19:12:13.977461  438245 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 19:12:13.987276  438245 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 19:12:13.987381  438245 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 19:12:13.996666  438245 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0819 19:12:14.013822  438245 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 19:12:14.030936  438245 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0819 19:12:14.048575  438245 ssh_runner.go:195] Run: grep 192.168.61.48	control-plane.minikube.internal$ /etc/hosts
	I0819 19:12:14.052809  438245 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.48	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:12:14.065177  438245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:12:14.185159  438245 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:12:14.202906  438245 certs.go:68] Setting up /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795 for IP: 192.168.61.48
	I0819 19:12:14.202934  438245 certs.go:194] generating shared ca certs ...
	I0819 19:12:14.202966  438245 certs.go:226] acquiring lock for ca certs: {Name:mk639e03f593e0bccac045f6e9f5ba3b96cc81e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:14.203184  438245 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key
	I0819 19:12:14.203266  438245 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key
	I0819 19:12:14.203282  438245 certs.go:256] generating profile certs ...
	I0819 19:12:14.203399  438245 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795/client.key
	I0819 19:12:14.203487  438245 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795/apiserver.key.a3c7a519
	I0819 19:12:14.203552  438245 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795/proxy-client.key
	I0819 19:12:14.203757  438245 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem (1338 bytes)
	W0819 19:12:14.203820  438245 certs.go:480] ignoring /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009_empty.pem, impossibly tiny 0 bytes
	I0819 19:12:14.203834  438245 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 19:12:14.203866  438245 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem (1082 bytes)
	I0819 19:12:14.203899  438245 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem (1123 bytes)
	I0819 19:12:14.203929  438245 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem (1675 bytes)
	I0819 19:12:14.203994  438245 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:12:14.205025  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 19:12:14.258243  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 19:12:14.295380  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 19:12:14.330511  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 19:12:14.358547  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0819 19:12:14.386938  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 19:12:14.415021  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 19:12:14.439531  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 19:12:14.463969  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 19:12:14.487638  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem --> /usr/share/ca-certificates/380009.pem (1338 bytes)
	I0819 19:12:14.511571  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /usr/share/ca-certificates/3800092.pem (1708 bytes)
	I0819 19:12:14.535223  438245 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 19:12:14.552922  438245 ssh_runner.go:195] Run: openssl version
	I0819 19:12:14.559078  438245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 19:12:14.570605  438245 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:14.575411  438245 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:14.575484  438245 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:14.581714  438245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 19:12:14.592896  438245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/380009.pem && ln -fs /usr/share/ca-certificates/380009.pem /etc/ssl/certs/380009.pem"
	I0819 19:12:14.604306  438245 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/380009.pem
	I0819 19:12:14.609139  438245 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:56 /usr/share/ca-certificates/380009.pem
	I0819 19:12:14.609212  438245 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/380009.pem
	I0819 19:12:14.615160  438245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/380009.pem /etc/ssl/certs/51391683.0"
	I0819 19:12:14.626010  438245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3800092.pem && ln -fs /usr/share/ca-certificates/3800092.pem /etc/ssl/certs/3800092.pem"
	I0819 19:12:14.636821  438245 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3800092.pem
	I0819 19:12:14.641308  438245 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:56 /usr/share/ca-certificates/3800092.pem
	I0819 19:12:14.641358  438245 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3800092.pem
	I0819 19:12:14.646898  438245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3800092.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 19:12:14.657905  438245 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 19:12:14.662780  438245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 19:12:14.668934  438245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 19:12:14.674693  438245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 19:12:14.680683  438245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 19:12:14.686689  438245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 19:12:14.692678  438245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
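Note: the `openssl x509 ... -checkend 86400` calls above verify that each control-plane certificate remains valid for at least one more day before it is reused. The equivalent check written directly in Go (a sketch; path handling and error wrapping are kept minimal):

package provision

import (
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "os"
    "time"
)

// validForADay mirrors `openssl x509 -checkend 86400`: it fails if the
// certificate at path expires within the next 24 hours.
func validForADay(path string) error {
    data, err := os.ReadFile(path)
    if err != nil {
        return err
    }
    block, _ := pem.Decode(data)
    if block == nil {
        return fmt.Errorf("%s: no PEM data", path)
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        return err
    }
    if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
        return fmt.Errorf("%s expires at %s", path, cert.NotAfter)
    }
    return nil
}
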
	I0819 19:12:14.698784  438245 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-982795 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-982795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.48 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:12:14.698930  438245 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 19:12:14.699006  438245 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:12:14.740881  438245 cri.go:89] found id: ""
	I0819 19:12:14.740964  438245 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 19:12:14.751589  438245 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 19:12:14.751613  438245 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 19:12:14.751665  438245 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 19:12:14.761837  438245 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:12:14.762870  438245 kubeconfig.go:125] found "default-k8s-diff-port-982795" server: "https://192.168.61.48:8444"
	I0819 19:12:14.765176  438245 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 19:12:14.775114  438245 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.48
	I0819 19:12:14.775147  438245 kubeadm.go:1160] stopping kube-system containers ...
	I0819 19:12:14.775161  438245 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 19:12:14.775228  438245 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:12:14.811373  438245 cri.go:89] found id: ""
	I0819 19:12:14.811442  438245 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 19:12:14.829656  438245 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:12:14.840215  438245 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:12:14.840236  438245 kubeadm.go:157] found existing configuration files:
	
	I0819 19:12:14.840288  438245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0819 19:12:14.850017  438245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:12:14.850075  438245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:12:14.860060  438245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0819 19:12:14.869589  438245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:12:14.869645  438245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:12:14.879249  438245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0819 19:12:14.888475  438245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:12:14.888532  438245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:12:14.898151  438245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0819 19:12:14.907628  438245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:12:14.907737  438245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 19:12:14.917581  438245 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 19:12:14.927119  438245 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:15.037162  438245 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:16.355430  438245 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.318225023s)
	I0819 19:12:16.355461  438245 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:16.566565  438245 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:16.649402  438245 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
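Note: because existing configuration files were found (kubeadm.go:408), minikube takes the cluster-restart path: rather than a full `kubeadm init`, it replays individual init phases — certs, kubeconfig, kubelet-start, control-plane, etcd — against the regenerated /var/tmp/minikube/kubeadm.yaml. A sketch of that sequence, again assuming a hypothetical SSH run helper:

package provision

import "fmt"

// replayInitPhases runs the kubeadm init phases used on restart, in the same
// order as the log above. run is a hypothetical SSH command helper.
func replayInitPhases(run func(cmd string) error) error {
    const env = `sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH"`
    for _, phase := range []string{
        "certs all",
        "kubeconfig all",
        "kubelet-start",
        "control-plane all",
        "etcd local",
    } {
        cmd := fmt.Sprintf("%s kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml", env, phase)
        if err := run(cmd); err != nil {
            return fmt.Errorf("phase %q: %w", phase, err)
        }
    }
    return nil
}
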
	I0819 19:12:16.775956  438245 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:12:16.776067  438245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:14.709988  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:14.710397  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:14.710429  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:14.710338  439540 retry.go:31] will retry after 1.420823505s: waiting for machine to come up
	I0819 19:12:16.133160  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:16.133558  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:16.133587  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:16.133531  439540 retry.go:31] will retry after 1.71697779s: waiting for machine to come up
	I0819 19:12:17.852342  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:17.852884  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:17.852922  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:17.852836  439540 retry.go:31] will retry after 2.816782354s: waiting for machine to come up
	I0819 19:12:17.277067  438245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:17.777027  438245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:17.797513  438245 api_server.go:72] duration metric: took 1.021572879s to wait for apiserver process to appear ...
	I0819 19:12:17.797554  438245 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:12:17.797596  438245 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8444/healthz ...
	I0819 19:12:17.798191  438245 api_server.go:269] stopped: https://192.168.61.48:8444/healthz: Get "https://192.168.61.48:8444/healthz": dial tcp 192.168.61.48:8444: connect: connection refused
	I0819 19:12:18.297907  438245 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8444/healthz ...
	I0819 19:12:20.177305  438245 api_server.go:279] https://192.168.61.48:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 19:12:20.177345  438245 api_server.go:103] status: https://192.168.61.48:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 19:12:20.177367  438245 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8444/healthz ...
	I0819 19:12:20.244091  438245 api_server.go:279] https://192.168.61.48:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 19:12:20.244140  438245 api_server.go:103] status: https://192.168.61.48:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 19:12:20.298403  438245 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8444/healthz ...
	I0819 19:12:20.304289  438245 api_server.go:279] https://192.168.61.48:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:12:20.304325  438245 api_server.go:103] status: https://192.168.61.48:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:12:20.797876  438245 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8444/healthz ...
	I0819 19:12:20.803894  438245 api_server.go:279] https://192.168.61.48:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:12:20.803935  438245 api_server.go:103] status: https://192.168.61.48:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:12:21.298284  438245 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8444/healthz ...
	I0819 19:12:21.320292  438245 api_server.go:279] https://192.168.61.48:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:12:21.320320  438245 api_server.go:103] status: https://192.168.61.48:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:12:21.797829  438245 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8444/healthz ...
	I0819 19:12:21.802183  438245 api_server.go:279] https://192.168.61.48:8444/healthz returned 200:
	ok
	I0819 19:12:21.809866  438245 api_server.go:141] control plane version: v1.31.0
	I0819 19:12:21.809902  438245 api_server.go:131] duration metric: took 4.012339897s to wait for apiserver health ...
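Note: the healthz wait above polls https://192.168.61.48:8444/healthz roughly every 500 ms, treating connection refused, 403 (anonymous access not yet bootstrapped) and 500 (poststart hooks still failing) as "not ready yet" and stopping on the first 200. A condensed version of that probe (sketch only; certificate verification is skipped here for simplicity, which is an assumption rather than a statement about api_server.go):

package provision

import (
    "crypto/tls"
    "fmt"
    "net/http"
    "time"
)

// waitForHealthz polls the apiserver's /healthz endpoint until it returns
// HTTP 200 or the timeout elapses.
func waitForHealthz(host string, port int, timeout time.Duration) error {
    client := &http.Client{
        Timeout:   2 * time.Second,
        Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    }
    url := fmt.Sprintf("https://%s:%d/healthz", host, port)
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        resp, err := client.Get(url)
        if err == nil {
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                return nil // body is "ok"
            }
        }
        time.Sleep(500 * time.Millisecond)
    }
    return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}
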
	I0819 19:12:21.809914  438245 cni.go:84] Creating CNI manager for ""
	I0819 19:12:21.809944  438245 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:12:21.811668  438245 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 19:12:21.813183  438245 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 19:12:21.826170  438245 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
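Note: with the apiserver healthy, minikube writes a 496-byte bridge conflist to /etc/cni/net.d/1-k8s.conflist so pods on the 10.244.0.0/16 pod CIDR get connectivity. The exact file contents are not shown in the log; the snippet below is a typical bridge + portmap conflist of that shape, embedded as a Go string purely for illustration:

package provision

// exampleConflist is an illustrative bridge CNI configuration; it is not
// necessarily byte-for-byte what minikube writes to 1-k8s.conflist.
const exampleConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}`
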
	I0819 19:12:21.850473  438245 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:12:21.865379  438245 system_pods.go:59] 8 kube-system pods found
	I0819 19:12:21.865422  438245 system_pods.go:61] "coredns-6f6b679f8f-dwbnt" [9b8d7ee3-15ca-475b-b659-d5c3b10890fe] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 19:12:21.865442  438245 system_pods.go:61] "etcd-default-k8s-diff-port-982795" [6686e6f6-485d-4c57-89a1-af4f27b6216e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 19:12:21.865455  438245 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-982795" [fcfb5a0d-6d6c-4c30-a17f-43106f3dd5ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 19:12:21.865475  438245 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-982795" [346bf3b5-57e7-4f30-a6ed-959dc9e8941d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 19:12:21.865485  438245 system_pods.go:61] "kube-proxy-wrczx" [acabdc8e-5397-4531-afcb-57a8f4c48618] Running
	I0819 19:12:21.865493  438245 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-982795" [82de0c57-e712-4c0c-b751-a17cb0dd75b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 19:12:21.865503  438245 system_pods.go:61] "metrics-server-6867b74b74-5hlnx" [394c87af-a198-4fea-8a30-32a8c3e80884] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:12:21.865522  438245 system_pods.go:61] "storage-provisioner" [35f70989-846d-4ec5-b879-a22625ee94ce] Running
	I0819 19:12:21.865534  438245 system_pods.go:74] duration metric: took 15.035147ms to wait for pod list to return data ...
	I0819 19:12:21.865545  438245 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:12:21.870314  438245 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:12:21.870350  438245 node_conditions.go:123] node cpu capacity is 2
	I0819 19:12:21.870366  438245 node_conditions.go:105] duration metric: took 4.813819ms to run NodePressure ...
	I0819 19:12:21.870390  438245 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:22.130916  438245 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 19:12:22.134889  438245 kubeadm.go:739] kubelet initialised
	I0819 19:12:22.134912  438245 kubeadm.go:740] duration metric: took 3.970465ms waiting for restarted kubelet to initialise ...
	I0819 19:12:22.134920  438245 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:12:22.139345  438245 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-dwbnt" in "kube-system" namespace to be "Ready" ...
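	The pod_ready wait above follows a poll-until-ready-or-timeout pattern: check the pod's Ready condition on an interval until it succeeds or the 4m0s budget runs out. A minimal Go sketch of that pattern, assuming a hypothetical waitUntilReady helper and a stand-in readiness predicate (this is not minikube's actual implementation):

	package main

	import (
		"context"
		"errors"
		"fmt"
		"time"
	)

	// waitUntilReady is a hypothetical helper: it calls check() every interval
	// and returns nil as soon as check reports ready, or an error when the
	// context deadline expires.
	func waitUntilReady(ctx context.Context, interval time.Duration, check func() (bool, error)) error {
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			ready, err := check()
			if err != nil {
				return err
			}
			if ready {
				return nil
			}
			select {
			case <-ctx.Done():
				return errors.New("timed out waiting for readiness")
			case <-ticker.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		start := time.Now()
		// Stand-in predicate: pretend the pod turns Ready after ~9s, similar to
		// the coredns pod in this log.
		err := waitUntilReady(ctx, 2*time.Second, func() (bool, error) {
			return time.Since(start) > 9*time.Second, nil
		})
		fmt.Println("wait result:", err)
	}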
	I0819 19:12:20.672189  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:20.672655  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:20.672682  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:20.672613  439540 retry.go:31] will retry after 2.76896974s: waiting for machine to come up
	I0819 19:12:23.442804  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:23.443223  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:23.443268  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:23.443170  439540 retry.go:31] will retry after 4.199459292s: waiting for machine to come up
	I0819 19:12:24.145329  438245 pod_ready.go:103] pod "coredns-6f6b679f8f-dwbnt" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:26.645695  438245 pod_ready.go:103] pod "coredns-6f6b679f8f-dwbnt" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:27.644842  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.645376  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has current primary IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.645403  438295 main.go:141] libmachine: (embed-certs-024748) Found IP for machine: 192.168.72.96
	I0819 19:12:27.645417  438295 main.go:141] libmachine: (embed-certs-024748) Reserving static IP address...
	I0819 19:12:27.645874  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "embed-certs-024748", mac: "52:54:00:f0:8b:43", ip: "192.168.72.96"} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:27.645902  438295 main.go:141] libmachine: (embed-certs-024748) Reserved static IP address: 192.168.72.96
	I0819 19:12:27.645919  438295 main.go:141] libmachine: (embed-certs-024748) DBG | skip adding static IP to network mk-embed-certs-024748 - found existing host DHCP lease matching {name: "embed-certs-024748", mac: "52:54:00:f0:8b:43", ip: "192.168.72.96"}
	I0819 19:12:27.645952  438295 main.go:141] libmachine: (embed-certs-024748) Waiting for SSH to be available...
	I0819 19:12:27.645974  438295 main.go:141] libmachine: (embed-certs-024748) DBG | Getting to WaitForSSH function...
	I0819 19:12:27.648195  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.648471  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:27.648496  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.648717  438295 main.go:141] libmachine: (embed-certs-024748) DBG | Using SSH client type: external
	I0819 19:12:27.648744  438295 main.go:141] libmachine: (embed-certs-024748) DBG | Using SSH private key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa (-rw-------)
	I0819 19:12:27.648773  438295 main.go:141] libmachine: (embed-certs-024748) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.96 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 19:12:27.648792  438295 main.go:141] libmachine: (embed-certs-024748) DBG | About to run SSH command:
	I0819 19:12:27.648808  438295 main.go:141] libmachine: (embed-certs-024748) DBG | exit 0
	I0819 19:12:27.775964  438295 main.go:141] libmachine: (embed-certs-024748) DBG | SSH cmd err, output: <nil>: 
	I0819 19:12:27.776344  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetConfigRaw
	I0819 19:12:27.777100  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetIP
	I0819 19:12:27.780096  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.780535  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:27.780570  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.780936  438295 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748/config.json ...
	I0819 19:12:27.781721  438295 machine.go:93] provisionDockerMachine start ...
	I0819 19:12:27.781748  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:27.781974  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:27.784482  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.784838  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:27.784868  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.785066  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:27.785254  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:27.785452  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:27.785617  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:27.785789  438295 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:27.786038  438295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0819 19:12:27.786059  438295 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 19:12:27.904337  438295 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 19:12:27.904375  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetMachineName
	I0819 19:12:27.904675  438295 buildroot.go:166] provisioning hostname "embed-certs-024748"
	I0819 19:12:27.904711  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetMachineName
	I0819 19:12:27.904932  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:27.907960  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.908325  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:27.908354  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.908446  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:27.908659  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:27.908825  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:27.909012  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:27.909234  438295 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:27.909441  438295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0819 19:12:27.909458  438295 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-024748 && echo "embed-certs-024748" | sudo tee /etc/hostname
	I0819 19:12:28.036564  438295 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-024748
	
	I0819 19:12:28.036597  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:28.039385  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.039798  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:28.039827  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.040071  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:28.040327  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:28.040493  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:28.040652  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:28.040882  438295 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:28.041113  438295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0819 19:12:28.041138  438295 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-024748' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-024748/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-024748' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 19:12:28.162311  438295 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:12:28.162348  438295 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19468-372744/.minikube CaCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19468-372744/.minikube}
	I0819 19:12:28.162368  438295 buildroot.go:174] setting up certificates
	I0819 19:12:28.162376  438295 provision.go:84] configureAuth start
	I0819 19:12:28.162385  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetMachineName
	I0819 19:12:28.162703  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetIP
	I0819 19:12:28.165171  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.165563  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:28.165593  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.165727  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:28.167917  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.168199  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:28.168221  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.168411  438295 provision.go:143] copyHostCerts
	I0819 19:12:28.168469  438295 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem, removing ...
	I0819 19:12:28.168491  438295 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 19:12:28.168560  438295 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem (1082 bytes)
	I0819 19:12:28.168693  438295 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem, removing ...
	I0819 19:12:28.168704  438295 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 19:12:28.168736  438295 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem (1123 bytes)
	I0819 19:12:28.168814  438295 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem, removing ...
	I0819 19:12:28.168824  438295 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 19:12:28.168853  438295 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem (1675 bytes)
	I0819 19:12:28.168942  438295 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem org=jenkins.embed-certs-024748 san=[127.0.0.1 192.168.72.96 embed-certs-024748 localhost minikube]
	I0819 19:12:28.447064  438295 provision.go:177] copyRemoteCerts
	I0819 19:12:28.447129  438295 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 19:12:28.447158  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:28.449851  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.450138  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:28.450163  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.450344  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:28.450541  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:28.450713  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:28.450832  438295 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa Username:docker}
	I0819 19:12:28.537815  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 19:12:28.562408  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0819 19:12:28.586728  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 19:12:28.611119  438295 provision.go:87] duration metric: took 448.726133ms to configureAuth
	I0819 19:12:28.611158  438295 buildroot.go:189] setting minikube options for container-runtime
	I0819 19:12:28.611351  438295 config.go:182] Loaded profile config "embed-certs-024748": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:12:28.611428  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:28.614168  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.614543  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:28.614571  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.614736  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:28.614941  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:28.615083  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:28.615192  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:28.615302  438295 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:28.615454  438295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0819 19:12:28.615469  438295 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 19:12:28.890054  438295 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 19:12:28.890086  438295 machine.go:96] duration metric: took 1.10834874s to provisionDockerMachine
	I0819 19:12:28.890100  438295 start.go:293] postStartSetup for "embed-certs-024748" (driver="kvm2")
	I0819 19:12:28.890120  438295 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 19:12:28.890146  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:28.890469  438295 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 19:12:28.890499  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:28.893251  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.893579  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:28.893605  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.893733  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:28.893895  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:28.894102  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:28.894220  438295 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa Username:docker}
	I0819 19:12:28.979381  438295 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 19:12:28.983921  438295 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 19:12:28.983952  438295 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/addons for local assets ...
	I0819 19:12:28.984048  438295 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/files for local assets ...
	I0819 19:12:28.984156  438295 filesync.go:149] local asset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> 3800092.pem in /etc/ssl/certs
	I0819 19:12:28.984250  438295 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 19:12:28.994964  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:12:29.018801  438295 start.go:296] duration metric: took 128.685446ms for postStartSetup
	I0819 19:12:29.018843  438295 fix.go:56] duration metric: took 20.282076509s for fixHost
	I0819 19:12:29.018870  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:29.021554  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:29.021848  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:29.021875  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:29.022066  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:29.022261  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:29.022428  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:29.022526  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:29.022678  438295 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:29.022900  438295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0819 19:12:29.022915  438295 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 19:12:29.132976  438716 start.go:364] duration metric: took 3m58.489348567s to acquireMachinesLock for "old-k8s-version-104669"
	I0819 19:12:29.133047  438716 start.go:96] Skipping create...Using existing machine configuration
	I0819 19:12:29.133055  438716 fix.go:54] fixHost starting: 
	I0819 19:12:29.133485  438716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:29.133524  438716 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:29.151330  438716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39213
	I0819 19:12:29.151778  438716 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:29.152271  438716 main.go:141] libmachine: Using API Version  1
	I0819 19:12:29.152301  438716 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:29.152682  438716 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:29.152883  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:12:29.153065  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetState
	I0819 19:12:29.154399  438716 fix.go:112] recreateIfNeeded on old-k8s-version-104669: state=Stopped err=<nil>
	I0819 19:12:29.154444  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	W0819 19:12:29.154684  438716 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 19:12:29.156349  438716 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-104669" ...
	I0819 19:12:29.157631  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .Start
	I0819 19:12:29.157825  438716 main.go:141] libmachine: (old-k8s-version-104669) Ensuring networks are active...
	I0819 19:12:29.158635  438716 main.go:141] libmachine: (old-k8s-version-104669) Ensuring network default is active
	I0819 19:12:29.159041  438716 main.go:141] libmachine: (old-k8s-version-104669) Ensuring network mk-old-k8s-version-104669 is active
	I0819 19:12:29.159509  438716 main.go:141] libmachine: (old-k8s-version-104669) Getting domain xml...
	I0819 19:12:29.160383  438716 main.go:141] libmachine: (old-k8s-version-104669) Creating domain...
	I0819 19:12:30.452488  438716 main.go:141] libmachine: (old-k8s-version-104669) Waiting to get IP...
	I0819 19:12:30.453743  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:30.454237  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:30.454323  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:30.454193  439728 retry.go:31] will retry after 197.440033ms: waiting for machine to come up
	I0819 19:12:29.132812  438295 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724094749.105537362
	
	I0819 19:12:29.132839  438295 fix.go:216] guest clock: 1724094749.105537362
	I0819 19:12:29.132850  438295 fix.go:229] Guest: 2024-08-19 19:12:29.105537362 +0000 UTC Remote: 2024-08-19 19:12:29.018848957 +0000 UTC m=+300.015027560 (delta=86.688405ms)
	I0819 19:12:29.132877  438295 fix.go:200] guest clock delta is within tolerance: 86.688405ms
	I0819 19:12:29.132884  438295 start.go:83] releasing machines lock for "embed-certs-024748", held for 20.396159242s
	I0819 19:12:29.132912  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:29.133179  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetIP
	I0819 19:12:29.136110  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:29.136532  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:29.136565  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:29.136750  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:29.137307  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:29.137532  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:29.137616  438295 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 19:12:29.137690  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:29.137758  438295 ssh_runner.go:195] Run: cat /version.json
	I0819 19:12:29.137781  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:29.140500  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:29.140820  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:29.140870  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:29.140903  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:29.141067  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:29.141266  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:29.141385  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:29.141430  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:29.141443  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:29.141586  438295 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa Username:docker}
	I0819 19:12:29.141639  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:29.141790  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:29.141957  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:29.142123  438295 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa Username:docker}
	I0819 19:12:29.242886  438295 ssh_runner.go:195] Run: systemctl --version
	I0819 19:12:29.249276  438295 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 19:12:29.393872  438295 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 19:12:29.401874  438295 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 19:12:29.401954  438295 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 19:12:29.421973  438295 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 19:12:29.422004  438295 start.go:495] detecting cgroup driver to use...
	I0819 19:12:29.422081  438295 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 19:12:29.442823  438295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 19:12:29.462663  438295 docker.go:217] disabling cri-docker service (if available) ...
	I0819 19:12:29.462720  438295 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 19:12:29.477896  438295 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 19:12:29.492591  438295 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 19:12:29.613759  438295 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 19:12:29.770719  438295 docker.go:233] disabling docker service ...
	I0819 19:12:29.770805  438295 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 19:12:29.785787  438295 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 19:12:29.802879  438295 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 19:12:29.947633  438295 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 19:12:30.082602  438295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 19:12:30.097628  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 19:12:30.118671  438295 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 19:12:30.118735  438295 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:30.131287  438295 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 19:12:30.131354  438295 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:30.143008  438295 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:30.156358  438295 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:30.172123  438295 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 19:12:30.188196  438295 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:30.201487  438295 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:30.219887  438295 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:30.235685  438295 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 19:12:30.246112  438295 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 19:12:30.246202  438295 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 19:12:30.259732  438295 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 19:12:30.269866  438295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:12:30.397522  438295 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 19:12:30.545249  438295 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 19:12:30.545349  438295 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 19:12:30.550473  438295 start.go:563] Will wait 60s for crictl version
	I0819 19:12:30.550528  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:12:30.554782  438295 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 19:12:30.597634  438295 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 19:12:30.597736  438295 ssh_runner.go:195] Run: crio --version
	I0819 19:12:30.628137  438295 ssh_runner.go:195] Run: crio --version
	I0819 19:12:30.660912  438295 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 19:12:29.146475  438245 pod_ready.go:103] pod "coredns-6f6b679f8f-dwbnt" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:31.147618  438245 pod_ready.go:93] pod "coredns-6f6b679f8f-dwbnt" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:31.147651  438245 pod_ready.go:82] duration metric: took 9.00827926s for pod "coredns-6f6b679f8f-dwbnt" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.147665  438245 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.153305  438245 pod_ready.go:93] pod "etcd-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:31.153331  438245 pod_ready.go:82] duration metric: took 5.657625ms for pod "etcd-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.153347  438245 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.159009  438245 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:31.159037  438245 pod_ready.go:82] duration metric: took 5.680194ms for pod "kube-apiserver-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.159050  438245 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.165478  438245 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:31.165504  438245 pod_ready.go:82] duration metric: took 6.444529ms for pod "kube-controller-manager-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.165517  438245 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-wrczx" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.180293  438245 pod_ready.go:93] pod "kube-proxy-wrczx" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:31.180324  438245 pod_ready.go:82] duration metric: took 14.798883ms for pod "kube-proxy-wrczx" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.180337  438245 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:30.662168  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetIP
	I0819 19:12:30.665057  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:30.665455  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:30.665486  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:30.665660  438295 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0819 19:12:30.669911  438295 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:12:30.682755  438295 kubeadm.go:883] updating cluster {Name:embed-certs-024748 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-024748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.96 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 19:12:30.682883  438295 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 19:12:30.682936  438295 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:12:30.724160  438295 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 19:12:30.724233  438295 ssh_runner.go:195] Run: which lz4
	I0819 19:12:30.728710  438295 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 19:12:30.733279  438295 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 19:12:30.733317  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 19:12:32.178568  438295 crio.go:462] duration metric: took 1.449881121s to copy over tarball
	I0819 19:12:32.178642  438295 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 19:12:30.653917  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:30.654521  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:30.654566  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:30.654436  439728 retry.go:31] will retry after 317.038756ms: waiting for machine to come up
	I0819 19:12:30.973003  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:30.973530  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:30.973560  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:30.973487  439728 retry.go:31] will retry after 486.945032ms: waiting for machine to come up
	I0819 19:12:31.461937  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:31.462438  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:31.462470  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:31.462389  439728 retry.go:31] will retry after 441.288745ms: waiting for machine to come up
	I0819 19:12:31.904947  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:31.905564  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:31.905617  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:31.905472  439728 retry.go:31] will retry after 752.583403ms: waiting for machine to come up
	I0819 19:12:32.659642  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:32.660175  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:32.660207  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:32.660128  439728 retry.go:31] will retry after 932.705928ms: waiting for machine to come up
	I0819 19:12:33.594983  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:33.595529  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:33.595556  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:33.595466  439728 retry.go:31] will retry after 936.558157ms: waiting for machine to come up
	I0819 19:12:34.533158  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:34.533717  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:34.533743  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:34.533656  439728 retry.go:31] will retry after 1.435945188s: waiting for machine to come up
	I0819 19:12:33.186835  438245 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:35.187500  438245 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:35.686905  438245 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:35.686932  438245 pod_ready.go:82] duration metric: took 4.50658625s for pod "kube-scheduler-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:35.686945  438245 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:34.321347  438295 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.14267077s)
	I0819 19:12:34.321379  438295 crio.go:469] duration metric: took 2.142777016s to extract the tarball
	I0819 19:12:34.321390  438295 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 19:12:34.357670  438295 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:12:34.403313  438295 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 19:12:34.403344  438295 cache_images.go:84] Images are preloaded, skipping loading
	I0819 19:12:34.403358  438295 kubeadm.go:934] updating node { 192.168.72.96 8443 v1.31.0 crio true true} ...
	I0819 19:12:34.403495  438295 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-024748 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.96
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-024748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 19:12:34.403576  438295 ssh_runner.go:195] Run: crio config
	I0819 19:12:34.450415  438295 cni.go:84] Creating CNI manager for ""
	I0819 19:12:34.450443  438295 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:12:34.450461  438295 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 19:12:34.450490  438295 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.96 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-024748 NodeName:embed-certs-024748 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.96"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.96 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 19:12:34.450646  438295 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.96
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-024748"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.96
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.96"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
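(Editor's note: the rendered config above bundles InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration in one multi-document YAML; a few lines below it is copied to /var/tmp/minikube/kubeadm.yaml.new on the node. As a hedged aside, not part of the minikube flow, a file like this could be sanity-checked with kubeadm's "config validate" subcommand, available in v1.26 and later; exact handling of the embedded component configs can vary by version.)

    # Illustrative only; assumes the kubeadm binary and config path shown in this log.
    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new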
	
	I0819 19:12:34.450723  438295 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 19:12:34.461183  438295 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 19:12:34.461313  438295 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 19:12:34.470516  438295 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0819 19:12:34.488844  438295 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 19:12:34.505450  438295 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0819 19:12:34.522456  438295 ssh_runner.go:195] Run: grep 192.168.72.96	control-plane.minikube.internal$ /etc/hosts
	I0819 19:12:34.526272  438295 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.96	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
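(Editor's note: the two commands above are an idempotent /etc/hosts update: first check whether the control-plane name is already mapped, and only if not, strip any stale line for that name, append a fresh IP-to-name entry, and copy the temp file back with sudo. A readability-only paraphrase, using the same values as this run:)

    # Sketch of the pattern above; regex dots left unescaped for brevity.
    NAME=control-plane.minikube.internal
    IP=192.168.72.96
    if ! grep -q "^${IP}[[:space:]]*${NAME}\$" /etc/hosts; then
      { grep -v "[[:space:]]${NAME}\$" /etc/hosts; printf '%s\t%s\n' "${IP}" "${NAME}"; } > /tmp/hosts.$$
      sudo cp /tmp/hosts.$$ /etc/hosts
    fi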
	I0819 19:12:34.539079  438295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:12:34.665665  438295 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:12:34.683237  438295 certs.go:68] Setting up /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748 for IP: 192.168.72.96
	I0819 19:12:34.683265  438295 certs.go:194] generating shared ca certs ...
	I0819 19:12:34.683287  438295 certs.go:226] acquiring lock for ca certs: {Name:mk639e03f593e0bccac045f6e9f5ba3b96cc81e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:34.683471  438295 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key
	I0819 19:12:34.683536  438295 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key
	I0819 19:12:34.683550  438295 certs.go:256] generating profile certs ...
	I0819 19:12:34.683687  438295 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748/client.key
	I0819 19:12:34.683776  438295 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748/apiserver.key.89193d03
	I0819 19:12:34.683828  438295 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748/proxy-client.key
	I0819 19:12:34.683991  438295 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem (1338 bytes)
	W0819 19:12:34.684035  438295 certs.go:480] ignoring /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009_empty.pem, impossibly tiny 0 bytes
	I0819 19:12:34.684047  438295 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 19:12:34.684074  438295 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem (1082 bytes)
	I0819 19:12:34.684112  438295 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem (1123 bytes)
	I0819 19:12:34.684159  438295 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem (1675 bytes)
	I0819 19:12:34.684224  438295 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:12:34.685127  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 19:12:34.718591  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 19:12:34.758439  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 19:12:34.790143  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 19:12:34.828113  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0819 19:12:34.860389  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 19:12:34.898361  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 19:12:34.924677  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 19:12:34.951630  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /usr/share/ca-certificates/3800092.pem (1708 bytes)
	I0819 19:12:34.977435  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 19:12:35.002048  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem --> /usr/share/ca-certificates/380009.pem (1338 bytes)
	I0819 19:12:35.026934  438295 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 19:12:35.044476  438295 ssh_runner.go:195] Run: openssl version
	I0819 19:12:35.050174  438295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3800092.pem && ln -fs /usr/share/ca-certificates/3800092.pem /etc/ssl/certs/3800092.pem"
	I0819 19:12:35.061299  438295 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3800092.pem
	I0819 19:12:35.065978  438295 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:56 /usr/share/ca-certificates/3800092.pem
	I0819 19:12:35.066047  438295 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3800092.pem
	I0819 19:12:35.072572  438295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3800092.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 19:12:35.083760  438295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 19:12:35.094492  438295 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:35.099152  438295 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:35.099229  438295 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:35.105124  438295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 19:12:35.115950  438295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/380009.pem && ln -fs /usr/share/ca-certificates/380009.pem /etc/ssl/certs/380009.pem"
	I0819 19:12:35.126845  438295 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/380009.pem
	I0819 19:12:35.131568  438295 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:56 /usr/share/ca-certificates/380009.pem
	I0819 19:12:35.131650  438295 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/380009.pem
	I0819 19:12:35.137851  438295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/380009.pem /etc/ssl/certs/51391683.0"
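(Editor's note: the hash-and-symlink pairs above follow OpenSSL's CA directory convention: a CA is looked up by its subject-name hash, so each PEM gets a "hash.0" symlink in /etc/ssl/certs. A minimal sketch of the same step for the minikube CA, using values from this run:)

    # OpenSSL resolves CAs in /etc/ssl/certs by subject-hash filename.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"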
	I0819 19:12:35.148818  438295 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 19:12:35.153800  438295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 19:12:35.159720  438295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 19:12:35.165740  438295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 19:12:35.171705  438295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 19:12:35.177574  438295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 19:12:35.183935  438295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
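(Editor's note: every openssl check above uses the same test: -checkend 86400 exits 0 only if the certificate will still be valid 86400 seconds, i.e. 24 hours, from now, a quick way to confirm none of the existing control-plane certs are about to expire. For example:)

    # Exit status 0: still valid in 24h; non-zero: expired or expiring within 24h.
    sudo openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      && echo "certificate ok for another 24h" \
      || echo "certificate expired or expiring soon"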
	I0819 19:12:35.192681  438295 kubeadm.go:392] StartCluster: {Name:embed-certs-024748 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-024748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.96 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:12:35.192845  438295 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 19:12:35.192908  438295 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:12:35.231688  438295 cri.go:89] found id: ""
	I0819 19:12:35.231791  438295 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 19:12:35.242835  438295 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 19:12:35.242859  438295 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 19:12:35.242944  438295 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 19:12:35.255695  438295 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:12:35.257036  438295 kubeconfig.go:125] found "embed-certs-024748" server: "https://192.168.72.96:8443"
	I0819 19:12:35.259422  438295 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 19:12:35.271730  438295 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.96
	I0819 19:12:35.271758  438295 kubeadm.go:1160] stopping kube-system containers ...
	I0819 19:12:35.271772  438295 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 19:12:35.271820  438295 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:12:35.321065  438295 cri.go:89] found id: ""
	I0819 19:12:35.321155  438295 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 19:12:35.337802  438295 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:12:35.347699  438295 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:12:35.347726  438295 kubeadm.go:157] found existing configuration files:
	
	I0819 19:12:35.347785  438295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 19:12:35.357108  438295 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:12:35.357178  438295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:12:35.366805  438295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 19:12:35.376864  438295 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:12:35.376938  438295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:12:35.387018  438295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 19:12:35.396966  438295 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:12:35.397045  438295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:12:35.406192  438295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 19:12:35.415325  438295 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:12:35.415401  438295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 19:12:35.424450  438295 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 19:12:35.433931  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:35.549294  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:36.306930  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:36.517086  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:36.587680  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:36.680728  438295 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:12:36.680825  438295 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:37.181054  438295 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:37.681059  438295 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:38.181588  438295 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:38.197155  438295 api_server.go:72] duration metric: took 1.516436456s to wait for apiserver process to appear ...
	I0819 19:12:38.197184  438295 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:12:38.197212  438295 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0819 19:12:35.971138  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:35.971576  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:35.971607  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:35.971514  439728 retry.go:31] will retry after 1.521077744s: waiting for machine to come up
	I0819 19:12:37.493931  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:37.494389  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:37.494415  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:37.494361  439728 retry.go:31] will retry after 1.632508579s: waiting for machine to come up
	I0819 19:12:39.128939  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:39.129429  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:39.129456  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:39.129392  439728 retry.go:31] will retry after 2.634061376s: waiting for machine to come up
	I0819 19:12:40.567608  438295 api_server.go:279] https://192.168.72.96:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 19:12:40.567654  438295 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 19:12:40.567669  438295 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0819 19:12:40.593405  438295 api_server.go:279] https://192.168.72.96:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 19:12:40.593456  438295 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 19:12:40.697607  438295 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0819 19:12:40.713767  438295 api_server.go:279] https://192.168.72.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:12:40.713806  438295 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:12:41.197299  438295 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0819 19:12:41.203307  438295 api_server.go:279] https://192.168.72.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:12:41.203338  438295 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:12:41.697903  438295 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0819 19:12:41.705142  438295 api_server.go:279] https://192.168.72.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:12:41.705174  438295 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:12:42.197361  438295 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0819 19:12:42.202272  438295 api_server.go:279] https://192.168.72.96:8443/healthz returned 200:
	ok
	I0819 19:12:42.209788  438295 api_server.go:141] control plane version: v1.31.0
	I0819 19:12:42.209819  438295 api_server.go:131] duration metric: took 4.012627755s to wait for apiserver health ...
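(Editor's note: the 403 -> 500 -> 200 progression above is the expected restart pattern: the anonymous probe is rejected while the rbac/bootstrap-roles post-start hook has not yet re-created the role permitting unauthenticated reads of /healthz, the endpoint then returns 500 until the remaining hooks finish, and finally 200. A rough manual equivalent of the probe, not what the harness actually runs; -k skips TLS verification for brevity:)

    curl -k https://192.168.72.96:8443/healthz
    # Append ?verbose for the same per-check breakdown shown in the 500 responses above.
    curl -k "https://192.168.72.96:8443/healthz?verbose"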
	I0819 19:12:42.209829  438295 cni.go:84] Creating CNI manager for ""
	I0819 19:12:42.209836  438295 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:12:42.211612  438295 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 19:12:37.693171  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:39.693397  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:41.693523  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:42.212889  438295 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 19:12:42.223277  438295 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 19:12:42.242392  438295 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:12:42.256273  438295 system_pods.go:59] 8 kube-system pods found
	I0819 19:12:42.256321  438295 system_pods.go:61] "coredns-6f6b679f8f-7ww4z" [bbde00d4-6027-4d8d-b51e-bd68915da166] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 19:12:42.256331  438295 system_pods.go:61] "etcd-embed-certs-024748" [846ff0f0-5399-43fd-8e7b-1f64997cd291] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 19:12:42.256348  438295 system_pods.go:61] "kube-apiserver-embed-certs-024748" [3ff558d6-e82e-47a0-bb81-15244bee6470] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 19:12:42.256366  438295 system_pods.go:61] "kube-controller-manager-embed-certs-024748" [993b82ba-e8e7-4896-a06b-87c4f08d5985] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 19:12:42.256383  438295 system_pods.go:61] "kube-proxy-bmmbh" [1f77f152-f5f4-40f6-9632-1eaa36b9ea31] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0819 19:12:42.256393  438295 system_pods.go:61] "kube-scheduler-embed-certs-024748" [34684d4c-2479-45c5-883b-158cf9f974f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 19:12:42.256403  438295 system_pods.go:61] "metrics-server-6867b74b74-kxcwh" [15f86629-d916-4fdc-9ecf-9cb1b6c83f85] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:12:42.256409  438295 system_pods.go:61] "storage-provisioner" [7acb6ce1-21b6-4cdd-a5cb-76d694fc0a38] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0819 19:12:42.256418  438295 system_pods.go:74] duration metric: took 14.004598ms to wait for pod list to return data ...
	I0819 19:12:42.256428  438295 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:12:42.263308  438295 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:12:42.263340  438295 node_conditions.go:123] node cpu capacity is 2
	I0819 19:12:42.263354  438295 node_conditions.go:105] duration metric: took 6.920993ms to run NodePressure ...
	I0819 19:12:42.263376  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:42.533917  438295 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 19:12:42.545853  438295 kubeadm.go:739] kubelet initialised
	I0819 19:12:42.545886  438295 kubeadm.go:740] duration metric: took 11.931664ms waiting for restarted kubelet to initialise ...
	I0819 19:12:42.545899  438295 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:12:42.553125  438295 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-7ww4z" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:42.559120  438295 pod_ready.go:98] node "embed-certs-024748" hosting pod "coredns-6f6b679f8f-7ww4z" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:42.559148  438295 pod_ready.go:82] duration metric: took 5.984169ms for pod "coredns-6f6b679f8f-7ww4z" in "kube-system" namespace to be "Ready" ...
	E0819 19:12:42.559158  438295 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-024748" hosting pod "coredns-6f6b679f8f-7ww4z" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:42.559164  438295 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:42.564830  438295 pod_ready.go:98] node "embed-certs-024748" hosting pod "etcd-embed-certs-024748" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:42.564852  438295 pod_ready.go:82] duration metric: took 5.681326ms for pod "etcd-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	E0819 19:12:42.564860  438295 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-024748" hosting pod "etcd-embed-certs-024748" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:42.564867  438295 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:42.571982  438295 pod_ready.go:98] node "embed-certs-024748" hosting pod "kube-apiserver-embed-certs-024748" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:42.572027  438295 pod_ready.go:82] duration metric: took 7.150945ms for pod "kube-apiserver-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	E0819 19:12:42.572038  438295 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-024748" hosting pod "kube-apiserver-embed-certs-024748" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:42.572045  438295 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:42.648692  438295 pod_ready.go:98] node "embed-certs-024748" hosting pod "kube-controller-manager-embed-certs-024748" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:42.648721  438295 pod_ready.go:82] duration metric: took 76.665633ms for pod "kube-controller-manager-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	E0819 19:12:42.648730  438295 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-024748" hosting pod "kube-controller-manager-embed-certs-024748" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:42.648737  438295 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-bmmbh" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:43.045619  438295 pod_ready.go:98] node "embed-certs-024748" hosting pod "kube-proxy-bmmbh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:43.045648  438295 pod_ready.go:82] duration metric: took 396.90414ms for pod "kube-proxy-bmmbh" in "kube-system" namespace to be "Ready" ...
	E0819 19:12:43.045658  438295 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-024748" hosting pod "kube-proxy-bmmbh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:43.045665  438295 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:43.446302  438295 pod_ready.go:98] node "embed-certs-024748" hosting pod "kube-scheduler-embed-certs-024748" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:43.446331  438295 pod_ready.go:82] duration metric: took 400.658861ms for pod "kube-scheduler-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	E0819 19:12:43.446342  438295 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-024748" hosting pod "kube-scheduler-embed-certs-024748" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:43.446359  438295 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:43.845457  438295 pod_ready.go:98] node "embed-certs-024748" hosting pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:43.845488  438295 pod_ready.go:82] duration metric: took 399.120328ms for pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace to be "Ready" ...
	E0819 19:12:43.845499  438295 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-024748" hosting pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:43.845506  438295 pod_ready.go:39] duration metric: took 1.299593775s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
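(Editor's note: each per-pod wait above is skipped with an error because the node itself is not yet Ready; the harness falls through and relies on the later node_ready wait. A rough kubectl analogue of one of these per-label waits, purely illustrative since the harness does this in Go:)

    kubectl --context embed-certs-024748 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=4m0s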
	I0819 19:12:43.845526  438295 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 19:12:43.864357  438295 ops.go:34] apiserver oom_adj: -16
	I0819 19:12:43.864384  438295 kubeadm.go:597] duration metric: took 8.621518076s to restartPrimaryControlPlane
	I0819 19:12:43.864394  438295 kubeadm.go:394] duration metric: took 8.671725617s to StartCluster
	I0819 19:12:43.864414  438295 settings.go:142] acquiring lock: {Name:mk396fcf49a1d0e69583cf37ff3c819e37118163 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:43.864495  438295 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 19:12:43.866775  438295 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/kubeconfig: {Name:mk8e7b4e1bb7da665111d2acd83eb48882c66853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:43.867073  438295 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.96 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 19:12:43.867296  438295 config.go:182] Loaded profile config "embed-certs-024748": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:12:43.867195  438295 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 19:12:43.867354  438295 addons.go:69] Setting metrics-server=true in profile "embed-certs-024748"
	I0819 19:12:43.867362  438295 addons.go:69] Setting default-storageclass=true in profile "embed-certs-024748"
	I0819 19:12:43.867397  438295 addons.go:234] Setting addon metrics-server=true in "embed-certs-024748"
	I0819 19:12:43.867402  438295 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-024748"
	W0819 19:12:43.867409  438295 addons.go:243] addon metrics-server should already be in state true
	I0819 19:12:43.867437  438295 host.go:66] Checking if "embed-certs-024748" exists ...
	I0819 19:12:43.867354  438295 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-024748"
	I0819 19:12:43.867502  438295 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-024748"
	W0819 19:12:43.867514  438295 addons.go:243] addon storage-provisioner should already be in state true
	I0819 19:12:43.867538  438295 host.go:66] Checking if "embed-certs-024748" exists ...
	I0819 19:12:43.867761  438295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:43.867796  438295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:43.867839  438295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:43.867873  438295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:43.867889  438295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:43.867908  438295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:43.869989  438295 out.go:177] * Verifying Kubernetes components...
	I0819 19:12:43.871464  438295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:12:43.883655  438295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33557
	I0819 19:12:43.883871  438295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33763
	I0819 19:12:43.884279  438295 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:43.884323  438295 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:43.884790  438295 main.go:141] libmachine: Using API Version  1
	I0819 19:12:43.884809  438295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:43.884935  438295 main.go:141] libmachine: Using API Version  1
	I0819 19:12:43.884953  438295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:43.885204  438295 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:43.885275  438295 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:43.885380  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetState
	I0819 19:12:43.885886  438295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:43.885928  438295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:43.886840  438295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40467
	I0819 19:12:43.887309  438295 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:43.887792  438295 main.go:141] libmachine: Using API Version  1
	I0819 19:12:43.887802  438295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:43.888109  438295 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:43.888670  438295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:43.888697  438295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:43.888973  438295 addons.go:234] Setting addon default-storageclass=true in "embed-certs-024748"
	W0819 19:12:43.888988  438295 addons.go:243] addon default-storageclass should already be in state true
	I0819 19:12:43.889020  438295 host.go:66] Checking if "embed-certs-024748" exists ...
	I0819 19:12:43.889270  438295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:43.889304  438295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:43.905278  438295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40907
	I0819 19:12:43.905278  438295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41133
	I0819 19:12:43.905734  438295 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:43.905877  438295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33393
	I0819 19:12:43.905983  438295 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:43.906299  438295 main.go:141] libmachine: Using API Version  1
	I0819 19:12:43.906320  438295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:43.906366  438295 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:43.906443  438295 main.go:141] libmachine: Using API Version  1
	I0819 19:12:43.906457  438295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:43.906822  438295 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:43.906898  438295 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:43.906995  438295 main.go:141] libmachine: Using API Version  1
	I0819 19:12:43.907006  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetState
	I0819 19:12:43.907012  438295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:43.907371  438295 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:43.907473  438295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:43.907523  438295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:43.907534  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetState
	I0819 19:12:43.909443  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:43.909529  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:43.911431  438295 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 19:12:43.911437  438295 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:12:43.913061  438295 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 19:12:43.913090  438295 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 19:12:43.913115  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:43.913180  438295 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:12:43.913199  438295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 19:12:43.913216  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:43.916642  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:43.916813  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:43.917110  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:43.917135  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:43.917166  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:43.917193  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:43.917463  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:43.917668  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:43.917671  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:43.917846  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:43.917867  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:43.918014  438295 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa Username:docker}
	I0819 19:12:43.918032  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:43.918148  438295 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa Username:docker}
	I0819 19:12:43.926337  438295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46687
	I0819 19:12:43.926813  438295 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:43.927333  438295 main.go:141] libmachine: Using API Version  1
	I0819 19:12:43.927354  438295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:43.927762  438295 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:43.927965  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetState
	I0819 19:12:43.929591  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:43.929910  438295 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 19:12:43.929926  438295 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 19:12:43.929942  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:43.933032  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:43.933387  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:43.933406  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:43.933626  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:43.933850  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:43.933992  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:43.934118  438295 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa Username:docker}
	I0819 19:12:44.078901  438295 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:12:44.098542  438295 node_ready.go:35] waiting up to 6m0s for node "embed-certs-024748" to be "Ready" ...
	I0819 19:12:44.180050  438295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:12:44.196186  438295 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 19:12:44.196210  438295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 19:12:44.220001  438295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 19:12:44.231145  438295 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 19:12:44.231180  438295 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 19:12:44.267800  438295 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 19:12:44.267831  438295 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 19:12:44.323078  438295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 19:12:45.276298  438295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.096199779s)
	I0819 19:12:45.276336  438295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.056298773s)
	I0819 19:12:45.276383  438295 main.go:141] libmachine: Making call to close driver server
	I0819 19:12:45.276395  438295 main.go:141] libmachine: (embed-certs-024748) Calling .Close
	I0819 19:12:45.276385  438295 main.go:141] libmachine: Making call to close driver server
	I0819 19:12:45.276462  438295 main.go:141] libmachine: (embed-certs-024748) Calling .Close
	I0819 19:12:45.276714  438295 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:12:45.276757  438295 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:12:45.276777  438295 main.go:141] libmachine: Making call to close driver server
	I0819 19:12:45.276793  438295 main.go:141] libmachine: (embed-certs-024748) Calling .Close
	I0819 19:12:45.276860  438295 main.go:141] libmachine: (embed-certs-024748) DBG | Closing plugin on server side
	I0819 19:12:45.276874  438295 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:12:45.276940  438295 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:12:45.276956  438295 main.go:141] libmachine: Making call to close driver server
	I0819 19:12:45.276964  438295 main.go:141] libmachine: (embed-certs-024748) Calling .Close
	I0819 19:12:45.277134  438295 main.go:141] libmachine: (embed-certs-024748) DBG | Closing plugin on server side
	I0819 19:12:45.277195  438295 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:12:45.277239  438295 main.go:141] libmachine: (embed-certs-024748) DBG | Closing plugin on server side
	I0819 19:12:45.277258  438295 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:12:45.277277  438295 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:12:45.277304  438295 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:12:45.284982  438295 main.go:141] libmachine: Making call to close driver server
	I0819 19:12:45.285007  438295 main.go:141] libmachine: (embed-certs-024748) Calling .Close
	I0819 19:12:45.285304  438295 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:12:45.285324  438295 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:12:45.293973  438295 main.go:141] libmachine: Making call to close driver server
	I0819 19:12:45.293994  438295 main.go:141] libmachine: (embed-certs-024748) Calling .Close
	I0819 19:12:45.294247  438295 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:12:45.294265  438295 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:12:45.294274  438295 main.go:141] libmachine: Making call to close driver server
	I0819 19:12:45.294282  438295 main.go:141] libmachine: (embed-certs-024748) Calling .Close
	I0819 19:12:45.295704  438295 main.go:141] libmachine: (embed-certs-024748) DBG | Closing plugin on server side
	I0819 19:12:45.295787  438295 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:12:45.295813  438295 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:12:45.295828  438295 addons.go:475] Verifying addon metrics-server=true in "embed-certs-024748"
	I0819 19:12:45.297684  438295 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0819 19:12:41.765706  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:41.766129  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:41.766182  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:41.766093  439728 retry.go:31] will retry after 3.464758587s: waiting for machine to come up
	I0819 19:12:45.232640  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:45.233118  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:45.233151  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:45.233066  439728 retry.go:31] will retry after 3.551527195s: waiting for machine to come up
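
	The repeated "will retry after ..." lines above come from the kvm2 driver waiting for the guest to obtain a DHCP lease. A minimal sketch of such a wait loop, assuming a hypothetical lookupIP helper in place of the driver's actual lease query (illustrative only, not the retry.go source):

	package sketch

	import (
		"errors"
		"fmt"
		"time"
	)

	// lookupIP is a hypothetical stand-in for querying the libvirt DHCP leases
	// for the domain's MAC address; the real lookup lives in the kvm2 driver.
	func lookupIP(domain string) (string, error) {
		return "", errors.New("no lease yet")
	}

	// waitForIP retries with a growing delay until the guest reports an IP or
	// the timeout expires, mirroring the "will retry after ..." log lines.
	func waitForIP(domain string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(domain); err == nil && ip != "" {
				return ip, nil
			}
			fmt.Printf("will retry after %s: waiting for machine to come up\n", delay)
			time.Sleep(delay)
			if delay < 4*time.Second {
				delay *= 2
			}
		}
		return "", fmt.Errorf("timed out waiting for %s to get an IP", domain)
	}
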
	I0819 19:12:43.694387  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:46.194627  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:45.298844  438295 addons.go:510] duration metric: took 1.431699078s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0819 19:12:46.103096  438295 node_ready.go:53] node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:48.603205  438295 node_ready.go:53] node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:50.084809  438001 start.go:364] duration metric: took 55.89796214s to acquireMachinesLock for "no-preload-278232"
	I0819 19:12:50.084884  438001 start.go:96] Skipping create...Using existing machine configuration
	I0819 19:12:50.084895  438001 fix.go:54] fixHost starting: 
	I0819 19:12:50.085416  438001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:50.085459  438001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:50.103796  438001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41569
	I0819 19:12:50.104278  438001 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:50.104900  438001 main.go:141] libmachine: Using API Version  1
	I0819 19:12:50.104934  438001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:50.105335  438001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:50.105544  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:12:50.105703  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetState
	I0819 19:12:50.107422  438001 fix.go:112] recreateIfNeeded on no-preload-278232: state=Stopped err=<nil>
	I0819 19:12:50.107444  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	W0819 19:12:50.107602  438001 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 19:12:50.109328  438001 out.go:177] * Restarting existing kvm2 VM for "no-preload-278232" ...
	I0819 19:12:48.787197  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.787586  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has current primary IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.787611  438716 main.go:141] libmachine: (old-k8s-version-104669) Found IP for machine: 192.168.50.32
	I0819 19:12:48.787625  438716 main.go:141] libmachine: (old-k8s-version-104669) Reserving static IP address...
	I0819 19:12:48.788104  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "old-k8s-version-104669", mac: "52:54:00:8c:ff:a3", ip: "192.168.50.32"} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:48.788140  438716 main.go:141] libmachine: (old-k8s-version-104669) Reserved static IP address: 192.168.50.32
	I0819 19:12:48.788164  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | skip adding static IP to network mk-old-k8s-version-104669 - found existing host DHCP lease matching {name: "old-k8s-version-104669", mac: "52:54:00:8c:ff:a3", ip: "192.168.50.32"}
	I0819 19:12:48.788186  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | Getting to WaitForSSH function...
	I0819 19:12:48.788202  438716 main.go:141] libmachine: (old-k8s-version-104669) Waiting for SSH to be available...
	I0819 19:12:48.790365  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.790765  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:48.790793  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.790994  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | Using SSH client type: external
	I0819 19:12:48.791034  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | Using SSH private key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa (-rw-------)
	I0819 19:12:48.791073  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 19:12:48.791087  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | About to run SSH command:
	I0819 19:12:48.791103  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | exit 0
	I0819 19:12:48.920087  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | SSH cmd err, output: <nil>: 
	I0819 19:12:48.920464  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetConfigRaw
	I0819 19:12:48.921105  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetIP
	I0819 19:12:48.923637  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.924022  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:48.924053  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.924242  438716 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/config.json ...
	I0819 19:12:48.924429  438716 machine.go:93] provisionDockerMachine start ...
	I0819 19:12:48.924447  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:12:48.924655  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:48.926885  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.927345  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:48.927376  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.927527  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:48.927723  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:48.927846  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:48.927968  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:48.928241  438716 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:48.928453  438716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0819 19:12:48.928475  438716 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 19:12:49.039908  438716 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 19:12:49.039944  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetMachineName
	I0819 19:12:49.040200  438716 buildroot.go:166] provisioning hostname "old-k8s-version-104669"
	I0819 19:12:49.040236  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetMachineName
	I0819 19:12:49.040454  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:49.043462  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.043860  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.043892  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.044061  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:49.044256  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.044472  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.044613  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:49.044837  438716 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:49.045014  438716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0819 19:12:49.045027  438716 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-104669 && echo "old-k8s-version-104669" | sudo tee /etc/hostname
	I0819 19:12:49.170660  438716 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-104669
	
	I0819 19:12:49.170695  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:49.173564  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.173855  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.173882  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.174059  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:49.174239  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.174432  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.174564  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:49.174732  438716 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:49.174923  438716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0819 19:12:49.174941  438716 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-104669' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-104669/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-104669' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 19:12:49.298689  438716 main.go:141] libmachine: SSH cmd err, output: <nil>: 
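
	Provisioning sets the guest hostname and pins a matching 127.0.1.1 entry in /etc/hosts, as the two SSH commands above show. A small sketch bundling those two steps, where runCmd is a stand-in for the SSH session rather than minikube's actual runner API:

	package sketch

	// setHostname applies the hostname and the matching 127.0.1.1 entry on the
	// guest, combining the two SSH commands shown in the log.
	func setHostname(runCmd func(cmd string) error, name string) error {
		if err := runCmd("sudo hostname " + name + ` && echo "` + name + `" | sudo tee /etc/hostname`); err != nil {
			return err
		}
		hostsFix := `if ! grep -xq '.*\s` + name + `' /etc/hosts; then ` +
			`if grep -xq '127.0.1.1\s.*' /etc/hosts; then ` +
			`sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ` + name + `/g' /etc/hosts; ` +
			`else echo '127.0.1.1 ` + name + `' | sudo tee -a /etc/hosts; fi; fi`
		return runCmd(hostsFix)
	}
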
	I0819 19:12:49.298731  438716 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19468-372744/.minikube CaCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19468-372744/.minikube}
	I0819 19:12:49.298764  438716 buildroot.go:174] setting up certificates
	I0819 19:12:49.298778  438716 provision.go:84] configureAuth start
	I0819 19:12:49.298793  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetMachineName
	I0819 19:12:49.299157  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetIP
	I0819 19:12:49.301897  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.302290  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.302326  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.302462  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:49.304592  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.304960  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.304987  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.305150  438716 provision.go:143] copyHostCerts
	I0819 19:12:49.305219  438716 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem, removing ...
	I0819 19:12:49.305243  438716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 19:12:49.305310  438716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem (1082 bytes)
	I0819 19:12:49.305437  438716 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem, removing ...
	I0819 19:12:49.305449  438716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 19:12:49.305477  438716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem (1123 bytes)
	I0819 19:12:49.305571  438716 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem, removing ...
	I0819 19:12:49.305583  438716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 19:12:49.305612  438716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem (1675 bytes)
	I0819 19:12:49.305699  438716 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-104669 san=[127.0.0.1 192.168.50.32 localhost minikube old-k8s-version-104669]
	I0819 19:12:49.394004  438716 provision.go:177] copyRemoteCerts
	I0819 19:12:49.394074  438716 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 19:12:49.394112  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:49.396645  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.396906  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.396951  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.397108  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:49.397321  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.397504  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:49.397709  438716 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa Username:docker}
	I0819 19:12:49.483061  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 19:12:49.508297  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 19:12:49.533821  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0819 19:12:49.560064  438716 provision.go:87] duration metric: took 261.270909ms to configureAuth
	I0819 19:12:49.560093  438716 buildroot.go:189] setting minikube options for container-runtime
	I0819 19:12:49.560310  438716 config.go:182] Loaded profile config "old-k8s-version-104669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0819 19:12:49.560409  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:49.563173  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.563604  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.563633  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.563882  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:49.564075  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.564274  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.564479  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:49.564707  438716 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:49.564925  438716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0819 19:12:49.564948  438716 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 19:12:49.837237  438716 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 19:12:49.837267  438716 machine.go:96] duration metric: took 912.825625ms to provisionDockerMachine
	I0819 19:12:49.837281  438716 start.go:293] postStartSetup for "old-k8s-version-104669" (driver="kvm2")
	I0819 19:12:49.837297  438716 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 19:12:49.837341  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:12:49.837716  438716 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 19:12:49.837757  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:49.840409  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.840759  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.840789  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.840988  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:49.841183  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.841345  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:49.841473  438716 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa Username:docker}
	I0819 19:12:49.931067  438716 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 19:12:49.935562  438716 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 19:12:49.935590  438716 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/addons for local assets ...
	I0819 19:12:49.935694  438716 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/files for local assets ...
	I0819 19:12:49.935815  438716 filesync.go:149] local asset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> 3800092.pem in /etc/ssl/certs
	I0819 19:12:49.935941  438716 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 19:12:49.945418  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:12:49.969454  438716 start.go:296] duration metric: took 132.15677ms for postStartSetup
	I0819 19:12:49.969494  438716 fix.go:56] duration metric: took 20.836438665s for fixHost
	I0819 19:12:49.969517  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:49.972127  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.972502  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.972542  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.972758  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:49.973000  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.973190  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.973355  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:49.973548  438716 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:49.973753  438716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0819 19:12:49.973766  438716 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 19:12:50.084645  438716 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724094770.056929881
	
	I0819 19:12:50.084672  438716 fix.go:216] guest clock: 1724094770.056929881
	I0819 19:12:50.084681  438716 fix.go:229] Guest: 2024-08-19 19:12:50.056929881 +0000 UTC Remote: 2024-08-19 19:12:49.969497734 +0000 UTC m=+259.472837552 (delta=87.432147ms)
	I0819 19:12:50.084711  438716 fix.go:200] guest clock delta is within tolerance: 87.432147ms
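
	The guest clock check reads "date +%s.%N" on the VM and compares it with the host clock, resyncing only when the delta exceeds a tolerance. A rough sketch of that comparison; the 2s tolerance is an assumption for illustration, not necessarily the value fix.go uses:

	package sketch

	import "time"

	// clockDeltaWithinTolerance reports whether the guest clock (read via
	// "date +%s.%N") is close enough to the host clock to skip a resync.
	// The 2s tolerance here is an assumed value.
	func clockDeltaWithinTolerance(guest, host time.Time) bool {
		const tolerance = 2 * time.Second
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta <= tolerance
	}
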
	I0819 19:12:50.084718  438716 start.go:83] releasing machines lock for "old-k8s-version-104669", held for 20.951701853s
	I0819 19:12:50.084752  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:12:50.085050  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetIP
	I0819 19:12:50.087976  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:50.088363  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:50.088391  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:50.088572  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:12:50.089141  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:12:50.089360  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:12:50.089460  438716 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 19:12:50.089526  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:50.089572  438716 ssh_runner.go:195] Run: cat /version.json
	I0819 19:12:50.089599  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:50.092427  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:50.092591  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:50.092772  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:50.092797  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:50.092933  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:50.092965  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:50.092965  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:50.093147  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:50.093248  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:50.093328  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:50.093409  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:50.093503  438716 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa Username:docker}
	I0819 19:12:50.093532  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:50.093650  438716 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa Username:docker}
	I0819 19:12:50.177322  438716 ssh_runner.go:195] Run: systemctl --version
	I0819 19:12:50.200999  438716 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 19:12:50.349276  438716 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 19:12:50.357011  438716 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 19:12:50.357090  438716 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 19:12:50.377691  438716 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 19:12:50.377721  438716 start.go:495] detecting cgroup driver to use...
	I0819 19:12:50.377790  438716 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 19:12:50.394502  438716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 19:12:50.408481  438716 docker.go:217] disabling cri-docker service (if available) ...
	I0819 19:12:50.408556  438716 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 19:12:50.421818  438716 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 19:12:50.434899  438716 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 19:12:50.559399  438716 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 19:12:50.708621  438716 docker.go:233] disabling docker service ...
	I0819 19:12:50.708695  438716 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 19:12:50.726699  438716 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 19:12:50.740605  438716 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 19:12:50.896815  438716 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 19:12:51.037560  438716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 19:12:51.052554  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 19:12:51.072292  438716 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0819 19:12:51.072360  438716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:51.083248  438716 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 19:12:51.083334  438716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:51.093721  438716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:51.105212  438716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:51.119349  438716 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 19:12:51.134647  438716 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 19:12:51.144553  438716 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 19:12:51.144598  438716 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 19:12:51.159151  438716 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 19:12:51.171260  438716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:12:51.328931  438716 ssh_runner.go:195] Run: sudo systemctl restart crio
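
	The block above rewrites the cri-o drop-in config (pause image, cgroup manager, conmon cgroup) via sed and then restarts the service. A condensed sketch of those same edits driven from Go; runCmd is a placeholder for the SSH runner, not minikube's actual API:

	package sketch

	// configureCRIO applies the same substitutions seen in the log to the
	// cri-o drop-in config and restarts the service. runCmd is assumed to run
	// the given shell command on the guest.
	func configureCRIO(runCmd func(cmd string) error) error {
		steps := []string{
			`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf`,
			`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
			`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
			`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
			`sudo systemctl daemon-reload`,
			`sudo systemctl restart crio`,
		}
		for _, step := range steps {
			if err := runCmd(step); err != nil {
				return err
			}
		}
		return nil
	}
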
	I0819 19:12:51.500761  438716 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 19:12:51.500831  438716 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 19:12:51.505982  438716 start.go:563] Will wait 60s for crictl version
	I0819 19:12:51.506057  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:51.510447  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 19:12:51.552892  438716 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 19:12:51.552982  438716 ssh_runner.go:195] Run: crio --version
	I0819 19:12:51.581931  438716 ssh_runner.go:195] Run: crio --version
	I0819 19:12:51.614565  438716 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0819 19:12:50.110718  438001 main.go:141] libmachine: (no-preload-278232) Calling .Start
	I0819 19:12:50.110888  438001 main.go:141] libmachine: (no-preload-278232) Ensuring networks are active...
	I0819 19:12:50.111809  438001 main.go:141] libmachine: (no-preload-278232) Ensuring network default is active
	I0819 19:12:50.112149  438001 main.go:141] libmachine: (no-preload-278232) Ensuring network mk-no-preload-278232 is active
	I0819 19:12:50.112709  438001 main.go:141] libmachine: (no-preload-278232) Getting domain xml...
	I0819 19:12:50.113441  438001 main.go:141] libmachine: (no-preload-278232) Creating domain...
	I0819 19:12:51.494803  438001 main.go:141] libmachine: (no-preload-278232) Waiting to get IP...
	I0819 19:12:51.495733  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:51.496203  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:51.496302  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:51.496187  439925 retry.go:31] will retry after 190.334257ms: waiting for machine to come up
	I0819 19:12:48.694017  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:50.694533  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:51.102764  438295 node_ready.go:49] node "embed-certs-024748" has status "Ready":"True"
	I0819 19:12:51.102791  438295 node_ready.go:38] duration metric: took 7.004204889s for node "embed-certs-024748" to be "Ready" ...
	I0819 19:12:51.102814  438295 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:12:51.109122  438295 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-7ww4z" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.114649  438295 pod_ready.go:93] pod "coredns-6f6b679f8f-7ww4z" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:51.114679  438295 pod_ready.go:82] duration metric: took 5.529339ms for pod "coredns-6f6b679f8f-7ww4z" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.114692  438295 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.121699  438295 pod_ready.go:93] pod "etcd-embed-certs-024748" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:51.121729  438295 pod_ready.go:82] duration metric: took 7.027906ms for pod "etcd-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.121742  438295 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.129040  438295 pod_ready.go:93] pod "kube-apiserver-embed-certs-024748" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:51.129066  438295 pod_ready.go:82] duration metric: took 7.315166ms for pod "kube-apiserver-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.129078  438295 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.636173  438295 pod_ready.go:93] pod "kube-controller-manager-embed-certs-024748" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:51.636226  438295 pod_ready.go:82] duration metric: took 507.130455ms for pod "kube-controller-manager-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.636243  438295 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bmmbh" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.904734  438295 pod_ready.go:93] pod "kube-proxy-bmmbh" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:51.904776  438295 pod_ready.go:82] duration metric: took 268.522999ms for pod "kube-proxy-bmmbh" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.904806  438295 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:53.911857  438295 pod_ready.go:103] pod "kube-scheduler-embed-certs-024748" in "kube-system" namespace has status "Ready":"False"
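
	pod_ready.go keeps polling each system-critical pod until its Ready condition turns True, which is what produces the alternating "Ready":"False" / "Ready":"True" lines above. A minimal client-go sketch of that check (illustrative, not the pod_ready.go source; the 2s poll interval is assumed):

	package sketch

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady polls a pod until its Ready condition is True or the
	// timeout elapses.
	func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) bool {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{}); err == nil {
				for _, cond := range pod.Status.Conditions {
					if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
						return true
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		return false
	}
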
	I0819 19:12:51.615865  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetIP
	I0819 19:12:51.618782  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:51.619238  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:51.619268  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:51.619508  438716 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0819 19:12:51.624020  438716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:12:51.640765  438716 kubeadm.go:883] updating cluster {Name:old-k8s-version-104669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-104669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 19:12:51.640905  438716 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 19:12:51.640982  438716 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:12:51.696872  438716 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 19:12:51.696931  438716 ssh_runner.go:195] Run: which lz4
	I0819 19:12:51.702194  438716 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 19:12:51.707228  438716 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 19:12:51.707265  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0819 19:12:53.435062  438716 crio.go:462] duration metric: took 1.732918912s to copy over tarball
	I0819 19:12:53.435149  438716 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
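
	Because the guest has no /preloaded.tar.lz4 (the stat above exits with status 1), the runner uploads the cached tarball and unpacks it over /var. A compact sketch of that check-then-extract flow; runCmd and copyToGuest are placeholders for the SSH and scp helpers:

	package sketch

	// ensurePreload uploads the cached image tarball when the guest is missing
	// it and extracts it over /var, matching the stat/scp/tar steps in the log.
	func ensurePreload(runCmd func(cmd string) error, copyToGuest func(local, remote string) error, localTarball string) error {
		if err := runCmd(`stat -c "%s %y" /preloaded.tar.lz4`); err != nil {
			// Not on the guest yet; push the local cache up.
			if err := copyToGuest(localTarball, "/preloaded.tar.lz4"); err != nil {
				return err
			}
		}
		return runCmd(`sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4`)
	}
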
	I0819 19:12:51.688680  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:51.689287  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:51.689326  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:51.689222  439925 retry.go:31] will retry after 351.943478ms: waiting for machine to come up
	I0819 19:12:52.042810  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:52.043142  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:52.043163  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:52.043070  439925 retry.go:31] will retry after 332.731922ms: waiting for machine to come up
	I0819 19:12:52.377750  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:52.378418  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:52.378442  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:52.378377  439925 retry.go:31] will retry after 601.079013ms: waiting for machine to come up
	I0819 19:12:52.980930  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:52.981446  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:52.981474  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:52.981396  439925 retry.go:31] will retry after 621.686612ms: waiting for machine to come up
	I0819 19:12:53.605240  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:53.605716  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:53.605751  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:53.605666  439925 retry.go:31] will retry after 627.115747ms: waiting for machine to come up
	I0819 19:12:54.234095  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:54.234590  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:54.234613  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:54.234541  439925 retry.go:31] will retry after 1.137953362s: waiting for machine to come up
	I0819 19:12:55.373941  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:55.374412  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:55.374440  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:55.374368  439925 retry.go:31] will retry after 1.437610965s: waiting for machine to come up
	I0819 19:12:52.696277  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:54.704463  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:57.195001  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:55.412162  438295 pod_ready.go:93] pod "kube-scheduler-embed-certs-024748" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:55.412198  438295 pod_ready.go:82] duration metric: took 3.507380249s for pod "kube-scheduler-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:55.412214  438295 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:57.419600  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:56.399941  438716 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.96472478s)
	I0819 19:12:56.399971  438716 crio.go:469] duration metric: took 2.964877539s to extract the tarball
	I0819 19:12:56.399986  438716 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 19:12:56.447075  438716 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:12:56.491773  438716 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 19:12:56.491800  438716 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 19:12:56.491876  438716 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:12:56.491876  438716 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:12:56.491956  438716 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:12:56.491961  438716 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:12:56.492041  438716 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:12:56.492059  438716 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0819 19:12:56.492280  438716 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0819 19:12:56.492494  438716 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0819 19:12:56.493750  438716 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:12:56.493762  438716 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0819 19:12:56.493756  438716 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:12:56.493762  438716 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:12:56.493765  438716 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:12:56.493831  438716 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:12:56.493806  438716 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0819 19:12:56.494099  438716 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0819 19:12:56.694872  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0819 19:12:56.711504  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0819 19:12:56.754045  438716 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0819 19:12:56.754096  438716 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0819 19:12:56.754136  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:56.770451  438716 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0819 19:12:56.770510  438716 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0819 19:12:56.770574  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:56.770573  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 19:12:56.804839  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 19:12:56.804872  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 19:12:56.825837  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:12:56.832063  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0819 19:12:56.834072  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:12:56.837029  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:12:56.837697  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:12:56.902843  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 19:12:56.902930  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 19:12:57.020902  438716 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0819 19:12:57.020962  438716 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:12:57.020988  438716 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0819 19:12:57.021017  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:57.021025  438716 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0819 19:12:57.021098  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:57.023363  438716 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0819 19:12:57.023411  438716 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:12:57.023457  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:57.023541  438716 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0819 19:12:57.023569  438716 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:12:57.023605  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:57.034648  438716 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0819 19:12:57.034698  438716 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:12:57.034719  438716 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0819 19:12:57.034748  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:57.039577  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 19:12:57.039648  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 19:12:57.039715  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:12:57.041644  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:12:57.041983  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:12:57.045383  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:12:57.149677  438716 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0819 19:12:57.164701  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:12:57.164821  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 19:12:57.202353  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:12:57.202434  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:12:57.202465  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:12:57.258824  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 19:12:57.258858  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:12:57.285756  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:12:57.326148  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:12:57.326237  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:12:57.378322  438716 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0819 19:12:57.378369  438716 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0819 19:12:57.390369  438716 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0819 19:12:57.419554  438716 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0819 19:12:57.419627  438716 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0819 19:12:57.438485  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:12:57.583634  438716 cache_images.go:92] duration metric: took 1.091812972s to LoadCachedImages
	W0819 19:12:57.583757  438716 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0819 19:12:57.583777  438716 kubeadm.go:934] updating node { 192.168.50.32 8443 v1.20.0 crio true true} ...
	I0819 19:12:57.583915  438716 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-104669 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-104669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 19:12:57.584007  438716 ssh_runner.go:195] Run: crio config
	I0819 19:12:57.636714  438716 cni.go:84] Creating CNI manager for ""
	I0819 19:12:57.636738  438716 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:12:57.636752  438716 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 19:12:57.636776  438716 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.32 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-104669 NodeName:old-k8s-version-104669 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0819 19:12:57.636951  438716 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.32
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-104669"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 19:12:57.637028  438716 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0819 19:12:57.648002  438716 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 19:12:57.648093  438716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 19:12:57.658889  438716 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0819 19:12:57.677316  438716 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 19:12:57.695825  438716 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0819 19:12:57.715396  438716 ssh_runner.go:195] Run: grep 192.168.50.32	control-plane.minikube.internal$ /etc/hosts
	I0819 19:12:57.719886  438716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:12:57.733179  438716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:12:57.854139  438716 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:12:57.871590  438716 certs.go:68] Setting up /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669 for IP: 192.168.50.32
	I0819 19:12:57.871619  438716 certs.go:194] generating shared ca certs ...
	I0819 19:12:57.871642  438716 certs.go:226] acquiring lock for ca certs: {Name:mk639e03f593e0bccac045f6e9f5ba3b96cc81e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:57.871850  438716 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key
	I0819 19:12:57.871916  438716 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key
	I0819 19:12:57.871930  438716 certs.go:256] generating profile certs ...
	I0819 19:12:57.872060  438716 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/client.key
	I0819 19:12:57.872131  438716 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/apiserver.key.7101f8a0
	I0819 19:12:57.872197  438716 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/proxy-client.key
	I0819 19:12:57.872336  438716 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem (1338 bytes)
	W0819 19:12:57.872365  438716 certs.go:480] ignoring /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009_empty.pem, impossibly tiny 0 bytes
	I0819 19:12:57.872371  438716 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 19:12:57.872390  438716 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem (1082 bytes)
	I0819 19:12:57.872419  438716 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem (1123 bytes)
	I0819 19:12:57.872441  438716 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem (1675 bytes)
	I0819 19:12:57.872488  438716 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:12:57.873259  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 19:12:57.907576  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 19:12:57.943535  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 19:12:57.977770  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 19:12:58.021213  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0819 19:12:58.051043  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 19:12:58.080442  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 19:12:58.110888  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 19:12:58.158635  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 19:12:58.184168  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem --> /usr/share/ca-certificates/380009.pem (1338 bytes)
	I0819 19:12:58.210064  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /usr/share/ca-certificates/3800092.pem (1708 bytes)
	I0819 19:12:58.235366  438716 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 19:12:58.254667  438716 ssh_runner.go:195] Run: openssl version
	I0819 19:12:58.260977  438716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3800092.pem && ln -fs /usr/share/ca-certificates/3800092.pem /etc/ssl/certs/3800092.pem"
	I0819 19:12:58.272995  438716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3800092.pem
	I0819 19:12:58.278056  438716 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:56 /usr/share/ca-certificates/3800092.pem
	I0819 19:12:58.278154  438716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3800092.pem
	I0819 19:12:58.284420  438716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3800092.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 19:12:58.296945  438716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 19:12:58.309288  438716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:58.314695  438716 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:58.314774  438716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:58.321016  438716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 19:12:58.332728  438716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/380009.pem && ln -fs /usr/share/ca-certificates/380009.pem /etc/ssl/certs/380009.pem"
	I0819 19:12:58.344766  438716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/380009.pem
	I0819 19:12:58.349610  438716 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:56 /usr/share/ca-certificates/380009.pem
	I0819 19:12:58.349681  438716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/380009.pem
	I0819 19:12:58.355942  438716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/380009.pem /etc/ssl/certs/51391683.0"
	I0819 19:12:58.368869  438716 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 19:12:58.373681  438716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 19:12:58.380415  438716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 19:12:58.386741  438716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 19:12:58.393362  438716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 19:12:58.399665  438716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 19:12:58.406108  438716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0819 19:12:58.412486  438716 kubeadm.go:392] StartCluster: {Name:old-k8s-version-104669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-104669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:12:58.412606  438716 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 19:12:58.412655  438716 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:12:58.462379  438716 cri.go:89] found id: ""
	I0819 19:12:58.462463  438716 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 19:12:58.474029  438716 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 19:12:58.474054  438716 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 19:12:58.474112  438716 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 19:12:58.485755  438716 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:12:58.486762  438716 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-104669" does not appear in /home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 19:12:58.487464  438716 kubeconfig.go:62] /home/jenkins/minikube-integration/19468-372744/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-104669" cluster setting kubeconfig missing "old-k8s-version-104669" context setting]
	I0819 19:12:58.489361  438716 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/kubeconfig: {Name:mk8e7b4e1bb7da665111d2acd83eb48882c66853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:58.508865  438716 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 19:12:58.520577  438716 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.32
	I0819 19:12:58.520622  438716 kubeadm.go:1160] stopping kube-system containers ...
	I0819 19:12:58.520637  438716 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 19:12:58.520728  438716 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:12:58.561900  438716 cri.go:89] found id: ""
	I0819 19:12:58.561984  438716 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 19:12:58.580483  438716 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:12:58.591734  438716 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:12:58.591754  438716 kubeadm.go:157] found existing configuration files:
	
	I0819 19:12:58.591804  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 19:12:58.601694  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:12:58.601771  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:12:58.612132  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 19:12:58.621911  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:12:58.621984  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:12:58.631525  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 19:12:58.640802  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:12:58.640872  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:12:58.650216  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 19:12:58.660647  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:12:58.660720  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 19:12:58.669992  438716 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 19:12:58.679709  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:58.809302  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:59.757994  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:00.006386  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:00.136752  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:00.222424  438716 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:13:00.222542  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:56.813279  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:56.813777  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:56.813807  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:56.813725  439925 retry.go:31] will retry after 1.504132921s: waiting for machine to come up
	I0819 19:12:58.319408  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:58.319880  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:58.319910  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:58.319832  439925 retry.go:31] will retry after 1.921699926s: waiting for machine to come up
	I0819 19:13:00.243504  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:00.243995  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:13:00.244021  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:13:00.243952  439925 retry.go:31] will retry after 2.040704792s: waiting for machine to come up
	I0819 19:12:59.195084  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:01.693648  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:59.419644  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:01.918769  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:00.723213  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:01.222908  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:01.723081  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:02.223465  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:02.722589  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:03.222706  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:03.722930  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:04.222826  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:04.722638  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:05.222666  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:02.287044  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:02.287490  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:13:02.287526  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:13:02.287416  439925 retry.go:31] will retry after 2.562055052s: waiting for machine to come up
	I0819 19:13:04.852682  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:04.853097  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:13:04.853125  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:13:04.853062  439925 retry.go:31] will retry after 3.627213972s: waiting for machine to come up
	I0819 19:13:04.194149  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:06.194831  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:04.418550  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:06.919083  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:05.723627  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:06.222663  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:06.723230  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:07.222666  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:07.722653  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:08.222861  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:08.723248  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:09.222831  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:09.722738  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:10.223069  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:08.484125  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.484586  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has current primary IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.484612  438001 main.go:141] libmachine: (no-preload-278232) Found IP for machine: 192.168.39.106
	I0819 19:13:08.484642  438001 main.go:141] libmachine: (no-preload-278232) Reserving static IP address...
	I0819 19:13:08.485049  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "no-preload-278232", mac: "52:54:00:14:f3:b1", ip: "192.168.39.106"} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:08.485091  438001 main.go:141] libmachine: (no-preload-278232) Reserved static IP address: 192.168.39.106
	I0819 19:13:08.485112  438001 main.go:141] libmachine: (no-preload-278232) DBG | skip adding static IP to network mk-no-preload-278232 - found existing host DHCP lease matching {name: "no-preload-278232", mac: "52:54:00:14:f3:b1", ip: "192.168.39.106"}
	I0819 19:13:08.485129  438001 main.go:141] libmachine: (no-preload-278232) DBG | Getting to WaitForSSH function...
	I0819 19:13:08.485145  438001 main.go:141] libmachine: (no-preload-278232) Waiting for SSH to be available...
	I0819 19:13:08.486998  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.487266  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:08.487290  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.487402  438001 main.go:141] libmachine: (no-preload-278232) DBG | Using SSH client type: external
	I0819 19:13:08.487429  438001 main.go:141] libmachine: (no-preload-278232) DBG | Using SSH private key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa (-rw-------)
	I0819 19:13:08.487463  438001 main.go:141] libmachine: (no-preload-278232) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.106 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 19:13:08.487476  438001 main.go:141] libmachine: (no-preload-278232) DBG | About to run SSH command:
	I0819 19:13:08.487487  438001 main.go:141] libmachine: (no-preload-278232) DBG | exit 0
	I0819 19:13:08.611459  438001 main.go:141] libmachine: (no-preload-278232) DBG | SSH cmd err, output: <nil>: 
	I0819 19:13:08.611934  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetConfigRaw
	I0819 19:13:08.612610  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetIP
	I0819 19:13:08.615212  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.615564  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:08.615594  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.615919  438001 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232/config.json ...
	I0819 19:13:08.616140  438001 machine.go:93] provisionDockerMachine start ...
	I0819 19:13:08.616162  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:08.616387  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:08.618650  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.618956  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:08.618988  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.619098  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:08.619291  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:08.619433  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:08.619569  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:08.619727  438001 main.go:141] libmachine: Using SSH client type: native
	I0819 19:13:08.619893  438001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0819 19:13:08.619903  438001 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 19:13:08.724912  438001 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 19:13:08.724955  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetMachineName
	I0819 19:13:08.725264  438001 buildroot.go:166] provisioning hostname "no-preload-278232"
	I0819 19:13:08.725291  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetMachineName
	I0819 19:13:08.725486  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:08.728810  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.729237  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:08.729274  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.729434  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:08.729667  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:08.729887  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:08.730067  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:08.730244  438001 main.go:141] libmachine: Using SSH client type: native
	I0819 19:13:08.730490  438001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0819 19:13:08.730511  438001 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-278232 && echo "no-preload-278232" | sudo tee /etc/hostname
	I0819 19:13:08.854474  438001 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-278232
	
	I0819 19:13:08.854499  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:08.857179  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.857511  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:08.857540  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.857713  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:08.857912  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:08.858075  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:08.858189  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:08.858356  438001 main.go:141] libmachine: Using SSH client type: native
	I0819 19:13:08.858556  438001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0819 19:13:08.858579  438001 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-278232' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-278232/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-278232' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 19:13:08.973053  438001 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:13:08.973090  438001 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19468-372744/.minikube CaCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19468-372744/.minikube}
	I0819 19:13:08.973115  438001 buildroot.go:174] setting up certificates
	I0819 19:13:08.973125  438001 provision.go:84] configureAuth start
	I0819 19:13:08.973135  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetMachineName
	I0819 19:13:08.973417  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetIP
	I0819 19:13:08.976100  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.976459  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:08.976487  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.976690  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:08.978902  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.979342  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:08.979370  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.979530  438001 provision.go:143] copyHostCerts
	I0819 19:13:08.979605  438001 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem, removing ...
	I0819 19:13:08.979628  438001 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 19:13:08.979717  438001 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem (1082 bytes)
	I0819 19:13:08.979830  438001 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem, removing ...
	I0819 19:13:08.979842  438001 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 19:13:08.979874  438001 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem (1123 bytes)
	I0819 19:13:08.979963  438001 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem, removing ...
	I0819 19:13:08.979974  438001 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 19:13:08.980002  438001 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem (1675 bytes)
	I0819 19:13:08.980075  438001 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem org=jenkins.no-preload-278232 san=[127.0.0.1 192.168.39.106 localhost minikube no-preload-278232]
	I0819 19:13:09.092643  438001 provision.go:177] copyRemoteCerts
	I0819 19:13:09.092707  438001 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 19:13:09.092739  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:09.095542  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.095929  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:09.095960  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.096099  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:09.096318  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:09.096481  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:09.096635  438001 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa Username:docker}
	I0819 19:13:09.179713  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 19:13:09.206363  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0819 19:13:09.231180  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 19:13:09.256764  438001 provision.go:87] duration metric: took 283.626537ms to configureAuth
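The copyHostCerts/copyRemoteCerts steps above stage the CA plus the freshly generated, SAN-bearing server certificate under /etc/docker on the guest. A quick manual check of the result looks like the sketch below; the paths are the ones from the scp lines above, and reaching the VM via `minikube ssh -p no-preload-278232` (or directly with the id_rsa key shown earlier) is an assumption, not something the log does:

    # inside the guest
    ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem
    # the server cert should carry the SANs listed by provision.go:117 above
    openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'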
	I0819 19:13:09.256810  438001 buildroot.go:189] setting minikube options for container-runtime
	I0819 19:13:09.256993  438001 config.go:182] Loaded profile config "no-preload-278232": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:13:09.257079  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:09.259661  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.260061  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:09.260094  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.260253  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:09.260461  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:09.260640  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:09.260796  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:09.260973  438001 main.go:141] libmachine: Using SSH client type: native
	I0819 19:13:09.261150  438001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0819 19:13:09.261166  438001 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 19:13:09.534325  438001 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 19:13:09.534357  438001 machine.go:96] duration metric: took 918.201944ms to provisionDockerMachine
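The SSH command above drops a one-line environment file that CRI-O's unit picks up, marking the service CIDR (10.96.0.0/12) as an insecure registry range, then restarts the runtime. Written out as a stand-alone sketch (same file, option, and restart as in the log; only needed if the provisioning step has to be reproduced by hand):

    sudo mkdir -p /etc/sysconfig
    printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" \
      | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio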
	I0819 19:13:09.534371  438001 start.go:293] postStartSetup for "no-preload-278232" (driver="kvm2")
	I0819 19:13:09.534387  438001 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 19:13:09.534412  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:09.534794  438001 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 19:13:09.534826  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:09.537623  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.537974  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:09.538002  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.538138  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:09.538349  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:09.538534  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:09.538669  438001 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa Username:docker}
	I0819 19:13:09.627085  438001 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 19:13:09.631714  438001 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 19:13:09.631740  438001 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/addons for local assets ...
	I0819 19:13:09.631817  438001 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/files for local assets ...
	I0819 19:13:09.631911  438001 filesync.go:149] local asset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> 3800092.pem in /etc/ssl/certs
	I0819 19:13:09.632035  438001 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 19:13:09.642942  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:13:09.669242  438001 start.go:296] duration metric: took 134.853886ms for postStartSetup
	I0819 19:13:09.669294  438001 fix.go:56] duration metric: took 19.584399031s for fixHost
	I0819 19:13:09.669325  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:09.672072  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.672461  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:09.672494  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.672635  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:09.672937  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:09.673116  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:09.673331  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:09.673517  438001 main.go:141] libmachine: Using SSH client type: native
	I0819 19:13:09.673699  438001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0819 19:13:09.673717  438001 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 19:13:09.780601  438001 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724094789.749951838
	
	I0819 19:13:09.780628  438001 fix.go:216] guest clock: 1724094789.749951838
	I0819 19:13:09.780640  438001 fix.go:229] Guest: 2024-08-19 19:13:09.749951838 +0000 UTC Remote: 2024-08-19 19:13:09.669301343 +0000 UTC m=+358.073543000 (delta=80.650495ms)
	I0819 19:13:09.780668  438001 fix.go:200] guest clock delta is within tolerance: 80.650495ms
	I0819 19:13:09.780676  438001 start.go:83] releasing machines lock for "no-preload-278232", held for 19.69582363s
	I0819 19:13:09.780703  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:09.781042  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetIP
	I0819 19:13:09.783578  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.783967  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:09.783996  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.784149  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:09.784649  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:09.784855  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:09.784946  438001 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 19:13:09.785037  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:09.785073  438001 ssh_runner.go:195] Run: cat /version.json
	I0819 19:13:09.785107  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:09.787346  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.787706  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.787763  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:09.787788  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.787977  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:09.788162  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:09.788226  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:09.788251  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.788327  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:09.788447  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:09.788500  438001 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa Username:docker}
	I0819 19:13:09.788622  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:09.788805  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:09.788994  438001 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa Username:docker}
	I0819 19:13:09.864596  438001 ssh_runner.go:195] Run: systemctl --version
	I0819 19:13:09.890038  438001 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 19:13:10.039016  438001 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 19:13:10.045269  438001 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 19:13:10.045352  438001 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 19:13:10.061345  438001 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
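Before picking a CNI, minikube parks any pre-existing bridge/podman CNI definitions so they cannot conflict with the bridge config it generates itself (the loopback config is deliberately left alone, as the skipped-lookup line above shows). The find/mv pair from the log, reformatted for readability; the quoting below is a sketch of the unescaped form printed above:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" \;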
	I0819 19:13:10.061380  438001 start.go:495] detecting cgroup driver to use...
	I0819 19:13:10.061467  438001 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 19:13:10.079229  438001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 19:13:10.094396  438001 docker.go:217] disabling cri-docker service (if available) ...
	I0819 19:13:10.094471  438001 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 19:13:10.109307  438001 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 19:13:10.123389  438001 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 19:13:10.241132  438001 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 19:13:10.395346  438001 docker.go:233] disabling docker service ...
	I0819 19:13:10.395444  438001 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 19:13:10.409604  438001 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 19:13:10.424149  438001 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 19:13:10.544180  438001 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 19:13:10.671038  438001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
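Since this profile runs the crio container runtime, the provisioner stops and masks both cri-dockerd and Docker so neither can reclaim the CRI socket after a reboot. The sequence above, collected in one place (each command appears verbatim in the log; the final is-active check simply reports whether anything Docker-shaped is still running):

    # cri-dockerd
    sudo systemctl stop -f cri-docker.socket
    sudo systemctl stop -f cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    # docker itself
    sudo systemctl stop -f docker.socket
    sudo systemctl stop -f docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
    sudo systemctl is-active --quiet service docker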
	I0819 19:13:10.685563  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 19:13:10.704754  438001 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 19:13:10.704819  438001 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:13:10.716002  438001 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 19:13:10.716077  438001 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:13:10.728085  438001 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:13:10.739292  438001 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:13:10.750083  438001 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 19:13:10.760832  438001 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:13:10.771231  438001 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:13:10.788807  438001 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:13:10.799472  438001 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 19:13:10.809354  438001 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 19:13:10.809432  438001 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 19:13:10.824339  438001 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 19:13:10.833761  438001 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:13:10.953587  438001 ssh_runner.go:195] Run: sudo systemctl restart crio
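Taken together, the tee/sed calls above are the whole of minikube's CRI-O tailoring for this profile: point crictl at the CRI-O socket, pin the pause image, switch to the cgroupfs cgroup manager with conmon in the pod cgroup, open the unprivileged port range, and make bridged traffic visible to iptables before the runtime restart. A condensed, commented sketch with the paths and values copied from the log:

    # tell crictl where the CRI-O socket lives
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    # pin the pause image and use the cgroupfs driver, with conmon in the pod cgroup
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    # allow pods to bind low ports, load br_netfilter (the sysctl probe above failed without it), enable forwarding
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf
    sudo modprobe br_netfilter
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
    sudo systemctl daemon-reload && sudo systemctl restart crio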
	I0819 19:13:11.091264  438001 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 19:13:11.091336  438001 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 19:13:11.096092  438001 start.go:563] Will wait 60s for crictl version
	I0819 19:13:11.096161  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:13:11.100040  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 19:13:11.142512  438001 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 19:13:11.142612  438001 ssh_runner.go:195] Run: crio --version
	I0819 19:13:11.176967  438001 ssh_runner.go:195] Run: crio --version
	I0819 19:13:11.208687  438001 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 19:13:11.209819  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetIP
	I0819 19:13:11.212533  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:11.212876  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:11.212900  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:11.213098  438001 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 19:13:11.217234  438001 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
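The one-liner above is how minikube keeps /etc/hosts idempotent: strip any existing host.minikube.internal entry, append the gateway IP, and copy the temp file back with sudo (a plain `>` redirect would fail because the redirection is opened by the unprivileged shell, not by sudo). The same pattern spelled out, with values taken from the log:

    # rewrite /etc/hosts without clobbering it mid-edit
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo $'192.168.39.1\thost.minikube.internal'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts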
	I0819 19:13:11.229995  438001 kubeadm.go:883] updating cluster {Name:no-preload-278232 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-278232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.106 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 19:13:11.230124  438001 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 19:13:11.230168  438001 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:13:11.265699  438001 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 19:13:11.265730  438001 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 19:13:11.265816  438001 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 19:13:11.265836  438001 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 19:13:11.265843  438001 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0819 19:13:11.265816  438001 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:13:11.265875  438001 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 19:13:11.265941  438001 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0819 19:13:11.265955  438001 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 19:13:11.266027  438001 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 19:13:11.267344  438001 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0819 19:13:11.267364  438001 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 19:13:11.267344  438001 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 19:13:11.267408  438001 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0819 19:13:11.267349  438001 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 19:13:11.267445  438001 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 19:13:11.267408  438001 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 19:13:11.267407  438001 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:13:11.411117  438001 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0819 19:13:11.435022  438001 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0819 19:13:11.437707  438001 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0819 19:13:11.439226  438001 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 19:13:11.446384  438001 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0819 19:13:11.448011  438001 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0819 19:13:11.463921  438001 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0819 19:13:11.476902  438001 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0819 19:13:11.476956  438001 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 19:13:11.477011  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:13:11.561762  438001 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0819 19:13:11.561827  438001 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 19:13:11.561889  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:13:08.694513  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:11.193505  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:09.419409  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:11.919413  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:13.931174  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:10.722882  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:11.223650  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:11.722917  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:12.223146  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:12.723410  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:13.222692  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:13.722636  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:14.223152  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:14.722661  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:15.223297  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:11.657022  438001 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0819 19:13:11.657071  438001 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0819 19:13:11.657092  438001 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0819 19:13:11.657123  438001 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 19:13:11.657127  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:13:11.657164  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:13:11.657176  438001 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0819 19:13:11.657195  438001 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0819 19:13:11.657217  438001 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 19:13:11.657216  438001 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 19:13:11.657254  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:13:11.657260  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:13:11.729671  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0819 19:13:11.729903  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0819 19:13:11.730476  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 19:13:11.730489  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0819 19:13:11.730510  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0819 19:13:11.730544  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0819 19:13:11.853411  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0819 19:13:11.853647  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0819 19:13:11.872296  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0819 19:13:11.872370  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0819 19:13:11.876801  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 19:13:11.877002  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0819 19:13:11.982642  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0819 19:13:12.007940  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0819 19:13:12.031132  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0819 19:13:12.031150  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0819 19:13:12.031163  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 19:13:12.031275  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0819 19:13:12.130991  438001 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0819 19:13:12.131099  438001 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0819 19:13:12.130994  438001 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0819 19:13:12.131231  438001 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0819 19:13:12.162852  438001 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0819 19:13:12.162911  438001 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0819 19:13:12.162916  438001 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0819 19:13:12.162967  438001 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0819 19:13:12.162984  438001 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0819 19:13:12.162984  438001 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0819 19:13:12.163035  438001 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0819 19:13:12.163044  438001 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0819 19:13:12.163053  438001 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0819 19:13:12.163055  438001 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0819 19:13:12.163086  438001 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0819 19:13:12.163095  438001 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0819 19:13:12.177377  438001 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0819 19:13:12.177438  438001 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0819 19:13:12.177438  438001 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0819 19:13:12.229301  438001 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:13:14.745129  438001 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.582015913s)
	I0819 19:13:14.745162  438001 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0819 19:13:14.745196  438001 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0: (2.582131532s)
	I0819 19:13:14.745215  438001 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.515891614s)
	I0819 19:13:14.745232  438001 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0819 19:13:14.745200  438001 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0819 19:13:14.745247  438001 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0819 19:13:14.745285  438001 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:13:14.745298  438001 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0819 19:13:14.745325  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:13:13.693752  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:15.693871  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:16.419552  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:18.920189  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:15.723053  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:16.223486  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:16.722740  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:17.223337  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:17.723160  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:18.222651  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:18.723509  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:19.223686  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:19.723376  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:20.222953  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:16.728557  438001 ssh_runner.go:235] Completed: which crictl: (1.983204878s)
	I0819 19:13:16.728614  438001 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.983294709s)
	I0819 19:13:16.728635  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:13:16.728642  438001 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0819 19:13:16.728673  438001 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0819 19:13:16.728714  438001 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0819 19:13:16.771574  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:13:20.532388  438001 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.760772797s)
	I0819 19:13:20.532421  438001 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.80368813s)
	I0819 19:13:20.532437  438001 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0819 19:13:20.532469  438001 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0819 19:13:20.532480  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:13:20.532500  438001 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0819 19:13:18.193852  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:20.692752  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:21.419154  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:23.419271  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:20.723620  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:21.223286  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:21.723663  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:22.223594  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:22.723415  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:23.223643  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:23.723395  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:24.223476  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:24.723236  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:25.223620  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:22.500967  438001 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.968455152s)
	I0819 19:13:22.501030  438001 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0819 19:13:22.501036  438001 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.968509024s)
	I0819 19:13:22.501068  438001 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0819 19:13:22.501108  438001 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0819 19:13:22.501138  438001 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0819 19:13:22.501175  438001 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0819 19:13:22.506796  438001 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0819 19:13:23.962797  438001 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.461519717s)
	I0819 19:13:23.962838  438001 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0819 19:13:23.962876  438001 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0819 19:13:23.962959  438001 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0819 19:13:25.927805  438001 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.964816993s)
	I0819 19:13:25.927836  438001 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0819 19:13:25.927868  438001 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0819 19:13:25.927922  438001 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0819 19:13:26.572310  438001 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0819 19:13:26.572368  438001 cache_images.go:123] Successfully loaded all cached images
	I0819 19:13:26.572376  438001 cache_images.go:92] duration metric: took 15.306632126s to LoadCachedImages
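Because this is a no-preload profile, none of the v1.31.0 images ship inside the ISO: each one is removed from the runtime if a stale copy exists, transferred from the host cache under .minikube/cache/images, and loaded with podman, after which CRI-O can serve it. A manual spot check of the end state (sketch; the image list is the one enumerated at the start of LoadCachedImages, and the crictl/podman commands are the same ones the log runs):

    # on the guest: confirm the control-plane images are now present
    sudo crictl images | grep -E 'kube-apiserver|kube-controller-manager|kube-scheduler|kube-proxy|etcd|coredns|pause|storage-provisioner'
    # re-load a single image from the transferred archive if one is missing
    sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0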
	I0819 19:13:26.572397  438001 kubeadm.go:934] updating node { 192.168.39.106 8443 v1.31.0 crio true true} ...
	I0819 19:13:26.572549  438001 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-278232 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.106
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-278232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
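The [Service] override above is what later lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp of 10-kubeadm.conf further down). If the kubelet refuses to start after this point, the quickest checks are stock systemd ones rather than minikube commands (a debugging sketch):

    # on the guest: show the unit together with every drop-in minikube wrote
    systemctl cat kubelet
    # tail the most recent kubelet logs if the service is restart-looping
    sudo journalctl -u kubelet --no-pager -n 50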
	I0819 19:13:26.572635  438001 ssh_runner.go:195] Run: crio config
	I0819 19:13:26.623839  438001 cni.go:84] Creating CNI manager for ""
	I0819 19:13:26.623862  438001 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:13:26.623872  438001 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 19:13:26.623896  438001 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.106 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-278232 NodeName:no-preload-278232 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.106"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.106 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 19:13:26.624138  438001 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.106
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-278232"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.106
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.106"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 19:13:26.624226  438001 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
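The ls above confirms the v1.31.0 kubeadm/kubelet binaries are already cached on the guest, and the generated config is written to /var/tmp/minikube/kubeadm.yaml.new a few steps later. When a bring-up like this has to be debugged by hand, that same config can be exercised against kubeadm without mutating the node; a hedged sketch, assuming the file has already been copied over:

    # dry-run the config minikube generated, without touching the node
    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run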
	I0819 19:13:22.693093  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:24.694313  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:26.695312  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:25.918793  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:27.919721  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:25.722593  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:26.223582  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:26.722927  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:27.223364  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:27.723223  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:28.223458  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:28.723262  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:29.222823  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:29.722837  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:30.223196  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:26.634770  438001 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 19:13:26.634844  438001 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 19:13:26.644193  438001 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0819 19:13:26.661226  438001 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 19:13:26.677413  438001 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0819 19:13:26.696260  438001 ssh_runner.go:195] Run: grep 192.168.39.106	control-plane.minikube.internal$ /etc/hosts
	I0819 19:13:26.700029  438001 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.106	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:13:26.711667  438001 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:13:26.849658  438001 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:13:26.867185  438001 certs.go:68] Setting up /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232 for IP: 192.168.39.106
	I0819 19:13:26.867216  438001 certs.go:194] generating shared ca certs ...
	I0819 19:13:26.867240  438001 certs.go:226] acquiring lock for ca certs: {Name:mk639e03f593e0bccac045f6e9f5ba3b96cc81e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:13:26.867431  438001 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key
	I0819 19:13:26.867489  438001 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key
	I0819 19:13:26.867502  438001 certs.go:256] generating profile certs ...
	I0819 19:13:26.867600  438001 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232/client.key
	I0819 19:13:26.867705  438001 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232/apiserver.key.4086521c
	I0819 19:13:26.867759  438001 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232/proxy-client.key
	I0819 19:13:26.867936  438001 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem (1338 bytes)
	W0819 19:13:26.867980  438001 certs.go:480] ignoring /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009_empty.pem, impossibly tiny 0 bytes
	I0819 19:13:26.867995  438001 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 19:13:26.868037  438001 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem (1082 bytes)
	I0819 19:13:26.868075  438001 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem (1123 bytes)
	I0819 19:13:26.868107  438001 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem (1675 bytes)
	I0819 19:13:26.868171  438001 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:13:26.869217  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 19:13:26.903250  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 19:13:26.928593  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 19:13:26.957098  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 19:13:26.982422  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 19:13:27.009252  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 19:13:27.038043  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 19:13:27.075400  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 19:13:27.101568  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /usr/share/ca-certificates/3800092.pem (1708 bytes)
	I0819 19:13:27.127162  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 19:13:27.152327  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem --> /usr/share/ca-certificates/380009.pem (1338 bytes)
	I0819 19:13:27.176207  438001 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 19:13:27.194919  438001 ssh_runner.go:195] Run: openssl version
	I0819 19:13:27.201002  438001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3800092.pem && ln -fs /usr/share/ca-certificates/3800092.pem /etc/ssl/certs/3800092.pem"
	I0819 19:13:27.212050  438001 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3800092.pem
	I0819 19:13:27.216607  438001 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:56 /usr/share/ca-certificates/3800092.pem
	I0819 19:13:27.216663  438001 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3800092.pem
	I0819 19:13:27.222437  438001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3800092.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 19:13:27.234112  438001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 19:13:27.245472  438001 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:13:27.250203  438001 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:13:27.250257  438001 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:13:27.256045  438001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 19:13:27.266746  438001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/380009.pem && ln -fs /usr/share/ca-certificates/380009.pem /etc/ssl/certs/380009.pem"
	I0819 19:13:27.277316  438001 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/380009.pem
	I0819 19:13:27.281660  438001 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:56 /usr/share/ca-certificates/380009.pem
	I0819 19:13:27.281721  438001 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/380009.pem
	I0819 19:13:27.287223  438001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/380009.pem /etc/ssl/certs/51391683.0"
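Note on the symlink steps above: OpenSSL locates trusted CAs in /etc/ssl/certs through a subject-hash-named symlink, which is exactly what each "openssl x509 -hash" / "ln -fs" pair is building. A minimal manual equivalent, using one of the paths from the log (HASH is just an illustrative helper variable, not something minikube defines):

    # print the subject-name hash OpenSSL uses for trust-store lookups
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    # expose the CA under /etc/ssl/certs/<hash>.0 so TLS verification can find it
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"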
	I0819 19:13:27.299791  438001 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 19:13:27.304470  438001 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 19:13:27.310642  438001 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 19:13:27.316259  438001 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 19:13:27.322248  438001 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 19:13:27.327902  438001 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 19:13:27.333447  438001 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
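The six "-checkend 86400" calls above ask openssl whether each control-plane certificate will still be valid 24 hours (86400 seconds) from now; presumably a failing check would trigger regeneration of that cert. A standalone sketch of the same check against one of the files from the log:

    # exit 0 = still valid 24h from now, exit 1 = expires within 24h
    if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
        echo "cert ok for at least another 24h"
    else
        echo "cert expires within 24h"
    fi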
	I0819 19:13:27.339044  438001 kubeadm.go:392] StartCluster: {Name:no-preload-278232 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-278232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.106 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:13:27.339165  438001 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 19:13:27.339241  438001 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:13:27.378362  438001 cri.go:89] found id: ""
	I0819 19:13:27.378436  438001 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 19:13:27.388560  438001 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 19:13:27.388580  438001 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 19:13:27.388623  438001 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 19:13:27.397834  438001 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:13:27.399336  438001 kubeconfig.go:125] found "no-preload-278232" server: "https://192.168.39.106:8443"
	I0819 19:13:27.402651  438001 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 19:13:27.412108  438001 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.106
	I0819 19:13:27.412155  438001 kubeadm.go:1160] stopping kube-system containers ...
	I0819 19:13:27.412170  438001 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 19:13:27.412230  438001 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:13:27.450332  438001 cri.go:89] found id: ""
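The empty crictl result above (found id: "") means there are no kube-system containers to stop before reconfiguring, so only the kubelet itself is stopped next. The same query can be run by hand on the node:

    # list every kube-system pod container (running or exited); empty output = nothing to stop
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system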
	I0819 19:13:27.450431  438001 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 19:13:27.466943  438001 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:13:27.476741  438001 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:13:27.476765  438001 kubeadm.go:157] found existing configuration files:
	
	I0819 19:13:27.476810  438001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 19:13:27.485630  438001 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:13:27.485695  438001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:13:27.495232  438001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 19:13:27.504379  438001 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:13:27.504449  438001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:13:27.513723  438001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 19:13:27.522864  438001 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:13:27.522946  438001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:13:27.532402  438001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 19:13:27.541502  438001 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:13:27.541592  438001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 19:13:27.550934  438001 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 19:13:27.560650  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:27.684890  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:28.534223  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:28.757538  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:28.831313  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
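Rather than a full "kubeadm init", the restart path above replays individual init phases against the pre-rendered config. A condensed sketch of the same sequence, mirroring the commands in the log (CFG and BIN are shorthand variables introduced here for readability):

    CFG=/var/tmp/minikube/kubeadm.yaml
    BIN=/var/lib/minikube/binaries/v1.31.0
    # regenerate certs and kubeconfigs, restart the kubelet, then bring back the static-pod control plane and etcd
    sudo env PATH="$BIN:$PATH" kubeadm init phase certs all         --config "$CFG"
    sudo env PATH="$BIN:$PATH" kubeadm init phase kubeconfig all    --config "$CFG"
    sudo env PATH="$BIN:$PATH" kubeadm init phase kubelet-start     --config "$CFG"
    sudo env PATH="$BIN:$PATH" kubeadm init phase control-plane all --config "$CFG"
    sudo env PATH="$BIN:$PATH" kubeadm init phase etcd local        --config "$CFG"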
	I0819 19:13:28.897644  438001 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:13:28.897735  438001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:29.398486  438001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:29.898494  438001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:29.924881  438001 api_server.go:72] duration metric: took 1.027247684s to wait for apiserver process to appear ...
	I0819 19:13:29.924918  438001 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:13:29.924944  438001 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I0819 19:13:29.925535  438001 api_server.go:269] stopped: https://192.168.39.106:8443/healthz: Get "https://192.168.39.106:8443/healthz": dial tcp 192.168.39.106:8443: connect: connection refused
	I0819 19:13:30.425624  438001 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I0819 19:13:29.193722  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:31.194540  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:32.406445  438001 api_server.go:279] https://192.168.39.106:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 19:13:32.406476  438001 api_server.go:103] status: https://192.168.39.106:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 19:13:32.406491  438001 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I0819 19:13:32.470160  438001 api_server.go:279] https://192.168.39.106:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 19:13:32.470195  438001 api_server.go:103] status: https://192.168.39.106:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 19:13:32.470211  438001 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I0819 19:13:32.486292  438001 api_server.go:279] https://192.168.39.106:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 19:13:32.486322  438001 api_server.go:103] status: https://192.168.39.106:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 19:13:32.925943  438001 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I0819 19:13:32.933024  438001 api_server.go:279] https://192.168.39.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:13:32.933068  438001 api_server.go:103] status: https://192.168.39.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:13:33.425638  438001 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I0819 19:13:33.431919  438001 api_server.go:279] https://192.168.39.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:13:33.432051  438001 api_server.go:103] status: https://192.168.39.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:13:33.925369  438001 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I0819 19:13:33.930489  438001 api_server.go:279] https://192.168.39.106:8443/healthz returned 200:
	ok
	I0819 19:13:33.937758  438001 api_server.go:141] control plane version: v1.31.0
	I0819 19:13:33.937789  438001 api_server.go:131] duration metric: took 4.012862801s to wait for apiserver health ...
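The healthz progression above (connection refused, then 403 for the anonymous probe, then 500 while post-start hooks such as rbac/bootstrap-roles are still failing, and finally 200 "ok") is the expected startup sequence for a restarted apiserver. An unauthenticated probe equivalent to what the log is doing (illustrative only; -k skips TLS verification against the minikube CA):

    # prints the HTTP status of the health endpoint: 403/500 while starting, 200 when ready
    curl -k -s -o /dev/null -w '%{http_code}\n' https://192.168.39.106:8443/healthz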
	I0819 19:13:33.937800  438001 cni.go:84] Creating CNI manager for ""
	I0819 19:13:33.937807  438001 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:13:33.939711  438001 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 19:13:30.419241  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:32.419437  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:30.723537  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:31.223437  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:31.723289  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:32.222714  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:32.723037  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:33.223138  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:33.723303  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:34.223334  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:34.722692  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:35.223021  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:33.941055  438001 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 19:13:33.953427  438001 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 19:13:33.982889  438001 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:13:33.998701  438001 system_pods.go:59] 8 kube-system pods found
	I0819 19:13:33.998750  438001 system_pods.go:61] "coredns-6f6b679f8f-22lbt" [c8a5cabd-41d4-41cb-91c1-2db1f3471db3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 19:13:33.998762  438001 system_pods.go:61] "etcd-no-preload-278232" [36d555a1-33e4-4c6c-b24e-2fee4fd84f2b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 19:13:33.998775  438001 system_pods.go:61] "kube-apiserver-no-preload-278232" [af7173e5-c4ac-4ece-b8b9-bb81cb6b9bfd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 19:13:33.998784  438001 system_pods.go:61] "kube-controller-manager-no-preload-278232" [2463d97a-5221-40ce-8fd7-08151165d6f7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 19:13:33.998794  438001 system_pods.go:61] "kube-proxy-rcf49" [85d5814a-1ba9-46be-ab11-17bf40c0f029] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0819 19:13:33.998807  438001 system_pods.go:61] "kube-scheduler-no-preload-278232" [3b327704-f70c-4d6f-a774-15427a305472] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 19:13:33.998819  438001 system_pods.go:61] "metrics-server-6867b74b74-vxwrs" [e8b74128-b393-4f0f-90fe-e05f20d54acd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:13:33.998827  438001 system_pods.go:61] "storage-provisioner" [24766475-1a5b-4f1a-9350-3e891b5272cc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0819 19:13:33.998841  438001 system_pods.go:74] duration metric: took 15.918876ms to wait for pod list to return data ...
	I0819 19:13:33.998853  438001 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:13:34.003102  438001 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:13:34.003131  438001 node_conditions.go:123] node cpu capacity is 2
	I0819 19:13:34.003145  438001 node_conditions.go:105] duration metric: took 4.283682ms to run NodePressure ...
	I0819 19:13:34.003163  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:34.300052  438001 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 19:13:34.304483  438001 kubeadm.go:739] kubelet initialised
	I0819 19:13:34.304505  438001 kubeadm.go:740] duration metric: took 4.421894ms waiting for restarted kubelet to initialise ...
	I0819 19:13:34.304513  438001 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:13:34.310575  438001 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-22lbt" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:34.316040  438001 pod_ready.go:98] node "no-preload-278232" hosting pod "coredns-6f6b679f8f-22lbt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.316068  438001 pod_ready.go:82] duration metric: took 5.462078ms for pod "coredns-6f6b679f8f-22lbt" in "kube-system" namespace to be "Ready" ...
	E0819 19:13:34.316080  438001 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-278232" hosting pod "coredns-6f6b679f8f-22lbt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.316088  438001 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:34.320731  438001 pod_ready.go:98] node "no-preload-278232" hosting pod "etcd-no-preload-278232" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.320751  438001 pod_ready.go:82] duration metric: took 4.649545ms for pod "etcd-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	E0819 19:13:34.320758  438001 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-278232" hosting pod "etcd-no-preload-278232" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.320763  438001 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:34.325499  438001 pod_ready.go:98] node "no-preload-278232" hosting pod "kube-apiserver-no-preload-278232" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.325519  438001 pod_ready.go:82] duration metric: took 4.750861ms for pod "kube-apiserver-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	E0819 19:13:34.325526  438001 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-278232" hosting pod "kube-apiserver-no-preload-278232" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.325531  438001 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:34.388221  438001 pod_ready.go:98] node "no-preload-278232" hosting pod "kube-controller-manager-no-preload-278232" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.388248  438001 pod_ready.go:82] duration metric: took 62.708596ms for pod "kube-controller-manager-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	E0819 19:13:34.388259  438001 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-278232" hosting pod "kube-controller-manager-no-preload-278232" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.388265  438001 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-rcf49" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:34.787164  438001 pod_ready.go:98] node "no-preload-278232" hosting pod "kube-proxy-rcf49" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.787193  438001 pod_ready.go:82] duration metric: took 398.919585ms for pod "kube-proxy-rcf49" in "kube-system" namespace to be "Ready" ...
	E0819 19:13:34.787203  438001 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-278232" hosting pod "kube-proxy-rcf49" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.787210  438001 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:35.186336  438001 pod_ready.go:98] node "no-preload-278232" hosting pod "kube-scheduler-no-preload-278232" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:35.186365  438001 pod_ready.go:82] duration metric: took 399.147858ms for pod "kube-scheduler-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	E0819 19:13:35.186377  438001 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-278232" hosting pod "kube-scheduler-no-preload-278232" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:35.186386  438001 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:35.586266  438001 pod_ready.go:98] node "no-preload-278232" hosting pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:35.586292  438001 pod_ready.go:82] duration metric: took 399.895038ms for pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace to be "Ready" ...
	E0819 19:13:35.586301  438001 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-278232" hosting pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:35.586307  438001 pod_ready.go:39] duration metric: took 1.281785432s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:13:35.586326  438001 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 19:13:35.598523  438001 ops.go:34] apiserver oom_adj: -16
	I0819 19:13:35.598545  438001 kubeadm.go:597] duration metric: took 8.20995933s to restartPrimaryControlPlane
	I0819 19:13:35.598554  438001 kubeadm.go:394] duration metric: took 8.259514907s to StartCluster
	I0819 19:13:35.598576  438001 settings.go:142] acquiring lock: {Name:mk396fcf49a1d0e69583cf37ff3c819e37118163 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:13:35.598662  438001 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 19:13:35.600424  438001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/kubeconfig: {Name:mk8e7b4e1bb7da665111d2acd83eb48882c66853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:13:35.600672  438001 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.106 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 19:13:35.600768  438001 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 19:13:35.600850  438001 addons.go:69] Setting storage-provisioner=true in profile "no-preload-278232"
	I0819 19:13:35.600879  438001 addons.go:69] Setting metrics-server=true in profile "no-preload-278232"
	I0819 19:13:35.600924  438001 addons.go:234] Setting addon metrics-server=true in "no-preload-278232"
	W0819 19:13:35.600938  438001 addons.go:243] addon metrics-server should already be in state true
	I0819 19:13:35.600884  438001 addons.go:234] Setting addon storage-provisioner=true in "no-preload-278232"
	W0819 19:13:35.600969  438001 addons.go:243] addon storage-provisioner should already be in state true
	I0819 19:13:35.600966  438001 config.go:182] Loaded profile config "no-preload-278232": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:13:35.600976  438001 host.go:66] Checking if "no-preload-278232" exists ...
	I0819 19:13:35.600988  438001 host.go:66] Checking if "no-preload-278232" exists ...
	I0819 19:13:35.601395  438001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:35.601428  438001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:35.601436  438001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:35.601453  438001 addons.go:69] Setting default-storageclass=true in profile "no-preload-278232"
	I0819 19:13:35.601501  438001 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-278232"
	I0819 19:13:35.601463  438001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:35.601898  438001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:35.601948  438001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:35.602507  438001 out.go:177] * Verifying Kubernetes components...
	I0819 19:13:35.604092  438001 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:13:35.617515  438001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34839
	I0819 19:13:35.617538  438001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36157
	I0819 19:13:35.617521  438001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35771
	I0819 19:13:35.618045  438001 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:35.618101  438001 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:35.618163  438001 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:35.618570  438001 main.go:141] libmachine: Using API Version  1
	I0819 19:13:35.618598  438001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:35.618712  438001 main.go:141] libmachine: Using API Version  1
	I0819 19:13:35.618734  438001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:35.618715  438001 main.go:141] libmachine: Using API Version  1
	I0819 19:13:35.618754  438001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:35.618989  438001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:35.619109  438001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:35.619111  438001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:35.619177  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetState
	I0819 19:13:35.619649  438001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:35.619693  438001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:35.619695  438001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:35.619768  438001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:35.641244  438001 addons.go:234] Setting addon default-storageclass=true in "no-preload-278232"
	W0819 19:13:35.641268  438001 addons.go:243] addon default-storageclass should already be in state true
	I0819 19:13:35.641298  438001 host.go:66] Checking if "no-preload-278232" exists ...
	I0819 19:13:35.641558  438001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:35.641610  438001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:35.659392  438001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39373
	I0819 19:13:35.659999  438001 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:35.660432  438001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38477
	I0819 19:13:35.660432  438001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35477
	I0819 19:13:35.660604  438001 main.go:141] libmachine: Using API Version  1
	I0819 19:13:35.660631  438001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:35.661089  438001 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:35.661149  438001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:35.661169  438001 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:35.661641  438001 main.go:141] libmachine: Using API Version  1
	I0819 19:13:35.661661  438001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:35.661757  438001 main.go:141] libmachine: Using API Version  1
	I0819 19:13:35.661772  438001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:35.661792  438001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:35.661826  438001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:35.662039  438001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:35.662142  438001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:35.662222  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetState
	I0819 19:13:35.662375  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetState
	I0819 19:13:35.664221  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:35.664397  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:35.666459  438001 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 19:13:35.666471  438001 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:13:35.667849  438001 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 19:13:35.667864  438001 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 19:13:35.667882  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:35.667944  438001 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:13:35.667959  438001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 19:13:35.667977  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:35.673516  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:35.673544  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:35.673520  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:35.673578  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:35.673593  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:35.673602  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:35.673521  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:35.673615  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:35.673793  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:35.673937  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:35.673986  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:35.674150  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:35.674324  438001 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa Username:docker}
	I0819 19:13:35.674350  438001 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa Username:docker}
	I0819 19:13:35.683691  438001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39783
	I0819 19:13:35.684219  438001 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:35.684806  438001 main.go:141] libmachine: Using API Version  1
	I0819 19:13:35.684831  438001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:35.685251  438001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:35.685515  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetState
	I0819 19:13:35.687268  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:35.687485  438001 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 19:13:35.687503  438001 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 19:13:35.687524  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:35.690504  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:35.691297  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:35.691333  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:35.691356  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:35.691477  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:35.691659  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:35.691814  438001 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa Username:docker}
	I0819 19:13:35.833054  438001 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:13:35.855442  438001 node_ready.go:35] waiting up to 6m0s for node "no-preload-278232" to be "Ready" ...
	I0819 19:13:35.923521  438001 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 19:13:35.923551  438001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 19:13:35.940005  438001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 19:13:35.965657  438001 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 19:13:35.965686  438001 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 19:13:36.002636  438001 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 19:13:36.002665  438001 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 19:13:36.024764  438001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:13:36.058824  438001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
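The three apply calls above install the default-storageclass, storage-provisioner, and metrics-server manifests with the node-local kubectl binary and kubeconfig. Condensed, the metrics-server step amounts to the following (paths exactly as in the log; K is an illustrative shorthand variable):

    K=/var/lib/minikube/binaries/v1.31.0/kubectl
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig "$K" apply \
      -f /etc/kubernetes/addons/metrics-apiservice.yaml \
      -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
      -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
      -f /etc/kubernetes/addons/metrics-server-service.yaml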
	I0819 19:13:36.420421  438001 main.go:141] libmachine: Making call to close driver server
	I0819 19:13:36.420452  438001 main.go:141] libmachine: (no-preload-278232) Calling .Close
	I0819 19:13:36.420785  438001 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:13:36.420804  438001 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:13:36.420844  438001 main.go:141] libmachine: (no-preload-278232) DBG | Closing plugin on server side
	I0819 19:13:36.420904  438001 main.go:141] libmachine: Making call to close driver server
	I0819 19:13:36.420918  438001 main.go:141] libmachine: (no-preload-278232) Calling .Close
	I0819 19:13:36.421185  438001 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:13:36.421210  438001 main.go:141] libmachine: (no-preload-278232) DBG | Closing plugin on server side
	I0819 19:13:36.421224  438001 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:13:36.429463  438001 main.go:141] libmachine: Making call to close driver server
	I0819 19:13:36.429481  438001 main.go:141] libmachine: (no-preload-278232) Calling .Close
	I0819 19:13:36.429811  438001 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:13:36.429830  438001 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:13:37.141893  438001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.117083882s)
	I0819 19:13:37.141987  438001 main.go:141] libmachine: Making call to close driver server
	I0819 19:13:37.141999  438001 main.go:141] libmachine: (no-preload-278232) Calling .Close
	I0819 19:13:37.142472  438001 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:13:37.142495  438001 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:13:37.142506  438001 main.go:141] libmachine: Making call to close driver server
	I0819 19:13:37.142515  438001 main.go:141] libmachine: (no-preload-278232) Calling .Close
	I0819 19:13:37.142788  438001 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:13:37.142808  438001 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:13:37.142814  438001 main.go:141] libmachine: (no-preload-278232) DBG | Closing plugin on server side
	I0819 19:13:37.161659  438001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.10278963s)
	I0819 19:13:37.161723  438001 main.go:141] libmachine: Making call to close driver server
	I0819 19:13:37.161739  438001 main.go:141] libmachine: (no-preload-278232) Calling .Close
	I0819 19:13:37.162067  438001 main.go:141] libmachine: (no-preload-278232) DBG | Closing plugin on server side
	I0819 19:13:37.162099  438001 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:13:37.162125  438001 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:13:37.162142  438001 main.go:141] libmachine: Making call to close driver server
	I0819 19:13:37.162154  438001 main.go:141] libmachine: (no-preload-278232) Calling .Close
	I0819 19:13:37.162404  438001 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:13:37.162420  438001 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:13:37.162432  438001 addons.go:475] Verifying addon metrics-server=true in "no-preload-278232"
	I0819 19:13:37.164423  438001 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0819 19:13:33.694203  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:35.694403  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:34.918988  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:36.919564  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:35.722784  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:36.223168  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:36.723041  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:37.222801  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:37.722855  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:38.223296  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:38.722936  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:39.223326  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:39.722883  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:40.223284  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:37.165767  438001 addons.go:510] duration metric: took 1.565026237s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
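
The lines above show the metrics-server addon being enabled on no-preload-278232: minikube runs a single kubectl apply over SSH against the four addon manifests and reports the elapsed time (1.10278963s). A minimal, hedged Go sketch of timing that same apply is below; running kubectl locally via os/exec is an assumption for illustration only, since minikube actually executes the command inside the guest with the kubeconfig at /var/lib/minikube/kubeconfig.

    // Hedged sketch: time a "kubectl apply" of the metrics-server addon manifests,
    // mirroring the ssh_runner call in the log. Local execution and kubectl being
    // on PATH are assumptions; minikube runs the equivalent command over SSH.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	manifests := []string{
    		"/etc/kubernetes/addons/metrics-apiservice.yaml",
    		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
    		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
    		"/etc/kubernetes/addons/metrics-server-service.yaml",
    	}
    	args := []string{"apply"}
    	for _, m := range manifests {
    		args = append(args, "-f", m)
    	}

    	start := time.Now()
    	out, err := exec.Command("kubectl", args...).CombinedOutput()
    	fmt.Printf("kubectl apply took %s\n%s", time.Since(start), out)
    	if err != nil {
    		fmt.Println("apply failed:", err)
    	}
    }
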
	I0819 19:13:37.859454  438001 node_ready.go:53] node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:39.859662  438001 node_ready.go:53] node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:38.193207  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:40.694127  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:39.418572  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:41.918302  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:43.918558  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:40.722612  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:41.222700  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:41.723144  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:42.223369  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:42.723209  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:43.222849  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:43.723518  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:44.223585  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:44.722772  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:45.223078  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:41.859965  438001 node_ready.go:53] node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:43.359120  438001 node_ready.go:49] node "no-preload-278232" has status "Ready":"True"
	I0819 19:13:43.359151  438001 node_ready.go:38] duration metric: took 7.503671074s for node "no-preload-278232" to be "Ready" ...
	I0819 19:13:43.359169  438001 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:13:43.365307  438001 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-22lbt" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:43.369626  438001 pod_ready.go:93] pod "coredns-6f6b679f8f-22lbt" in "kube-system" namespace has status "Ready":"True"
	I0819 19:13:43.369646  438001 pod_ready.go:82] duration metric: took 4.316734ms for pod "coredns-6f6b679f8f-22lbt" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:43.369654  438001 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:45.377672  438001 pod_ready.go:103] pod "etcd-no-preload-278232" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:43.193775  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:45.693494  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:45.919705  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:48.418981  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:45.723287  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:46.223666  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:46.722754  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:47.223414  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:47.723567  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:48.222938  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:48.723011  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:49.223076  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:49.723443  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:50.223627  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:47.875409  438001 pod_ready.go:103] pod "etcd-no-preload-278232" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:48.377127  438001 pod_ready.go:93] pod "etcd-no-preload-278232" in "kube-system" namespace has status "Ready":"True"
	I0819 19:13:48.377155  438001 pod_ready.go:82] duration metric: took 5.007493319s for pod "etcd-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.377169  438001 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.381841  438001 pod_ready.go:93] pod "kube-apiserver-no-preload-278232" in "kube-system" namespace has status "Ready":"True"
	I0819 19:13:48.381864  438001 pod_ready.go:82] duration metric: took 4.686309ms for pod "kube-apiserver-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.381877  438001 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.386382  438001 pod_ready.go:93] pod "kube-controller-manager-no-preload-278232" in "kube-system" namespace has status "Ready":"True"
	I0819 19:13:48.386397  438001 pod_ready.go:82] duration metric: took 4.514361ms for pod "kube-controller-manager-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.386405  438001 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rcf49" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.390940  438001 pod_ready.go:93] pod "kube-proxy-rcf49" in "kube-system" namespace has status "Ready":"True"
	I0819 19:13:48.390955  438001 pod_ready.go:82] duration metric: took 4.544499ms for pod "kube-proxy-rcf49" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.390963  438001 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.395159  438001 pod_ready.go:93] pod "kube-scheduler-no-preload-278232" in "kube-system" namespace has status "Ready":"True"
	I0819 19:13:48.395180  438001 pod_ready.go:82] duration metric: took 4.211012ms for pod "kube-scheduler-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.395197  438001 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace to be "Ready" ...
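
The pod_ready lines above poll each system-critical pod every couple of seconds until its Ready condition turns True, recording the elapsed time; the remaining metrics-server wait is what keeps logging "Ready":"False" below. A minimal client-go sketch of that kind of wait, under stated assumptions (kubeconfig path taken from the log, pod name from the wait just started, 2s poll interval and 6m budget as in the log), follows.

    // Hedged sketch of a pod_ready-style wait: poll a kube-system pod until its
    // PodReady condition is True or the 6m budget expires.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func podReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
    	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
    	defer cancel()
    	for {
    		ready, err := podReady(ctx, cs, "kube-system", "metrics-server-6867b74b74-vxwrs")
    		if err == nil && ready {
    			fmt.Println("pod is Ready")
    			return
    		}
    		select {
    		case <-ctx.Done():
    			fmt.Println("timed out waiting for pod")
    			return
    		case <-time.After(2 * time.Second):
    		}
    	}
    }
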
	I0819 19:13:50.402109  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:47.693601  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:50.193183  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:50.918811  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:52.919981  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:50.723259  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:51.222697  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:51.723284  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:52.222757  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:52.723414  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:53.223202  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:53.722721  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:54.223578  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:54.723400  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:55.222730  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:52.901901  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:54.903583  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:52.693231  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:54.693934  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:56.695700  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:55.418965  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:57.918885  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:55.723644  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:56.223212  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:56.722729  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:57.223226  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:57.723045  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:58.222901  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:58.722710  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:59.223149  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:59.723186  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
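
Process 438716 (the old-k8s-version node) has been probing for a kube-apiserver process every 500ms with the pgrep pattern shown above; since it never finds one, it falls back below to listing CRI containers and gathering logs. A hedged Go sketch of that liveness probe, with the pattern and interval taken from the log and local (non-SSH) execution as an assumption:

    // Hedged sketch: retry "pgrep -xnf kube-apiserver.*minikube.*" every 500ms
    // until a matching process appears or the retry budget is spent.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	for i := 0; i < 20; i++ {
    		// pgrep exits 0 when at least one process matches the pattern.
    		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
    			fmt.Println("kube-apiserver process found")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("kube-apiserver never came up; falling back to container and log inspection")
    }
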
	I0819 19:14:00.222763  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:00.222844  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:00.271266  438716 cri.go:89] found id: ""
	I0819 19:14:00.271296  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.271305  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:00.271312  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:00.271373  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:00.311870  438716 cri.go:89] found id: ""
	I0819 19:14:00.311900  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.311936  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:00.311946  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:00.312011  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:00.350476  438716 cri.go:89] found id: ""
	I0819 19:14:00.350505  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.350514  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:00.350520  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:00.350586  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:00.387404  438716 cri.go:89] found id: ""
	I0819 19:14:00.387438  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.387447  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:00.387457  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:00.387516  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:00.423493  438716 cri.go:89] found id: ""
	I0819 19:14:00.423521  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.423529  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:00.423535  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:00.423596  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:00.458593  438716 cri.go:89] found id: ""
	I0819 19:14:00.458630  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.458642  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:00.458651  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:00.458722  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:00.495645  438716 cri.go:89] found id: ""
	I0819 19:14:00.495695  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.495709  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:00.495717  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:00.495782  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:00.531464  438716 cri.go:89] found id: ""
	I0819 19:14:00.531498  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.531508  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:00.531529  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:00.531543  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:13:57.401329  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:59.402701  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:59.192781  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:01.194411  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:00.419287  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:02.918450  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:00.584029  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:00.584078  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:00.597870  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:00.597908  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:00.746061  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:00.746085  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:00.746098  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:00.818001  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:00.818042  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
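
That completes one full "Gathering logs for ..." pass: kubelet journal, dmesg, describe nodes (which fails because nothing is listening on localhost:8443), the CRI-O journal, and container status; the same cycle repeats every few seconds below. A hedged Go sketch that runs the same diagnostic commands and dumps their output is shown here; executing them directly on the node is an assumption, since minikube invokes them over SSH via ssh_runner.

    // Hedged sketch: collect the same diagnostics the log gathers after the
    // apiserver fails to appear. Commands are copied from the log lines above.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmds := []struct{ name, cmd string }{
    		{"kubelet", "journalctl -u kubelet -n 400"},
    		{"dmesg", "dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
    		{"CRI-O", "journalctl -u crio -n 400"},
    		{"container status", "`which crictl || echo crictl` ps -a || docker ps -a"},
    	}
    	for _, c := range cmds {
    		fmt.Println("==>", c.name)
    		out, err := exec.Command("/bin/bash", "-c", "sudo "+c.cmd).CombinedOutput()
    		fmt.Println(string(out))
    		if err != nil {
    			fmt.Println("command failed:", err)
    		}
    	}
    }
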
	I0819 19:14:03.358509  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:03.371262  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:03.371345  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:03.408201  438716 cri.go:89] found id: ""
	I0819 19:14:03.408231  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.408241  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:03.408248  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:03.408306  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:03.445354  438716 cri.go:89] found id: ""
	I0819 19:14:03.445386  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.445396  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:03.445408  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:03.445470  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:03.481144  438716 cri.go:89] found id: ""
	I0819 19:14:03.481178  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.481188  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:03.481195  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:03.481260  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:03.529069  438716 cri.go:89] found id: ""
	I0819 19:14:03.529109  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.529141  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:03.529148  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:03.529216  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:03.590325  438716 cri.go:89] found id: ""
	I0819 19:14:03.590364  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.590377  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:03.590386  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:03.590456  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:03.634924  438716 cri.go:89] found id: ""
	I0819 19:14:03.634969  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.634981  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:03.634990  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:03.635062  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:03.684133  438716 cri.go:89] found id: ""
	I0819 19:14:03.684164  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.684176  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:03.684184  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:03.684253  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:03.722285  438716 cri.go:89] found id: ""
	I0819 19:14:03.722312  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.722321  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:03.722330  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:03.722372  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:03.735937  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:03.735965  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:03.814906  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:03.814931  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:03.814948  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:03.896323  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:03.896363  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:03.943002  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:03.943037  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:01.901154  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:03.902972  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:05.903388  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:03.694686  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:06.193228  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:04.919332  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:07.419221  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:06.496886  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:06.510719  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:06.510790  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:06.544692  438716 cri.go:89] found id: ""
	I0819 19:14:06.544724  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.544737  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:06.544747  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:06.544818  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:06.578935  438716 cri.go:89] found id: ""
	I0819 19:14:06.578962  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.578971  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:06.578979  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:06.579033  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:06.614488  438716 cri.go:89] found id: ""
	I0819 19:14:06.614516  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.614525  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:06.614532  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:06.614583  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:06.648579  438716 cri.go:89] found id: ""
	I0819 19:14:06.648612  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.648623  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:06.648630  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:06.648685  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:06.685168  438716 cri.go:89] found id: ""
	I0819 19:14:06.685198  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.685208  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:06.685217  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:06.685280  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:06.720391  438716 cri.go:89] found id: ""
	I0819 19:14:06.720424  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.720433  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:06.720440  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:06.720491  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:06.758183  438716 cri.go:89] found id: ""
	I0819 19:14:06.758217  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.758228  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:06.758237  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:06.758307  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:06.800182  438716 cri.go:89] found id: ""
	I0819 19:14:06.800215  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.800224  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:06.800234  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:06.800247  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:06.852735  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:06.852777  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:06.867214  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:06.867249  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:06.938942  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:06.938967  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:06.938980  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:07.023950  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:07.023992  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:09.568889  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:09.588481  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:09.588545  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:09.630790  438716 cri.go:89] found id: ""
	I0819 19:14:09.630825  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.630839  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:09.630848  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:09.630926  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:09.673258  438716 cri.go:89] found id: ""
	I0819 19:14:09.673291  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.673302  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:09.673311  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:09.673374  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:09.709500  438716 cri.go:89] found id: ""
	I0819 19:14:09.709530  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.709541  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:09.709549  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:09.709617  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:09.743110  438716 cri.go:89] found id: ""
	I0819 19:14:09.743139  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.743150  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:09.743164  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:09.743238  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:09.776717  438716 cri.go:89] found id: ""
	I0819 19:14:09.776746  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.776754  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:09.776761  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:09.776820  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:09.811381  438716 cri.go:89] found id: ""
	I0819 19:14:09.811409  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.811417  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:09.811423  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:09.811474  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:09.843699  438716 cri.go:89] found id: ""
	I0819 19:14:09.843730  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.843741  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:09.843750  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:09.843822  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:09.882972  438716 cri.go:89] found id: ""
	I0819 19:14:09.883005  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.883018  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:09.883033  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:09.883050  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:09.973077  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:09.973114  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:10.014505  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:10.014556  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:10.069779  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:10.069819  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:10.084337  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:10.084367  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:10.164870  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:08.402464  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:10.900684  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:08.193980  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:10.194818  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:09.918852  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:12.419687  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:12.665929  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:12.679881  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:12.679960  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:12.718305  438716 cri.go:89] found id: ""
	I0819 19:14:12.718332  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.718341  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:12.718348  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:12.718398  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:12.759084  438716 cri.go:89] found id: ""
	I0819 19:14:12.759112  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.759127  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:12.759135  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:12.759205  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:12.793193  438716 cri.go:89] found id: ""
	I0819 19:14:12.793228  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.793238  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:12.793245  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:12.793299  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:12.828283  438716 cri.go:89] found id: ""
	I0819 19:14:12.828310  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.828322  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:12.828329  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:12.828379  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:12.861971  438716 cri.go:89] found id: ""
	I0819 19:14:12.862004  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.862016  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:12.862025  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:12.862092  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:12.898173  438716 cri.go:89] found id: ""
	I0819 19:14:12.898203  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.898214  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:12.898223  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:12.898287  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:12.940203  438716 cri.go:89] found id: ""
	I0819 19:14:12.940234  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.940246  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:12.940254  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:12.940309  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:12.978092  438716 cri.go:89] found id: ""
	I0819 19:14:12.978123  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.978134  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:12.978147  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:12.978172  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:12.992082  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:12.992117  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:13.073609  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:13.073636  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:13.073649  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:13.153060  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:13.153105  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:13.196535  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:13.196581  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:12.903116  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:15.401183  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:12.693872  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:14.694252  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:17.193116  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:14.919563  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:17.418946  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:15.750298  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:15.763913  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:15.763996  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:15.804515  438716 cri.go:89] found id: ""
	I0819 19:14:15.804542  438716 logs.go:276] 0 containers: []
	W0819 19:14:15.804551  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:15.804558  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:15.804624  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:15.847077  438716 cri.go:89] found id: ""
	I0819 19:14:15.847112  438716 logs.go:276] 0 containers: []
	W0819 19:14:15.847125  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:15.847133  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:15.847200  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:15.882316  438716 cri.go:89] found id: ""
	I0819 19:14:15.882348  438716 logs.go:276] 0 containers: []
	W0819 19:14:15.882358  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:15.882365  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:15.882417  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:15.919084  438716 cri.go:89] found id: ""
	I0819 19:14:15.919114  438716 logs.go:276] 0 containers: []
	W0819 19:14:15.919125  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:15.919132  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:15.919202  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:15.953139  438716 cri.go:89] found id: ""
	I0819 19:14:15.953175  438716 logs.go:276] 0 containers: []
	W0819 19:14:15.953188  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:15.953209  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:15.953276  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:15.993231  438716 cri.go:89] found id: ""
	I0819 19:14:15.993259  438716 logs.go:276] 0 containers: []
	W0819 19:14:15.993268  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:15.993286  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:15.993337  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:16.030382  438716 cri.go:89] found id: ""
	I0819 19:14:16.030412  438716 logs.go:276] 0 containers: []
	W0819 19:14:16.030422  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:16.030428  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:16.030482  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:16.065834  438716 cri.go:89] found id: ""
	I0819 19:14:16.065861  438716 logs.go:276] 0 containers: []
	W0819 19:14:16.065872  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:16.065885  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:16.065901  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:16.117943  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:16.117983  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:16.132010  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:16.132041  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:16.202398  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:16.202416  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:16.202429  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:16.286609  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:16.286653  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:18.830502  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:18.844022  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:18.844107  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:18.880539  438716 cri.go:89] found id: ""
	I0819 19:14:18.880576  438716 logs.go:276] 0 containers: []
	W0819 19:14:18.880588  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:18.880595  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:18.880657  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:18.918426  438716 cri.go:89] found id: ""
	I0819 19:14:18.918454  438716 logs.go:276] 0 containers: []
	W0819 19:14:18.918463  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:18.918470  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:18.918531  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:18.954534  438716 cri.go:89] found id: ""
	I0819 19:14:18.954566  438716 logs.go:276] 0 containers: []
	W0819 19:14:18.954578  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:18.954587  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:18.954651  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:18.993820  438716 cri.go:89] found id: ""
	I0819 19:14:18.993852  438716 logs.go:276] 0 containers: []
	W0819 19:14:18.993864  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:18.993885  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:18.993967  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:19.026947  438716 cri.go:89] found id: ""
	I0819 19:14:19.026982  438716 logs.go:276] 0 containers: []
	W0819 19:14:19.026995  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:19.027005  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:19.027072  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:19.062097  438716 cri.go:89] found id: ""
	I0819 19:14:19.062130  438716 logs.go:276] 0 containers: []
	W0819 19:14:19.062142  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:19.062150  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:19.062207  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:19.099522  438716 cri.go:89] found id: ""
	I0819 19:14:19.099549  438716 logs.go:276] 0 containers: []
	W0819 19:14:19.099559  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:19.099567  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:19.099630  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:19.134766  438716 cri.go:89] found id: ""
	I0819 19:14:19.134803  438716 logs.go:276] 0 containers: []
	W0819 19:14:19.134815  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:19.134850  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:19.134867  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:19.176428  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:19.176458  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:19.231448  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:19.231484  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:19.245631  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:19.245687  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:19.318679  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:19.318703  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:19.318717  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:17.401916  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:19.402628  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:19.195224  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:21.693528  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:19.918727  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:21.918863  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:23.919050  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:21.898430  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:21.913840  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:21.913911  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:21.955682  438716 cri.go:89] found id: ""
	I0819 19:14:21.955720  438716 logs.go:276] 0 containers: []
	W0819 19:14:21.955732  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:21.955743  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:21.955820  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:21.994798  438716 cri.go:89] found id: ""
	I0819 19:14:21.994836  438716 logs.go:276] 0 containers: []
	W0819 19:14:21.994845  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:21.994852  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:21.994904  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:22.029155  438716 cri.go:89] found id: ""
	I0819 19:14:22.029191  438716 logs.go:276] 0 containers: []
	W0819 19:14:22.029202  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:22.029210  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:22.029281  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:22.072489  438716 cri.go:89] found id: ""
	I0819 19:14:22.072534  438716 logs.go:276] 0 containers: []
	W0819 19:14:22.072546  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:22.072559  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:22.072621  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:22.109160  438716 cri.go:89] found id: ""
	I0819 19:14:22.109192  438716 logs.go:276] 0 containers: []
	W0819 19:14:22.109203  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:22.109211  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:22.109281  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:22.146161  438716 cri.go:89] found id: ""
	I0819 19:14:22.146194  438716 logs.go:276] 0 containers: []
	W0819 19:14:22.146206  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:22.146215  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:22.146276  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:22.183005  438716 cri.go:89] found id: ""
	I0819 19:14:22.183033  438716 logs.go:276] 0 containers: []
	W0819 19:14:22.183046  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:22.183054  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:22.183108  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:22.220745  438716 cri.go:89] found id: ""
	I0819 19:14:22.220772  438716 logs.go:276] 0 containers: []
	W0819 19:14:22.220784  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:22.220798  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:22.220817  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:22.297377  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:22.297403  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:22.297416  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:22.373503  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:22.373542  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:22.414922  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:22.414956  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:22.477902  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:22.477944  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:24.993405  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:25.007305  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:25.007379  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:25.041157  438716 cri.go:89] found id: ""
	I0819 19:14:25.041191  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.041203  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:25.041211  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:25.041278  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:25.078572  438716 cri.go:89] found id: ""
	I0819 19:14:25.078605  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.078617  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:25.078625  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:25.078695  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:25.114571  438716 cri.go:89] found id: ""
	I0819 19:14:25.114603  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.114615  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:25.114624  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:25.114690  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:25.154341  438716 cri.go:89] found id: ""
	I0819 19:14:25.154366  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.154375  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:25.154381  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:25.154434  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:25.192592  438716 cri.go:89] found id: ""
	I0819 19:14:25.192620  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.192631  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:25.192640  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:25.192705  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:25.227813  438716 cri.go:89] found id: ""
	I0819 19:14:25.227847  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.227860  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:25.227869  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:25.227933  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:25.264321  438716 cri.go:89] found id: ""
	I0819 19:14:25.264349  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.264357  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:25.264364  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:25.264427  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:25.298562  438716 cri.go:89] found id: ""
	I0819 19:14:25.298596  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.298608  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:25.298621  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:25.298638  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:25.352659  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:25.352695  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:25.366638  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:25.366665  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:25.432964  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:25.432992  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:25.433010  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:25.511487  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:25.511549  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:21.902660  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:24.401454  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:26.402255  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:24.193406  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:26.194758  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:25.919090  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:28.420031  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:28.057003  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:28.070849  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:28.070914  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:28.107817  438716 cri.go:89] found id: ""
	I0819 19:14:28.107852  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.107865  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:28.107875  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:28.107948  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:28.141816  438716 cri.go:89] found id: ""
	I0819 19:14:28.141862  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.141874  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:28.141887  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:28.141958  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:28.179854  438716 cri.go:89] found id: ""
	I0819 19:14:28.179885  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.179893  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:28.179905  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:28.179972  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:28.217335  438716 cri.go:89] found id: ""
	I0819 19:14:28.217364  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.217372  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:28.217380  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:28.217438  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:28.254161  438716 cri.go:89] found id: ""
	I0819 19:14:28.254193  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.254204  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:28.254213  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:28.254276  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:28.288658  438716 cri.go:89] found id: ""
	I0819 19:14:28.288682  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.288691  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:28.288698  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:28.288749  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:28.321957  438716 cri.go:89] found id: ""
	I0819 19:14:28.321987  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.321996  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:28.322004  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:28.322057  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:28.355032  438716 cri.go:89] found id: ""
	I0819 19:14:28.355068  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.355080  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:28.355094  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:28.355111  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:28.406220  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:28.406253  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:28.420877  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:28.420907  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:28.502576  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:28.502598  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:28.502614  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:28.582717  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:28.582769  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:28.904716  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:31.401098  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:28.195001  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:30.693605  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:30.917957  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:32.918239  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:31.121960  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:31.135502  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:31.135568  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:31.170423  438716 cri.go:89] found id: ""
	I0819 19:14:31.170451  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.170461  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:31.170467  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:31.170532  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:31.207328  438716 cri.go:89] found id: ""
	I0819 19:14:31.207356  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.207364  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:31.207370  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:31.207430  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:31.245655  438716 cri.go:89] found id: ""
	I0819 19:14:31.245687  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.245698  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:31.245707  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:31.245773  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:31.282174  438716 cri.go:89] found id: ""
	I0819 19:14:31.282208  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.282221  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:31.282230  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:31.282303  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:31.316779  438716 cri.go:89] found id: ""
	I0819 19:14:31.316810  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.316818  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:31.316826  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:31.316879  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:31.356849  438716 cri.go:89] found id: ""
	I0819 19:14:31.356884  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.356894  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:31.356900  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:31.356963  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:31.395102  438716 cri.go:89] found id: ""
	I0819 19:14:31.395135  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.395143  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:31.395150  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:31.395205  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:31.433018  438716 cri.go:89] found id: ""
	I0819 19:14:31.433045  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.433076  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:31.433091  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:31.433108  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:31.446294  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:31.446319  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:31.518158  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:31.518180  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:31.518196  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:31.600568  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:31.600611  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:31.642356  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:31.642386  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:34.195665  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:34.210300  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:34.210370  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:34.248715  438716 cri.go:89] found id: ""
	I0819 19:14:34.248753  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.248767  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:34.248775  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:34.248849  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:34.285305  438716 cri.go:89] found id: ""
	I0819 19:14:34.285334  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.285347  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:34.285355  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:34.285438  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:34.326114  438716 cri.go:89] found id: ""
	I0819 19:14:34.326148  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.326160  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:34.326168  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:34.326235  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:34.360587  438716 cri.go:89] found id: ""
	I0819 19:14:34.360616  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.360628  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:34.360638  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:34.360715  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:34.397452  438716 cri.go:89] found id: ""
	I0819 19:14:34.397483  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.397491  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:34.397498  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:34.397556  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:34.433651  438716 cri.go:89] found id: ""
	I0819 19:14:34.433683  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.433694  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:34.433702  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:34.433771  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:34.468758  438716 cri.go:89] found id: ""
	I0819 19:14:34.468787  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.468796  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:34.468802  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:34.468856  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:34.505787  438716 cri.go:89] found id: ""
	I0819 19:14:34.505816  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.505828  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:34.505842  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:34.505859  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:34.519430  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:34.519463  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:34.592785  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:34.592810  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:34.592827  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:34.671215  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:34.671254  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:34.711248  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:34.711277  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:33.403429  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:35.901124  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:33.194319  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:35.694280  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:34.918372  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:37.418982  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:37.265131  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:37.279035  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:37.279127  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:37.325556  438716 cri.go:89] found id: ""
	I0819 19:14:37.325589  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.325601  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:37.325610  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:37.325676  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:37.360514  438716 cri.go:89] found id: ""
	I0819 19:14:37.360541  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.360553  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:37.360561  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:37.360616  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:37.394428  438716 cri.go:89] found id: ""
	I0819 19:14:37.394456  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.394465  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:37.394472  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:37.394531  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:37.430221  438716 cri.go:89] found id: ""
	I0819 19:14:37.430249  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.430257  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:37.430264  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:37.430324  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:37.466598  438716 cri.go:89] found id: ""
	I0819 19:14:37.466630  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.466641  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:37.466649  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:37.466719  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:37.510455  438716 cri.go:89] found id: ""
	I0819 19:14:37.510484  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.510492  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:37.510499  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:37.510563  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:37.546122  438716 cri.go:89] found id: ""
	I0819 19:14:37.546157  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.546169  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:37.546178  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:37.546247  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:37.579425  438716 cri.go:89] found id: ""
	I0819 19:14:37.579452  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.579463  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:37.579475  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:37.579491  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:37.592673  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:37.592704  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:37.674026  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:37.674048  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:37.674065  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:37.752206  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:37.752244  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:37.791281  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:37.791321  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:40.345520  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:40.358771  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:40.358835  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:40.394515  438716 cri.go:89] found id: ""
	I0819 19:14:40.394549  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.394565  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:40.394575  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:40.394637  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:40.430971  438716 cri.go:89] found id: ""
	I0819 19:14:40.431007  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.431018  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:40.431027  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:40.431094  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:40.471417  438716 cri.go:89] found id: ""
	I0819 19:14:40.471443  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.471452  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:40.471458  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:40.471511  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:40.508641  438716 cri.go:89] found id: ""
	I0819 19:14:40.508670  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.508678  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:40.508684  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:40.508749  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:37.903083  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:40.402562  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:37.695031  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:40.193724  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:39.921480  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:42.420201  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:40.542418  438716 cri.go:89] found id: ""
	I0819 19:14:40.542456  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.542465  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:40.542472  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:40.542533  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:40.577367  438716 cri.go:89] found id: ""
	I0819 19:14:40.577399  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.577408  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:40.577414  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:40.577476  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:40.611111  438716 cri.go:89] found id: ""
	I0819 19:14:40.611138  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.611147  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:40.611155  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:40.611222  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:40.650769  438716 cri.go:89] found id: ""
	I0819 19:14:40.650797  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.650805  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:40.650814  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:40.650827  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:40.688085  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:40.688111  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:40.740187  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:40.740225  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:40.754774  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:40.754803  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:40.828689  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:40.828712  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:40.828728  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:43.419171  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:43.432127  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:43.432201  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:43.468751  438716 cri.go:89] found id: ""
	I0819 19:14:43.468778  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.468787  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:43.468803  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:43.468870  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:43.503290  438716 cri.go:89] found id: ""
	I0819 19:14:43.503319  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.503328  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:43.503334  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:43.503390  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:43.536382  438716 cri.go:89] found id: ""
	I0819 19:14:43.536416  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.536435  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:43.536443  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:43.536494  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:43.571570  438716 cri.go:89] found id: ""
	I0819 19:14:43.571602  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.571611  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:43.571617  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:43.571682  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:43.610421  438716 cri.go:89] found id: ""
	I0819 19:14:43.610455  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.610465  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:43.610473  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:43.610524  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:43.647173  438716 cri.go:89] found id: ""
	I0819 19:14:43.647200  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.647209  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:43.647215  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:43.647266  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:43.684493  438716 cri.go:89] found id: ""
	I0819 19:14:43.684525  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.684535  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:43.684541  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:43.684609  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:43.718781  438716 cri.go:89] found id: ""
	I0819 19:14:43.718811  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.718822  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:43.718834  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:43.718858  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:43.732546  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:43.732578  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:43.819640  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:43.819665  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:43.819700  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:43.900246  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:43.900286  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:43.941751  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:43.941783  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:42.901387  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:44.901876  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:42.693950  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:45.193132  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:44.918631  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:47.417977  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:46.498232  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:46.511167  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:46.511237  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:46.545493  438716 cri.go:89] found id: ""
	I0819 19:14:46.545528  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.545541  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:46.545549  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:46.545607  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:46.580599  438716 cri.go:89] found id: ""
	I0819 19:14:46.580626  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.580634  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:46.580640  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:46.580760  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:46.614515  438716 cri.go:89] found id: ""
	I0819 19:14:46.614551  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.614561  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:46.614570  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:46.614637  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:46.647767  438716 cri.go:89] found id: ""
	I0819 19:14:46.647803  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.647816  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:46.647825  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:46.647893  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:46.681660  438716 cri.go:89] found id: ""
	I0819 19:14:46.681695  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.681707  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:46.681717  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:46.681788  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:46.718828  438716 cri.go:89] found id: ""
	I0819 19:14:46.718858  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.718868  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:46.718875  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:46.718929  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:46.760524  438716 cri.go:89] found id: ""
	I0819 19:14:46.760553  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.760561  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:46.760569  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:46.760634  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:46.799014  438716 cri.go:89] found id: ""
	I0819 19:14:46.799042  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.799054  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:46.799067  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:46.799135  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:46.850769  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:46.850812  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:46.865647  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:46.865698  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:46.942197  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:46.942228  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:46.942244  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:47.019295  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:47.019337  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:49.562713  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:49.575406  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:49.575484  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:49.610067  438716 cri.go:89] found id: ""
	I0819 19:14:49.610105  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.610115  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:49.610121  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:49.610182  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:49.646164  438716 cri.go:89] found id: ""
	I0819 19:14:49.646205  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.646230  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:49.646238  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:49.646317  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:49.680268  438716 cri.go:89] found id: ""
	I0819 19:14:49.680303  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.680314  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:49.680322  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:49.680387  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:49.714952  438716 cri.go:89] found id: ""
	I0819 19:14:49.714981  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.714992  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:49.715001  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:49.715067  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:49.749483  438716 cri.go:89] found id: ""
	I0819 19:14:49.749516  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.749528  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:49.749537  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:49.749616  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:49.794506  438716 cri.go:89] found id: ""
	I0819 19:14:49.794538  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.794550  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:49.794558  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:49.794628  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:49.847284  438716 cri.go:89] found id: ""
	I0819 19:14:49.847313  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.847324  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:49.847334  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:49.847398  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:49.903800  438716 cri.go:89] found id: ""
	I0819 19:14:49.903829  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.903839  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:49.903850  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:49.903867  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:49.972836  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:49.972866  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:49.972885  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:50.049939  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:50.049976  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:50.086514  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:50.086550  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:50.140681  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:50.140718  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:46.903667  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:49.402220  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:51.402281  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:47.693723  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:49.694755  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:52.193220  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:49.919931  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:52.419880  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:52.656573  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:52.670043  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:52.670124  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:52.704514  438716 cri.go:89] found id: ""
	I0819 19:14:52.704541  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.704551  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:52.704558  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:52.704621  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:52.738329  438716 cri.go:89] found id: ""
	I0819 19:14:52.738357  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.738365  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:52.738371  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:52.738423  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:52.774886  438716 cri.go:89] found id: ""
	I0819 19:14:52.774917  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.774926  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:52.774933  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:52.774986  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:52.810262  438716 cri.go:89] found id: ""
	I0819 19:14:52.810288  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.810296  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:52.810303  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:52.810363  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:52.848429  438716 cri.go:89] found id: ""
	I0819 19:14:52.848455  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.848463  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:52.848474  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:52.848539  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:52.886135  438716 cri.go:89] found id: ""
	I0819 19:14:52.886163  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.886179  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:52.886185  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:52.886241  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:52.923288  438716 cri.go:89] found id: ""
	I0819 19:14:52.923314  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.923325  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:52.923333  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:52.923397  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:52.957273  438716 cri.go:89] found id: ""
	I0819 19:14:52.957303  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.957315  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:52.957328  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:52.957345  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:52.970687  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:52.970714  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:53.045081  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:53.045108  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:53.045125  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:53.122233  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:53.122279  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:53.161525  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:53.161554  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:53.901584  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:55.902739  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:54.194220  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:56.197070  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:54.917358  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:56.918562  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:58.919041  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:55.714177  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:55.733726  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:55.733809  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:55.781435  438716 cri.go:89] found id: ""
	I0819 19:14:55.781472  438716 logs.go:276] 0 containers: []
	W0819 19:14:55.781485  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:55.781493  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:55.781560  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:55.846316  438716 cri.go:89] found id: ""
	I0819 19:14:55.846351  438716 logs.go:276] 0 containers: []
	W0819 19:14:55.846362  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:55.846370  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:55.846439  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:55.881587  438716 cri.go:89] found id: ""
	I0819 19:14:55.881623  438716 logs.go:276] 0 containers: []
	W0819 19:14:55.881635  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:55.881644  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:55.881719  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:55.919332  438716 cri.go:89] found id: ""
	I0819 19:14:55.919374  438716 logs.go:276] 0 containers: []
	W0819 19:14:55.919382  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:55.919389  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:55.919441  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:55.954704  438716 cri.go:89] found id: ""
	I0819 19:14:55.954739  438716 logs.go:276] 0 containers: []
	W0819 19:14:55.954752  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:55.954761  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:55.954836  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:55.989289  438716 cri.go:89] found id: ""
	I0819 19:14:55.989321  438716 logs.go:276] 0 containers: []
	W0819 19:14:55.989332  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:55.989340  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:55.989406  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:56.025771  438716 cri.go:89] found id: ""
	I0819 19:14:56.025800  438716 logs.go:276] 0 containers: []
	W0819 19:14:56.025809  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:56.025816  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:56.025883  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:56.065631  438716 cri.go:89] found id: ""
	I0819 19:14:56.065673  438716 logs.go:276] 0 containers: []
	W0819 19:14:56.065686  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:56.065699  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:56.065722  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:56.119482  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:56.119523  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:56.133885  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:56.133915  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:56.207012  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:56.207033  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:56.207045  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:56.288158  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:56.288195  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:58.829677  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:58.844085  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:58.844158  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:58.880900  438716 cri.go:89] found id: ""
	I0819 19:14:58.880934  438716 logs.go:276] 0 containers: []
	W0819 19:14:58.880945  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:58.880951  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:58.881016  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:58.918833  438716 cri.go:89] found id: ""
	I0819 19:14:58.918862  438716 logs.go:276] 0 containers: []
	W0819 19:14:58.918872  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:58.918881  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:58.918939  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:58.956577  438716 cri.go:89] found id: ""
	I0819 19:14:58.956612  438716 logs.go:276] 0 containers: []
	W0819 19:14:58.956623  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:58.956634  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:58.956705  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:58.993884  438716 cri.go:89] found id: ""
	I0819 19:14:58.993914  438716 logs.go:276] 0 containers: []
	W0819 19:14:58.993923  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:58.993930  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:58.993988  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:59.031366  438716 cri.go:89] found id: ""
	I0819 19:14:59.031389  438716 logs.go:276] 0 containers: []
	W0819 19:14:59.031398  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:59.031405  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:59.031464  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:59.072014  438716 cri.go:89] found id: ""
	I0819 19:14:59.072047  438716 logs.go:276] 0 containers: []
	W0819 19:14:59.072058  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:59.072065  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:59.072129  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:59.108713  438716 cri.go:89] found id: ""
	I0819 19:14:59.108744  438716 logs.go:276] 0 containers: []
	W0819 19:14:59.108756  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:59.108765  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:59.108866  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:59.147599  438716 cri.go:89] found id: ""
	I0819 19:14:59.147634  438716 logs.go:276] 0 containers: []
	W0819 19:14:59.147647  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:59.147659  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:59.147695  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:59.224745  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:59.224781  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:59.264586  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:59.264616  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:59.317065  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:59.317104  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:59.331230  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:59.331264  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:59.398370  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:58.401471  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:00.402623  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:58.694096  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:01.193262  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:01.418063  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:03.418302  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:01.899123  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:01.912743  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:01.912824  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:01.949717  438716 cri.go:89] found id: ""
	I0819 19:15:01.949748  438716 logs.go:276] 0 containers: []
	W0819 19:15:01.949756  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:01.949763  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:01.949819  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:01.992776  438716 cri.go:89] found id: ""
	I0819 19:15:01.992802  438716 logs.go:276] 0 containers: []
	W0819 19:15:01.992812  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:01.992819  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:01.992884  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:02.030551  438716 cri.go:89] found id: ""
	I0819 19:15:02.030579  438716 logs.go:276] 0 containers: []
	W0819 19:15:02.030592  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:02.030600  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:02.030672  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:02.069927  438716 cri.go:89] found id: ""
	I0819 19:15:02.069955  438716 logs.go:276] 0 containers: []
	W0819 19:15:02.069964  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:02.069971  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:02.070031  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:02.106584  438716 cri.go:89] found id: ""
	I0819 19:15:02.106609  438716 logs.go:276] 0 containers: []
	W0819 19:15:02.106619  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:02.106629  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:02.106695  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:02.145007  438716 cri.go:89] found id: ""
	I0819 19:15:02.145035  438716 logs.go:276] 0 containers: []
	W0819 19:15:02.145044  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:02.145051  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:02.145113  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:02.180693  438716 cri.go:89] found id: ""
	I0819 19:15:02.180730  438716 logs.go:276] 0 containers: []
	W0819 19:15:02.180741  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:02.180748  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:02.180800  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:02.215563  438716 cri.go:89] found id: ""
	I0819 19:15:02.215597  438716 logs.go:276] 0 containers: []
	W0819 19:15:02.215609  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:02.215623  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:02.215641  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:02.285658  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:02.285692  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:02.285711  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:02.363620  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:02.363660  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:02.414240  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:02.414274  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:02.467336  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:02.467380  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:04.981935  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:04.995537  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:04.995611  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:05.032700  438716 cri.go:89] found id: ""
	I0819 19:15:05.032735  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.032748  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:05.032756  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:05.032827  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:05.069132  438716 cri.go:89] found id: ""
	I0819 19:15:05.069162  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.069173  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:05.069181  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:05.069247  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:05.105320  438716 cri.go:89] found id: ""
	I0819 19:15:05.105346  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.105355  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:05.105361  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:05.105421  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:05.142311  438716 cri.go:89] found id: ""
	I0819 19:15:05.142343  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.142354  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:05.142362  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:05.142412  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:05.177398  438716 cri.go:89] found id: ""
	I0819 19:15:05.177426  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.177437  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:05.177450  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:05.177506  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:05.212749  438716 cri.go:89] found id: ""
	I0819 19:15:05.212780  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.212789  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:05.212796  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:05.212854  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:05.246325  438716 cri.go:89] found id: ""
	I0819 19:15:05.246356  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.246364  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:05.246371  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:05.246420  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:05.287429  438716 cri.go:89] found id: ""
	I0819 19:15:05.287456  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.287466  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:05.287476  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:05.287489  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:05.338742  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:05.338787  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:05.352948  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:05.352978  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:05.421478  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:05.421502  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:05.421529  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:05.497772  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:05.497809  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:02.902202  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:05.403518  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:03.193491  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:05.194340  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:05.419361  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:07.918522  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:08.040403  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:08.053761  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:08.053827  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:08.087047  438716 cri.go:89] found id: ""
	I0819 19:15:08.087073  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.087082  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:08.087089  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:08.087140  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:08.122012  438716 cri.go:89] found id: ""
	I0819 19:15:08.122048  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.122059  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:08.122068  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:08.122134  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:08.155319  438716 cri.go:89] found id: ""
	I0819 19:15:08.155349  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.155360  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:08.155368  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:08.155447  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:08.196003  438716 cri.go:89] found id: ""
	I0819 19:15:08.196027  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.196035  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:08.196041  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:08.196091  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:08.230798  438716 cri.go:89] found id: ""
	I0819 19:15:08.230826  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.230836  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:08.230845  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:08.230910  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:08.267522  438716 cri.go:89] found id: ""
	I0819 19:15:08.267554  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.267562  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:08.267569  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:08.267621  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:08.304775  438716 cri.go:89] found id: ""
	I0819 19:15:08.304801  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.304809  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:08.304815  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:08.304866  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:08.344694  438716 cri.go:89] found id: ""
	I0819 19:15:08.344720  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.344734  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:08.344744  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:08.344757  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:08.383581  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:08.383619  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:08.433868  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:08.433905  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:08.447627  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:08.447657  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:08.518846  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:08.518869  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:08.518887  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:07.901746  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:09.902647  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:07.693351  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:10.193893  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:12.194400  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:09.919436  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:12.418215  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:11.104449  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:11.118149  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:11.118228  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:11.157917  438716 cri.go:89] found id: ""
	I0819 19:15:11.157951  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.157963  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:11.157971  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:11.158040  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:11.196685  438716 cri.go:89] found id: ""
	I0819 19:15:11.196711  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.196721  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:11.196729  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:11.196788  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:11.231089  438716 cri.go:89] found id: ""
	I0819 19:15:11.231124  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.231135  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:11.231144  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:11.231223  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:11.267001  438716 cri.go:89] found id: ""
	I0819 19:15:11.267032  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.267041  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:11.267048  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:11.267113  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:11.302178  438716 cri.go:89] found id: ""
	I0819 19:15:11.302210  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.302223  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:11.302232  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:11.302292  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:11.336335  438716 cri.go:89] found id: ""
	I0819 19:15:11.336368  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.336442  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:11.336458  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:11.336525  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:11.370891  438716 cri.go:89] found id: ""
	I0819 19:15:11.370926  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.370937  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:11.370945  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:11.371007  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:11.407439  438716 cri.go:89] found id: ""
	I0819 19:15:11.407466  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.407473  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:11.407482  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:11.407497  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:11.458692  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:11.458735  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:11.473104  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:11.473133  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:11.542004  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:11.542031  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:11.542050  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:11.619972  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:11.620014  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:14.159220  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:14.173135  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:14.173204  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:14.210347  438716 cri.go:89] found id: ""
	I0819 19:15:14.210377  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.210389  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:14.210398  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:14.210468  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:14.247143  438716 cri.go:89] found id: ""
	I0819 19:15:14.247169  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.247180  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:14.247187  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:14.247260  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:14.284949  438716 cri.go:89] found id: ""
	I0819 19:15:14.284981  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.284995  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:14.285003  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:14.285071  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:14.326801  438716 cri.go:89] found id: ""
	I0819 19:15:14.326826  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.326834  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:14.326842  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:14.326903  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:14.362730  438716 cri.go:89] found id: ""
	I0819 19:15:14.362764  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.362775  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:14.362783  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:14.362852  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:14.403406  438716 cri.go:89] found id: ""
	I0819 19:15:14.403437  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.403448  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:14.403456  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:14.403514  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:14.440641  438716 cri.go:89] found id: ""
	I0819 19:15:14.440670  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.440678  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:14.440685  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:14.440737  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:14.479477  438716 cri.go:89] found id: ""
	I0819 19:15:14.479511  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.479521  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:14.479530  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:14.479544  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:14.530573  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:14.530620  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:14.545329  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:14.545368  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:14.619632  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:14.619652  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:14.619680  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:14.694923  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:14.694956  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:12.401350  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:14.402845  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:14.693534  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:16.693737  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:14.420872  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:16.918227  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:18.919244  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:17.237830  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:17.250579  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:17.250645  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:17.284706  438716 cri.go:89] found id: ""
	I0819 19:15:17.284738  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.284750  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:17.284759  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:17.284832  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:17.320313  438716 cri.go:89] found id: ""
	I0819 19:15:17.320342  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.320350  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:17.320356  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:17.320419  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:17.355974  438716 cri.go:89] found id: ""
	I0819 19:15:17.356008  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.356018  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:17.356027  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:17.356093  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:17.390759  438716 cri.go:89] found id: ""
	I0819 19:15:17.390786  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.390795  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:17.390803  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:17.390861  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:17.431951  438716 cri.go:89] found id: ""
	I0819 19:15:17.431982  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.431993  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:17.432001  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:17.432068  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:17.467183  438716 cri.go:89] found id: ""
	I0819 19:15:17.467215  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.467227  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:17.467236  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:17.467306  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:17.502678  438716 cri.go:89] found id: ""
	I0819 19:15:17.502709  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.502721  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:17.502730  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:17.502801  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:17.537597  438716 cri.go:89] found id: ""
	I0819 19:15:17.537629  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.537643  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:17.537656  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:17.537672  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:17.620076  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:17.620117  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:17.659979  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:17.660009  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:17.710963  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:17.711006  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:17.725556  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:17.725590  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:17.796176  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:20.297246  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:20.311395  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:20.311476  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:20.352279  438716 cri.go:89] found id: ""
	I0819 19:15:20.352317  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.352328  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:20.352338  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:20.352401  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:20.390335  438716 cri.go:89] found id: ""
	I0819 19:15:20.390368  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.390377  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:20.390384  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:20.390450  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:20.430264  438716 cri.go:89] found id: ""
	I0819 19:15:20.430300  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.430312  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:20.430320  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:20.430386  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:20.469670  438716 cri.go:89] found id: ""
	I0819 19:15:20.469703  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.469715  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:20.469723  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:20.469790  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:20.503233  438716 cri.go:89] found id: ""
	I0819 19:15:20.503263  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.503274  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:20.503283  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:20.503371  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:16.902246  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:19.402407  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:18.693921  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:21.193124  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:21.418463  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:23.418730  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:20.538180  438716 cri.go:89] found id: ""
	I0819 19:15:20.538211  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.538223  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:20.538231  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:20.538302  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:20.573301  438716 cri.go:89] found id: ""
	I0819 19:15:20.573329  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.573337  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:20.573352  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:20.573411  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:20.606962  438716 cri.go:89] found id: ""
	I0819 19:15:20.606995  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.607007  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:20.607019  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:20.607035  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:20.658392  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:20.658428  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:20.672063  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:20.672092  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:20.747987  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:20.748010  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:20.748035  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:20.829367  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:20.829415  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:23.378885  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:23.393711  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:23.393778  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:23.430629  438716 cri.go:89] found id: ""
	I0819 19:15:23.430655  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.430665  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:23.430675  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:23.430727  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:23.467509  438716 cri.go:89] found id: ""
	I0819 19:15:23.467541  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.467552  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:23.467560  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:23.467634  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:23.505313  438716 cri.go:89] found id: ""
	I0819 19:15:23.505351  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.505359  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:23.505366  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:23.505416  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:23.543393  438716 cri.go:89] found id: ""
	I0819 19:15:23.543428  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.543441  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:23.543450  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:23.543514  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:23.578265  438716 cri.go:89] found id: ""
	I0819 19:15:23.578293  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.578301  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:23.578308  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:23.578376  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:23.613951  438716 cri.go:89] found id: ""
	I0819 19:15:23.613981  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.613989  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:23.613996  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:23.614061  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:23.647387  438716 cri.go:89] found id: ""
	I0819 19:15:23.647418  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.647426  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:23.647433  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:23.647501  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:23.682482  438716 cri.go:89] found id: ""
	I0819 19:15:23.682510  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.682519  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:23.682530  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:23.682547  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:23.696601  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:23.696629  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:23.766762  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:23.766788  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:23.766804  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:23.850947  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:23.850988  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:23.891113  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:23.891146  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
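The container checks in the cycle above can be reproduced by hand; a minimal sketch, assuming SSH access to the minikube node and crictl on the PATH, using the same flags the log shows and iterating over the components the gathering loop probes:

	# query each control-plane component the same way the log-gathering loop does
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  if [ -z "$ids" ]; then
	    echo "no container found matching \"$name\""
	  else
	    echo "$name: $ids"
	  fi
	done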
	I0819 19:15:21.902926  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:24.401874  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:23.193192  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:25.193347  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:25.919555  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:28.419920  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:26.444086  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:26.457774  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:26.457844  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:26.494525  438716 cri.go:89] found id: ""
	I0819 19:15:26.494552  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.494560  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:26.494567  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:26.494618  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:26.535317  438716 cri.go:89] found id: ""
	I0819 19:15:26.535348  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.535359  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:26.535368  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:26.535437  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:26.570853  438716 cri.go:89] found id: ""
	I0819 19:15:26.570886  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.570896  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:26.570920  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:26.570987  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:26.610739  438716 cri.go:89] found id: ""
	I0819 19:15:26.610773  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.610785  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:26.610794  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:26.610885  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:26.651274  438716 cri.go:89] found id: ""
	I0819 19:15:26.651303  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.651311  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:26.651318  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:26.651367  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:26.689963  438716 cri.go:89] found id: ""
	I0819 19:15:26.689993  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.690005  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:26.690013  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:26.690083  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:26.729433  438716 cri.go:89] found id: ""
	I0819 19:15:26.729465  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.729475  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:26.729483  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:26.729548  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:26.768386  438716 cri.go:89] found id: ""
	I0819 19:15:26.768418  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.768427  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:26.768436  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:26.768449  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:26.821526  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:26.821564  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:26.835714  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:26.835763  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:26.907981  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:26.908007  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:26.908023  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:26.991969  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:26.992008  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:29.529743  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:29.544812  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:29.544883  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:29.581455  438716 cri.go:89] found id: ""
	I0819 19:15:29.581486  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.581496  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:29.581503  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:29.581559  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:29.634542  438716 cri.go:89] found id: ""
	I0819 19:15:29.634576  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.634587  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:29.634596  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:29.634663  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:29.670388  438716 cri.go:89] found id: ""
	I0819 19:15:29.670422  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.670439  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:29.670449  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:29.670511  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:29.712267  438716 cri.go:89] found id: ""
	I0819 19:15:29.712293  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.712304  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:29.712313  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:29.712376  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:29.752392  438716 cri.go:89] found id: ""
	I0819 19:15:29.752423  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.752432  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:29.752438  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:29.752500  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:29.791734  438716 cri.go:89] found id: ""
	I0819 19:15:29.791763  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.791772  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:29.791778  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:29.791830  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:29.832882  438716 cri.go:89] found id: ""
	I0819 19:15:29.832910  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.832921  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:29.832929  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:29.832986  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:29.872035  438716 cri.go:89] found id: ""
	I0819 19:15:29.872068  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.872076  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:29.872086  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:29.872098  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:29.926551  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:29.926588  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:29.940500  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:29.940537  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:30.010327  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:30.010348  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:30.010368  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:30.090864  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:30.090910  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:26.902881  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:29.401449  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:27.692753  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:29.693161  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:32.193256  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:30.421066  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:32.918642  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:32.636291  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:32.649264  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:32.649334  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:32.683746  438716 cri.go:89] found id: ""
	I0819 19:15:32.683774  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.683785  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:32.683794  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:32.683867  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:32.723805  438716 cri.go:89] found id: ""
	I0819 19:15:32.723838  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.723850  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:32.723858  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:32.723917  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:32.758119  438716 cri.go:89] found id: ""
	I0819 19:15:32.758148  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.758157  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:32.758164  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:32.758215  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:32.792726  438716 cri.go:89] found id: ""
	I0819 19:15:32.792754  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.792768  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:32.792775  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:32.792823  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:32.829180  438716 cri.go:89] found id: ""
	I0819 19:15:32.829208  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.829217  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:32.829224  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:32.829274  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:32.869045  438716 cri.go:89] found id: ""
	I0819 19:15:32.869081  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.869093  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:32.869102  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:32.869172  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:32.904780  438716 cri.go:89] found id: ""
	I0819 19:15:32.904803  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.904811  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:32.904818  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:32.904870  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:32.940846  438716 cri.go:89] found id: ""
	I0819 19:15:32.940876  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.940886  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:32.940900  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:32.940924  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:33.008569  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:33.008592  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:33.008606  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:33.092605  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:33.092657  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:33.133016  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:33.133045  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:33.188335  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:33.188376  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:31.901719  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:34.401060  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:36.401983  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:34.193690  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:36.694042  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:34.918948  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:37.418186  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:35.704043  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:35.717647  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:35.717708  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:35.752337  438716 cri.go:89] found id: ""
	I0819 19:15:35.752364  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.752372  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:35.752378  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:35.752431  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:35.787233  438716 cri.go:89] found id: ""
	I0819 19:15:35.787261  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.787269  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:35.787275  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:35.787334  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:35.819641  438716 cri.go:89] found id: ""
	I0819 19:15:35.819667  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.819697  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:35.819705  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:35.819775  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:35.856133  438716 cri.go:89] found id: ""
	I0819 19:15:35.856160  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.856169  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:35.856176  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:35.856240  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:35.889390  438716 cri.go:89] found id: ""
	I0819 19:15:35.889422  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.889432  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:35.889438  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:35.889501  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:35.927477  438716 cri.go:89] found id: ""
	I0819 19:15:35.927519  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.927531  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:35.927539  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:35.927600  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:35.961787  438716 cri.go:89] found id: ""
	I0819 19:15:35.961825  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.961837  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:35.961845  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:35.961912  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:35.998350  438716 cri.go:89] found id: ""
	I0819 19:15:35.998384  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.998396  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:35.998407  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:35.998419  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:36.054352  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:36.054394  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:36.078278  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:36.078311  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:36.166388  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:36.166416  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:36.166433  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:36.247222  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:36.247269  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:38.786510  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:38.800306  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:38.800364  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:38.834555  438716 cri.go:89] found id: ""
	I0819 19:15:38.834583  438716 logs.go:276] 0 containers: []
	W0819 19:15:38.834591  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:38.834598  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:38.834648  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:38.869078  438716 cri.go:89] found id: ""
	I0819 19:15:38.869105  438716 logs.go:276] 0 containers: []
	W0819 19:15:38.869114  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:38.869120  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:38.869174  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:38.903702  438716 cri.go:89] found id: ""
	I0819 19:15:38.903728  438716 logs.go:276] 0 containers: []
	W0819 19:15:38.903736  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:38.903743  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:38.903795  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:38.938326  438716 cri.go:89] found id: ""
	I0819 19:15:38.938352  438716 logs.go:276] 0 containers: []
	W0819 19:15:38.938360  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:38.938367  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:38.938422  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:38.976032  438716 cri.go:89] found id: ""
	I0819 19:15:38.976063  438716 logs.go:276] 0 containers: []
	W0819 19:15:38.976075  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:38.976084  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:38.976149  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:39.009957  438716 cri.go:89] found id: ""
	I0819 19:15:39.009991  438716 logs.go:276] 0 containers: []
	W0819 19:15:39.010002  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:39.010011  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:39.010077  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:39.046381  438716 cri.go:89] found id: ""
	I0819 19:15:39.046408  438716 logs.go:276] 0 containers: []
	W0819 19:15:39.046416  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:39.046422  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:39.046474  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:39.083022  438716 cri.go:89] found id: ""
	I0819 19:15:39.083050  438716 logs.go:276] 0 containers: []
	W0819 19:15:39.083058  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:39.083067  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:39.083079  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:39.160731  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:39.160768  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:39.204846  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:39.204879  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:39.259248  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:39.259287  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:39.273764  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:39.273796  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:39.344477  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
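Each gathering pass collects the same five diagnostics; a sketch of the equivalent manual run on the node, with the commands copied from the log (while kube-apiserver is down, the describe-nodes call keeps failing with the localhost:8443 refusal shown above):

	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	sudo journalctl -u crio -n 400
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a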
	I0819 19:15:38.402275  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:40.901494  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:39.194367  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:41.692933  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:39.419291  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:41.919708  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:43.919984  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:41.845258  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:41.861691  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:41.861754  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:41.908235  438716 cri.go:89] found id: ""
	I0819 19:15:41.908269  438716 logs.go:276] 0 containers: []
	W0819 19:15:41.908281  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:41.908289  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:41.908357  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:41.965631  438716 cri.go:89] found id: ""
	I0819 19:15:41.965657  438716 logs.go:276] 0 containers: []
	W0819 19:15:41.965667  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:41.965673  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:41.965732  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:42.004540  438716 cri.go:89] found id: ""
	I0819 19:15:42.004569  438716 logs.go:276] 0 containers: []
	W0819 19:15:42.004578  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:42.004585  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:42.004650  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:42.042189  438716 cri.go:89] found id: ""
	I0819 19:15:42.042215  438716 logs.go:276] 0 containers: []
	W0819 19:15:42.042224  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:42.042231  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:42.042299  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:42.079313  438716 cri.go:89] found id: ""
	I0819 19:15:42.079349  438716 logs.go:276] 0 containers: []
	W0819 19:15:42.079361  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:42.079370  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:42.079450  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:42.116130  438716 cri.go:89] found id: ""
	I0819 19:15:42.116164  438716 logs.go:276] 0 containers: []
	W0819 19:15:42.116176  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:42.116184  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:42.116253  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:42.154886  438716 cri.go:89] found id: ""
	I0819 19:15:42.154919  438716 logs.go:276] 0 containers: []
	W0819 19:15:42.154928  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:42.154935  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:42.154987  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:42.191204  438716 cri.go:89] found id: ""
	I0819 19:15:42.191237  438716 logs.go:276] 0 containers: []
	W0819 19:15:42.191248  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:42.191258  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:42.191275  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:42.244395  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:42.244434  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:42.258029  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:42.258066  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:42.323461  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:42.323481  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:42.323498  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:42.401932  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:42.401969  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:44.943615  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:44.958243  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:44.958315  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:44.995181  438716 cri.go:89] found id: ""
	I0819 19:15:44.995217  438716 logs.go:276] 0 containers: []
	W0819 19:15:44.995236  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:44.995244  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:44.995309  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:45.030705  438716 cri.go:89] found id: ""
	I0819 19:15:45.030743  438716 logs.go:276] 0 containers: []
	W0819 19:15:45.030752  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:45.030759  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:45.030814  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:45.068186  438716 cri.go:89] found id: ""
	I0819 19:15:45.068215  438716 logs.go:276] 0 containers: []
	W0819 19:15:45.068224  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:45.068231  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:45.068314  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:45.105415  438716 cri.go:89] found id: ""
	I0819 19:15:45.105443  438716 logs.go:276] 0 containers: []
	W0819 19:15:45.105452  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:45.105458  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:45.105517  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:45.143628  438716 cri.go:89] found id: ""
	I0819 19:15:45.143662  438716 logs.go:276] 0 containers: []
	W0819 19:15:45.143694  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:45.143704  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:45.143771  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:45.184896  438716 cri.go:89] found id: ""
	I0819 19:15:45.184922  438716 logs.go:276] 0 containers: []
	W0819 19:15:45.184930  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:45.184937  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:45.185000  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:45.222599  438716 cri.go:89] found id: ""
	I0819 19:15:45.222631  438716 logs.go:276] 0 containers: []
	W0819 19:15:45.222639  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:45.222645  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:45.222700  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:45.260310  438716 cri.go:89] found id: ""
	I0819 19:15:45.260341  438716 logs.go:276] 0 containers: []
	W0819 19:15:45.260352  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:45.260361  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:45.260379  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:45.273687  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:45.273718  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:45.351367  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:45.351390  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:45.351407  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:45.428751  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:45.428787  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:45.468830  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:45.468869  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:42.902576  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:45.402812  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:43.693205  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:46.192804  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:46.419903  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:48.918620  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:48.023654  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:48.037206  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:48.037294  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:48.071647  438716 cri.go:89] found id: ""
	I0819 19:15:48.071686  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.071695  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:48.071704  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:48.071765  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:48.106542  438716 cri.go:89] found id: ""
	I0819 19:15:48.106575  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.106586  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:48.106596  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:48.106662  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:48.151917  438716 cri.go:89] found id: ""
	I0819 19:15:48.151949  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.151959  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:48.151966  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:48.152022  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:48.190095  438716 cri.go:89] found id: ""
	I0819 19:15:48.190125  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.190137  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:48.190146  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:48.190211  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:48.227193  438716 cri.go:89] found id: ""
	I0819 19:15:48.227228  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.227240  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:48.227248  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:48.227317  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:48.261353  438716 cri.go:89] found id: ""
	I0819 19:15:48.261386  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.261396  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:48.261403  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:48.261455  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:48.295749  438716 cri.go:89] found id: ""
	I0819 19:15:48.295782  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.295794  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:48.295803  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:48.295874  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:48.338350  438716 cri.go:89] found id: ""
	I0819 19:15:48.338383  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.338394  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:48.338404  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:48.338420  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:48.420705  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:48.420749  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:48.464114  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:48.464153  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:48.519461  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:48.519505  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:48.534324  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:48.534357  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:48.603580  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:47.900813  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:49.902363  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:48.194425  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:50.693598  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:51.419909  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:53.918494  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:51.104343  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:51.117552  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:51.117629  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:51.150630  438716 cri.go:89] found id: ""
	I0819 19:15:51.150665  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.150677  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:51.150691  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:51.150765  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:51.184316  438716 cri.go:89] found id: ""
	I0819 19:15:51.184346  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.184356  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:51.184362  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:51.184410  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:51.221252  438716 cri.go:89] found id: ""
	I0819 19:15:51.221277  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.221286  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:51.221292  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:51.221349  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:51.255727  438716 cri.go:89] found id: ""
	I0819 19:15:51.255755  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.255763  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:51.255769  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:51.255823  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:51.290615  438716 cri.go:89] found id: ""
	I0819 19:15:51.290651  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.290660  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:51.290667  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:51.290721  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:51.326895  438716 cri.go:89] found id: ""
	I0819 19:15:51.326922  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.326930  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:51.326937  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:51.326987  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:51.365516  438716 cri.go:89] found id: ""
	I0819 19:15:51.365547  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.365558  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:51.365566  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:51.365632  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:51.399002  438716 cri.go:89] found id: ""
	I0819 19:15:51.399030  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.399038  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:51.399048  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:51.399059  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:51.453481  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:51.453524  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:51.467246  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:51.467277  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:51.548547  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:51.548578  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:51.548595  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:51.635627  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:51.635670  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:54.175003  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:54.190462  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:54.190537  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:54.232140  438716 cri.go:89] found id: ""
	I0819 19:15:54.232168  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.232178  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:54.232186  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:54.232254  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:54.267700  438716 cri.go:89] found id: ""
	I0819 19:15:54.267732  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.267742  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:54.267748  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:54.267807  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:54.306272  438716 cri.go:89] found id: ""
	I0819 19:15:54.306300  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.306308  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:54.306315  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:54.306368  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:54.341503  438716 cri.go:89] found id: ""
	I0819 19:15:54.341536  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.341549  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:54.341556  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:54.341609  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:54.375535  438716 cri.go:89] found id: ""
	I0819 19:15:54.375570  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.375582  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:54.375591  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:54.375661  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:54.409611  438716 cri.go:89] found id: ""
	I0819 19:15:54.409641  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.409653  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:54.409662  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:54.409731  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:54.444318  438716 cri.go:89] found id: ""
	I0819 19:15:54.444346  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.444358  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:54.444366  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:54.444425  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:54.480746  438716 cri.go:89] found id: ""
	I0819 19:15:54.480777  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.480789  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:54.480802  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:54.480817  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:54.534209  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:54.534245  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:54.549557  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:54.549598  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:54.625086  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:54.625111  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:54.625136  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:54.705549  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:54.705589  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:52.401150  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:54.402049  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:56.402545  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:52.693826  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:54.694875  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:57.193741  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:56.418166  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:58.418955  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:57.257440  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:57.276724  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:57.276812  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:57.319032  438716 cri.go:89] found id: ""
	I0819 19:15:57.319062  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.319073  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:57.319081  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:57.319163  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:57.357093  438716 cri.go:89] found id: ""
	I0819 19:15:57.357129  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.357140  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:57.357152  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:57.357222  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:57.393978  438716 cri.go:89] found id: ""
	I0819 19:15:57.394013  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.394025  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:57.394033  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:57.394102  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:57.428731  438716 cri.go:89] found id: ""
	I0819 19:15:57.428760  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.428768  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:57.428775  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:57.428824  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:57.467772  438716 cri.go:89] found id: ""
	I0819 19:15:57.467810  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.467822  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:57.467832  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:57.467904  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:57.502398  438716 cri.go:89] found id: ""
	I0819 19:15:57.502434  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.502444  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:57.502450  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:57.502503  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:57.536729  438716 cri.go:89] found id: ""
	I0819 19:15:57.536760  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.536771  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:57.536779  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:57.536845  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:57.574738  438716 cri.go:89] found id: ""
	I0819 19:15:57.574762  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.574770  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:57.574780  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:57.574793  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:57.630063  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:57.630113  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:57.643083  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:57.643111  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:57.725081  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:57.725104  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:57.725118  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:57.805065  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:57.805105  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:00.344557  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:00.357940  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:00.358005  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:00.399319  438716 cri.go:89] found id: ""
	I0819 19:16:00.399355  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.399368  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:00.399377  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:00.399446  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:00.444223  438716 cri.go:89] found id: ""
	I0819 19:16:00.444254  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.444264  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:00.444271  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:00.444323  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:00.479903  438716 cri.go:89] found id: ""
	I0819 19:16:00.479932  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.479942  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:00.479948  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:00.480003  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:00.515923  438716 cri.go:89] found id: ""
	I0819 19:16:00.515954  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.515966  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:00.515974  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:00.516043  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:58.901349  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:00.902114  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:59.194660  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:01.693174  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:00.419210  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:02.918814  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:00.551319  438716 cri.go:89] found id: ""
	I0819 19:16:00.551348  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.551360  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:00.551370  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:00.551434  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:00.587847  438716 cri.go:89] found id: ""
	I0819 19:16:00.587882  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.587892  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:00.587901  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:00.587976  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:00.624769  438716 cri.go:89] found id: ""
	I0819 19:16:00.624800  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.624812  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:00.624820  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:00.624894  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:00.659300  438716 cri.go:89] found id: ""
	I0819 19:16:00.659330  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.659342  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:00.659355  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:00.659371  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:00.739073  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:00.739113  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:00.779087  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:00.779116  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:00.831864  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:00.831914  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:00.845832  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:00.845863  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:00.920622  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:03.420751  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:03.434599  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:03.434664  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:03.469288  438716 cri.go:89] found id: ""
	I0819 19:16:03.469326  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.469349  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:03.469372  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:03.469445  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:03.507885  438716 cri.go:89] found id: ""
	I0819 19:16:03.507911  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.507927  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:03.507934  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:03.507987  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:03.543805  438716 cri.go:89] found id: ""
	I0819 19:16:03.543837  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.543847  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:03.543854  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:03.543928  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:03.584060  438716 cri.go:89] found id: ""
	I0819 19:16:03.584093  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.584105  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:03.584114  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:03.584202  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:03.619724  438716 cri.go:89] found id: ""
	I0819 19:16:03.619758  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.619769  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:03.619776  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:03.619854  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:03.657180  438716 cri.go:89] found id: ""
	I0819 19:16:03.657213  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.657225  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:03.657234  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:03.657303  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:03.695099  438716 cri.go:89] found id: ""
	I0819 19:16:03.695125  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.695134  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:03.695139  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:03.695193  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:03.730263  438716 cri.go:89] found id: ""
	I0819 19:16:03.730291  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.730302  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:03.730314  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:03.730331  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:03.780776  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:03.780816  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:03.795381  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:03.795419  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:03.869995  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:03.870016  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:03.870029  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:03.949654  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:03.949691  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:03.402500  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:05.902412  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:03.694220  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:06.193280  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:04.919284  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:07.418061  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:06.493589  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:06.506758  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:06.506834  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:06.545325  438716 cri.go:89] found id: ""
	I0819 19:16:06.545357  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.545370  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:06.545378  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:06.545443  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:06.581708  438716 cri.go:89] found id: ""
	I0819 19:16:06.581741  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.581753  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:06.581761  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:06.581828  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:06.626543  438716 cri.go:89] found id: ""
	I0819 19:16:06.626588  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.626600  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:06.626609  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:06.626676  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:06.662466  438716 cri.go:89] found id: ""
	I0819 19:16:06.662499  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.662509  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:06.662518  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:06.662585  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:06.701584  438716 cri.go:89] found id: ""
	I0819 19:16:06.701619  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.701628  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:06.701635  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:06.701688  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:06.736245  438716 cri.go:89] found id: ""
	I0819 19:16:06.736280  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.736292  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:06.736300  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:06.736392  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:06.774411  438716 cri.go:89] found id: ""
	I0819 19:16:06.774439  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.774447  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:06.774454  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:06.774510  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:06.809560  438716 cri.go:89] found id: ""
	I0819 19:16:06.809597  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.809609  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:06.809624  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:06.809648  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:06.884841  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:06.884862  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:06.884878  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:06.971467  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:06.971507  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:07.010737  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:07.010767  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:07.063807  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:07.063846  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:09.578451  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:09.591643  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:09.591737  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:09.625607  438716 cri.go:89] found id: ""
	I0819 19:16:09.625639  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.625650  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:09.625659  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:09.625727  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:09.669145  438716 cri.go:89] found id: ""
	I0819 19:16:09.669177  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.669185  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:09.669191  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:09.669254  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:09.707035  438716 cri.go:89] found id: ""
	I0819 19:16:09.707064  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.707073  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:09.707080  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:09.707142  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:09.742089  438716 cri.go:89] found id: ""
	I0819 19:16:09.742116  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.742125  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:09.742132  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:09.742193  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:09.782736  438716 cri.go:89] found id: ""
	I0819 19:16:09.782774  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.782785  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:09.782794  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:09.782860  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:09.818003  438716 cri.go:89] found id: ""
	I0819 19:16:09.818031  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.818040  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:09.818047  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:09.818110  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:09.852716  438716 cri.go:89] found id: ""
	I0819 19:16:09.852748  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.852757  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:09.852764  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:09.852828  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:09.887176  438716 cri.go:89] found id: ""
	I0819 19:16:09.887206  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.887218  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:09.887230  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:09.887247  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:09.901547  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:09.901573  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:09.969153  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:09.969190  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:09.969205  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:10.053777  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:10.053820  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:10.100888  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:10.100916  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:08.401650  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:10.402279  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:08.194305  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:10.693097  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:09.418856  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:11.918836  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:12.655112  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:12.667824  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:12.667897  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:12.702337  438716 cri.go:89] found id: ""
	I0819 19:16:12.702364  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.702373  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:12.702379  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:12.702432  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:12.736628  438716 cri.go:89] found id: ""
	I0819 19:16:12.736655  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.736663  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:12.736669  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:12.736720  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:12.773598  438716 cri.go:89] found id: ""
	I0819 19:16:12.773628  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.773636  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:12.773643  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:12.773695  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:12.806584  438716 cri.go:89] found id: ""
	I0819 19:16:12.806620  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.806632  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:12.806640  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:12.806723  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:12.840535  438716 cri.go:89] found id: ""
	I0819 19:16:12.840561  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.840569  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:12.840575  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:12.840639  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:12.877680  438716 cri.go:89] found id: ""
	I0819 19:16:12.877712  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.877721  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:12.877728  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:12.877779  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:12.912226  438716 cri.go:89] found id: ""
	I0819 19:16:12.912253  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.912264  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:12.912272  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:12.912342  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:12.953463  438716 cri.go:89] found id: ""
	I0819 19:16:12.953493  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.953504  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:12.953524  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:12.953542  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:13.007648  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:13.007691  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:13.022452  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:13.022494  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:13.092411  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:13.092439  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:13.092455  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:13.168711  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:13.168750  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:12.903478  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:15.402551  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:12.693162  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:14.698051  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:17.193988  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:14.417821  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:16.418541  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:18.918478  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:15.711501  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:15.724841  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:15.724921  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:15.760120  438716 cri.go:89] found id: ""
	I0819 19:16:15.760149  438716 logs.go:276] 0 containers: []
	W0819 19:16:15.760158  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:15.760166  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:15.760234  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:15.794959  438716 cri.go:89] found id: ""
	I0819 19:16:15.794988  438716 logs.go:276] 0 containers: []
	W0819 19:16:15.794996  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:15.795002  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:15.795054  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:15.842776  438716 cri.go:89] found id: ""
	I0819 19:16:15.842804  438716 logs.go:276] 0 containers: []
	W0819 19:16:15.842814  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:15.842820  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:15.842874  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:15.882134  438716 cri.go:89] found id: ""
	I0819 19:16:15.882167  438716 logs.go:276] 0 containers: []
	W0819 19:16:15.882178  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:15.882187  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:15.882251  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:15.919296  438716 cri.go:89] found id: ""
	I0819 19:16:15.919325  438716 logs.go:276] 0 containers: []
	W0819 19:16:15.919336  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:15.919345  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:15.919409  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:15.956401  438716 cri.go:89] found id: ""
	I0819 19:16:15.956429  438716 logs.go:276] 0 containers: []
	W0819 19:16:15.956437  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:15.956444  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:15.956507  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:15.994271  438716 cri.go:89] found id: ""
	I0819 19:16:15.994304  438716 logs.go:276] 0 containers: []
	W0819 19:16:15.994314  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:15.994320  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:15.994378  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:16.033685  438716 cri.go:89] found id: ""
	I0819 19:16:16.033714  438716 logs.go:276] 0 containers: []
	W0819 19:16:16.033724  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:16.033736  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:16.033754  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:16.083929  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:16.083964  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:16.107309  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:16.107342  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:16.193657  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:16.193681  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:16.193697  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:16.276974  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:16.277016  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:18.818532  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:18.831586  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:18.831655  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:18.866663  438716 cri.go:89] found id: ""
	I0819 19:16:18.866689  438716 logs.go:276] 0 containers: []
	W0819 19:16:18.866700  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:18.866709  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:18.866769  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:18.900711  438716 cri.go:89] found id: ""
	I0819 19:16:18.900746  438716 logs.go:276] 0 containers: []
	W0819 19:16:18.900757  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:18.900765  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:18.900849  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:18.935156  438716 cri.go:89] found id: ""
	I0819 19:16:18.935179  438716 logs.go:276] 0 containers: []
	W0819 19:16:18.935186  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:18.935193  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:18.935246  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:18.973853  438716 cri.go:89] found id: ""
	I0819 19:16:18.973889  438716 logs.go:276] 0 containers: []
	W0819 19:16:18.973902  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:18.973911  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:18.973978  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:19.014212  438716 cri.go:89] found id: ""
	I0819 19:16:19.014241  438716 logs.go:276] 0 containers: []
	W0819 19:16:19.014250  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:19.014255  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:19.014317  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:19.056089  438716 cri.go:89] found id: ""
	I0819 19:16:19.056125  438716 logs.go:276] 0 containers: []
	W0819 19:16:19.056137  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:19.056146  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:19.056211  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:19.091372  438716 cri.go:89] found id: ""
	I0819 19:16:19.091399  438716 logs.go:276] 0 containers: []
	W0819 19:16:19.091411  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:19.091420  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:19.091478  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:19.129737  438716 cri.go:89] found id: ""
	I0819 19:16:19.129767  438716 logs.go:276] 0 containers: []
	W0819 19:16:19.129777  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:19.129787  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:19.129800  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:19.207325  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:19.207360  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:19.247780  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:19.247816  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:19.302496  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:19.302543  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:19.317706  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:19.317739  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:19.395029  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:17.901762  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:19.901818  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:19.195079  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:21.693863  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:21.418534  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:23.420217  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:21.895538  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:21.910595  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:21.910658  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:21.948363  438716 cri.go:89] found id: ""
	I0819 19:16:21.948398  438716 logs.go:276] 0 containers: []
	W0819 19:16:21.948410  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:21.948419  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:21.948492  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:21.983391  438716 cri.go:89] found id: ""
	I0819 19:16:21.983428  438716 logs.go:276] 0 containers: []
	W0819 19:16:21.983440  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:21.983449  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:21.983520  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:22.022383  438716 cri.go:89] found id: ""
	I0819 19:16:22.022415  438716 logs.go:276] 0 containers: []
	W0819 19:16:22.022427  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:22.022436  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:22.022493  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:22.060676  438716 cri.go:89] found id: ""
	I0819 19:16:22.060707  438716 logs.go:276] 0 containers: []
	W0819 19:16:22.060716  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:22.060725  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:22.060778  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:22.095188  438716 cri.go:89] found id: ""
	I0819 19:16:22.095218  438716 logs.go:276] 0 containers: []
	W0819 19:16:22.095227  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:22.095234  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:22.095300  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:22.131164  438716 cri.go:89] found id: ""
	I0819 19:16:22.131192  438716 logs.go:276] 0 containers: []
	W0819 19:16:22.131200  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:22.131209  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:22.131275  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:22.166539  438716 cri.go:89] found id: ""
	I0819 19:16:22.166566  438716 logs.go:276] 0 containers: []
	W0819 19:16:22.166573  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:22.166580  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:22.166643  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:22.205604  438716 cri.go:89] found id: ""
	I0819 19:16:22.205631  438716 logs.go:276] 0 containers: []
	W0819 19:16:22.205640  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:22.205649  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:22.205662  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:22.265650  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:22.265689  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:22.280401  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:22.280443  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:22.356818  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:22.356851  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:22.356872  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:22.437678  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:22.437719  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:24.979655  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:24.993462  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:24.993526  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:25.029955  438716 cri.go:89] found id: ""
	I0819 19:16:25.029983  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.029992  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:25.029999  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:25.030049  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:25.068478  438716 cri.go:89] found id: ""
	I0819 19:16:25.068507  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.068518  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:25.068527  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:25.068594  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:25.105209  438716 cri.go:89] found id: ""
	I0819 19:16:25.105238  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.105247  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:25.105256  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:25.105327  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:25.143166  438716 cri.go:89] found id: ""
	I0819 19:16:25.143203  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.143218  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:25.143225  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:25.143279  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:25.177993  438716 cri.go:89] found id: ""
	I0819 19:16:25.178023  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.178035  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:25.178044  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:25.178129  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:25.216473  438716 cri.go:89] found id: ""
	I0819 19:16:25.216501  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.216523  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:25.216540  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:25.216603  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:25.251454  438716 cri.go:89] found id: ""
	I0819 19:16:25.251486  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.251495  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:25.251501  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:25.251555  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:25.287145  438716 cri.go:89] found id: ""
	I0819 19:16:25.287179  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.287188  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:25.287198  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:25.287210  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:25.371571  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:25.371619  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:25.418247  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:25.418277  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:25.472209  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:25.472248  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:25.486286  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:25.486315  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 19:16:21.902887  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:23.904358  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:26.403026  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:24.193797  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:26.194535  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:25.919371  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:28.418267  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	W0819 19:16:25.554470  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
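The block above is the same failure repeated on every log-gathering pass in this run: crictl finds no kube-apiserver (or any other control-plane) container, so the bundled kubectl cannot reach localhost:8443 and "describe nodes" exits 1 with connection refused. A minimal manual cross-check on the node could look like the sketch below; only the crictl call and the 8443 port come from the log, the ss and curl probes are assumptions added for illustration.

  # confirm no apiserver container exists (same check the log performs via crictl)
  sudo crictl ps -a --name kube-apiserver
  # assumed check: confirm nothing is listening on the apiserver port taken from the error above
  sudo ss -ltnp | grep 8443
  # assumed check: probe the healthz endpoint; expect "connection refused" while the control plane is down
  curl -sk https://localhost:8443/healthz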
	I0819 19:16:28.055382  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:28.068750  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:28.068827  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:28.101856  438716 cri.go:89] found id: ""
	I0819 19:16:28.101891  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.101903  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:28.101912  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:28.101977  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:28.136402  438716 cri.go:89] found id: ""
	I0819 19:16:28.136437  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.136449  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:28.136460  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:28.136528  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:28.171766  438716 cri.go:89] found id: ""
	I0819 19:16:28.171795  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.171803  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:28.171809  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:28.171864  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:28.206228  438716 cri.go:89] found id: ""
	I0819 19:16:28.206256  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.206264  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:28.206272  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:28.206337  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:28.248877  438716 cri.go:89] found id: ""
	I0819 19:16:28.248912  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.248923  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:28.248931  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:28.249002  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:28.290160  438716 cri.go:89] found id: ""
	I0819 19:16:28.290201  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.290212  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:28.290221  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:28.290287  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:28.340413  438716 cri.go:89] found id: ""
	I0819 19:16:28.340445  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.340454  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:28.340461  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:28.340513  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:28.385486  438716 cri.go:89] found id: ""
	I0819 19:16:28.385513  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.385521  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:28.385532  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:28.385544  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:28.441987  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:28.442029  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:28.456509  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:28.456538  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:28.527941  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:28.527976  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:28.527993  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:28.612696  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:28.612738  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:28.901312  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:30.901640  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:28.693578  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:30.693686  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:30.418811  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:32.919696  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:31.154773  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:31.168718  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:31.168789  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:31.205365  438716 cri.go:89] found id: ""
	I0819 19:16:31.205399  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.205411  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:31.205419  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:31.205496  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:31.238829  438716 cri.go:89] found id: ""
	I0819 19:16:31.238871  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.238879  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:31.238886  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:31.238936  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:31.273229  438716 cri.go:89] found id: ""
	I0819 19:16:31.273259  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.273304  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:31.273313  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:31.273377  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:31.309559  438716 cri.go:89] found id: ""
	I0819 19:16:31.309601  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.309613  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:31.309622  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:31.309689  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:31.344939  438716 cri.go:89] found id: ""
	I0819 19:16:31.344971  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.344981  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:31.344987  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:31.345043  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:31.382423  438716 cri.go:89] found id: ""
	I0819 19:16:31.382455  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.382468  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:31.382474  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:31.382525  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:31.420148  438716 cri.go:89] found id: ""
	I0819 19:16:31.420174  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.420184  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:31.420192  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:31.420262  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:31.455691  438716 cri.go:89] found id: ""
	I0819 19:16:31.455720  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.455730  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:31.455740  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:31.455753  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:31.509501  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:31.509549  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:31.523650  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:31.523693  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:31.591535  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:31.591557  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:31.591574  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:31.674038  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:31.674077  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:34.216506  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:34.232782  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:34.232875  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:34.286103  438716 cri.go:89] found id: ""
	I0819 19:16:34.286136  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.286147  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:34.286156  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:34.286221  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:34.324193  438716 cri.go:89] found id: ""
	I0819 19:16:34.324220  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.324229  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:34.324235  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:34.324292  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:34.382777  438716 cri.go:89] found id: ""
	I0819 19:16:34.382804  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.382814  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:34.382822  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:34.382887  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:34.420714  438716 cri.go:89] found id: ""
	I0819 19:16:34.420743  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.420753  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:34.420771  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:34.420840  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:34.455338  438716 cri.go:89] found id: ""
	I0819 19:16:34.455369  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.455381  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:34.455391  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:34.455467  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:34.489528  438716 cri.go:89] found id: ""
	I0819 19:16:34.489566  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.489575  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:34.489581  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:34.489634  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:34.523830  438716 cri.go:89] found id: ""
	I0819 19:16:34.523857  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.523866  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:34.523873  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:34.523940  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:34.559023  438716 cri.go:89] found id: ""
	I0819 19:16:34.559052  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.559063  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:34.559077  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:34.559092  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:34.639116  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:34.639159  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:34.675990  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:34.676017  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:34.730900  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:34.730935  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:34.744938  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:34.744964  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:34.816267  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:32.902138  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:35.401865  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:32.696537  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:35.192648  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:35.687633  438245 pod_ready.go:82] duration metric: took 4m0.000667446s for pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace to be "Ready" ...
	E0819 19:16:35.687688  438245 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0819 19:16:35.687715  438245 pod_ready.go:39] duration metric: took 4m13.552784118s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:16:35.687770  438245 kubeadm.go:597] duration metric: took 4m20.936149722s to restartPrimaryControlPlane
	W0819 19:16:35.687875  438245 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 19:16:35.687929  438245 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
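At this point the 438245 run has used its full 4-minute WaitExtra budget without metrics-server-6867b74b74-5hlnx ever reporting Ready, so it gives up on restarting the existing control plane and falls back to wiping it. The reset half is exactly the command logged above; the outline below restates it for readability, and the re-init step is an assumption in the sense that its command and config file are not part of this excerpt.

  # the reset minikube logs above: bundled kubeadm, CRI-O socket, forced
  sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" \
    kubeadm reset --cri-socket /var/run/crio/crio.sock --force
  # a fresh "kubeadm init" with minikube's generated config follows in the normal flow
  # (assumed step; that command and its config path are not shown in this excerpt)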
	I0819 19:16:35.419327  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:37.420007  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:37.317314  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:37.331915  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:37.331982  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:37.370233  438716 cri.go:89] found id: ""
	I0819 19:16:37.370261  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.370269  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:37.370276  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:37.370343  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:37.409042  438716 cri.go:89] found id: ""
	I0819 19:16:37.409071  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.409082  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:37.409090  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:37.409161  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:37.445903  438716 cri.go:89] found id: ""
	I0819 19:16:37.445932  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.445941  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:37.445948  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:37.445999  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:37.484275  438716 cri.go:89] found id: ""
	I0819 19:16:37.484318  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.484328  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:37.484334  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:37.484393  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:37.528131  438716 cri.go:89] found id: ""
	I0819 19:16:37.528161  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.528174  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:37.528180  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:37.528243  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:37.563374  438716 cri.go:89] found id: ""
	I0819 19:16:37.563406  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.563414  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:37.563421  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:37.563473  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:37.597234  438716 cri.go:89] found id: ""
	I0819 19:16:37.597260  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.597267  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:37.597274  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:37.597329  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:37.634809  438716 cri.go:89] found id: ""
	I0819 19:16:37.634845  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.634854  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:37.634864  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:37.634879  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:37.704354  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:37.704380  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:37.704396  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:37.788606  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:37.788646  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:37.830486  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:37.830513  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:37.890642  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:37.890681  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:40.405473  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:40.420019  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:40.420094  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:40.458558  438716 cri.go:89] found id: ""
	I0819 19:16:40.458586  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.458598  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:40.458606  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:40.458671  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:40.500353  438716 cri.go:89] found id: ""
	I0819 19:16:40.500379  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.500388  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:40.500394  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:40.500445  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:37.901881  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:39.902097  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:39.918877  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:41.919112  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:43.920092  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:40.534281  438716 cri.go:89] found id: ""
	I0819 19:16:40.534307  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.534316  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:40.534322  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:40.534379  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:40.569537  438716 cri.go:89] found id: ""
	I0819 19:16:40.569568  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.569578  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:40.569587  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:40.569654  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:40.603066  438716 cri.go:89] found id: ""
	I0819 19:16:40.603097  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.603110  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:40.603118  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:40.603171  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:40.637598  438716 cri.go:89] found id: ""
	I0819 19:16:40.637628  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.637637  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:40.637643  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:40.637704  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:40.673583  438716 cri.go:89] found id: ""
	I0819 19:16:40.673616  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.673629  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:40.673637  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:40.673692  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:40.708324  438716 cri.go:89] found id: ""
	I0819 19:16:40.708354  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.708363  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:40.708373  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:40.708387  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:40.789743  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:40.789782  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:40.830849  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:40.830884  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:40.882662  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:40.882700  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:40.896843  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:40.896869  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:40.969491  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:43.470579  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:43.483791  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:43.483876  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:43.523764  438716 cri.go:89] found id: ""
	I0819 19:16:43.523797  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.523809  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:43.523817  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:43.523882  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:43.557925  438716 cri.go:89] found id: ""
	I0819 19:16:43.557953  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.557960  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:43.557966  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:43.558017  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:43.591324  438716 cri.go:89] found id: ""
	I0819 19:16:43.591355  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.591364  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:43.591370  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:43.591421  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:43.625798  438716 cri.go:89] found id: ""
	I0819 19:16:43.625826  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.625834  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:43.625840  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:43.625898  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:43.659787  438716 cri.go:89] found id: ""
	I0819 19:16:43.659815  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.659823  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:43.659830  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:43.659882  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:43.692982  438716 cri.go:89] found id: ""
	I0819 19:16:43.693008  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.693017  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:43.693024  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:43.693075  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:43.726059  438716 cri.go:89] found id: ""
	I0819 19:16:43.726092  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.726104  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:43.726113  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:43.726187  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:43.760906  438716 cri.go:89] found id: ""
	I0819 19:16:43.760947  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.760958  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:43.760971  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:43.760994  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:43.812249  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:43.812285  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:43.826538  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:43.826566  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:43.894904  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:43.894926  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:43.894941  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:43.975746  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:43.975796  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:41.902398  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:43.902728  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:46.401834  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:46.419345  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:48.918688  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:46.515329  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:46.529088  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:46.529170  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:46.564525  438716 cri.go:89] found id: ""
	I0819 19:16:46.564557  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.564570  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:46.564578  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:46.564647  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:46.598457  438716 cri.go:89] found id: ""
	I0819 19:16:46.598485  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.598494  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:46.598499  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:46.598549  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:46.631767  438716 cri.go:89] found id: ""
	I0819 19:16:46.631798  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.631807  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:46.631814  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:46.631867  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:46.664978  438716 cri.go:89] found id: ""
	I0819 19:16:46.665013  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.665026  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:46.665034  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:46.665094  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:46.701024  438716 cri.go:89] found id: ""
	I0819 19:16:46.701052  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.701061  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:46.701067  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:46.701132  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:46.735834  438716 cri.go:89] found id: ""
	I0819 19:16:46.735874  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.735886  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:46.735894  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:46.735978  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:46.773392  438716 cri.go:89] found id: ""
	I0819 19:16:46.773426  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.773437  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:46.773445  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:46.773498  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:46.819800  438716 cri.go:89] found id: ""
	I0819 19:16:46.819829  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.819841  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:46.819869  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:46.819889  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:46.860633  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:46.860669  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:46.911895  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:46.911936  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:46.927388  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:46.927422  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:46.998601  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:46.998628  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:46.998645  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:49.585303  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:49.598962  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:49.599032  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:49.631891  438716 cri.go:89] found id: ""
	I0819 19:16:49.631920  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.631931  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:49.631940  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:49.631998  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:49.671731  438716 cri.go:89] found id: ""
	I0819 19:16:49.671761  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.671777  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:49.671786  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:49.671846  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:49.707517  438716 cri.go:89] found id: ""
	I0819 19:16:49.707556  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.707568  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:49.707578  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:49.707651  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:49.744255  438716 cri.go:89] found id: ""
	I0819 19:16:49.744289  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.744299  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:49.744305  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:49.744357  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:49.779224  438716 cri.go:89] found id: ""
	I0819 19:16:49.779252  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.779259  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:49.779266  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:49.779322  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:49.815641  438716 cri.go:89] found id: ""
	I0819 19:16:49.815689  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.815701  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:49.815711  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:49.815769  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:49.851861  438716 cri.go:89] found id: ""
	I0819 19:16:49.851894  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.851906  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:49.851915  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:49.851984  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:49.888140  438716 cri.go:89] found id: ""
	I0819 19:16:49.888173  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.888186  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:49.888199  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:49.888215  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:49.940389  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:49.940430  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:49.954519  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:49.954553  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:50.028462  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:50.028486  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:50.028502  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:50.108319  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:50.108362  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:48.901902  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:50.902702  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:50.919079  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:52.919271  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:52.647146  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:52.660468  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:52.660558  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:52.697665  438716 cri.go:89] found id: ""
	I0819 19:16:52.697703  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.697719  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:52.697727  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:52.697786  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:52.739169  438716 cri.go:89] found id: ""
	I0819 19:16:52.739203  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.739214  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:52.739222  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:52.739289  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:52.776580  438716 cri.go:89] found id: ""
	I0819 19:16:52.776610  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.776619  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:52.776630  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:52.776683  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:52.813443  438716 cri.go:89] found id: ""
	I0819 19:16:52.813475  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.813488  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:52.813497  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:52.813557  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:52.848035  438716 cri.go:89] found id: ""
	I0819 19:16:52.848064  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.848075  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:52.848082  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:52.848150  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:52.881814  438716 cri.go:89] found id: ""
	I0819 19:16:52.881841  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.881858  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:52.881867  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:52.881930  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:52.922179  438716 cri.go:89] found id: ""
	I0819 19:16:52.922202  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.922210  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:52.922216  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:52.922277  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:52.958110  438716 cri.go:89] found id: ""
	I0819 19:16:52.958136  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.958144  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:52.958153  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:52.958167  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:53.008553  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:53.008592  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:53.022826  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:53.022860  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:53.094940  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:53.094967  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:53.094982  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:53.173877  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:53.173920  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:53.403382  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:55.905504  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:55.419297  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:55.419331  438295 pod_ready.go:82] duration metric: took 4m0.007107243s for pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace to be "Ready" ...
	E0819 19:16:55.419345  438295 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0819 19:16:55.419355  438295 pod_ready.go:39] duration metric: took 4m4.316528467s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
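The 438295 run hits the same wall: metrics-server-6867b74b74-kxcwh stays NotReady for the whole 4-minute window, WaitExtra ends with a context deadline, and the run moves on to probing the apiserver process directly. When the cluster is reachable, the quickest way to see why the pod never went Ready is to read its Ready condition and recent events; the commands below are an illustrative sketch, and the --context value is an assumption based on the node name embed-certs-024748 that appears in the kubelet lines further down.

  # Ready condition of the exact pod named in the log
  kubectl --context embed-certs-024748 -n kube-system get pod metrics-server-6867b74b74-kxcwh \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")]}'
  # assumed diagnostic step: events usually show failing probes or image pull problems
  kubectl --context embed-certs-024748 -n kube-system describe pod metrics-server-6867b74b74-kxcwh | tail -n 20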
	I0819 19:16:55.419408  438295 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:16:55.419449  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:55.419499  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:55.466648  438295 cri.go:89] found id: "d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a"
	I0819 19:16:55.466679  438295 cri.go:89] found id: ""
	I0819 19:16:55.466690  438295 logs.go:276] 1 containers: [d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a]
	I0819 19:16:55.466758  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:16:55.471085  438295 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:55.471164  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:55.509883  438295 cri.go:89] found id: "a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672"
	I0819 19:16:55.509910  438295 cri.go:89] found id: ""
	I0819 19:16:55.509921  438295 logs.go:276] 1 containers: [a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672]
	I0819 19:16:55.509984  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:16:55.516866  438295 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:55.516954  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:55.560957  438295 cri.go:89] found id: "a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f"
	I0819 19:16:55.560988  438295 cri.go:89] found id: ""
	I0819 19:16:55.560999  438295 logs.go:276] 1 containers: [a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f]
	I0819 19:16:55.561065  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:16:55.565592  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:55.565662  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:55.610872  438295 cri.go:89] found id: "c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22"
	I0819 19:16:55.610905  438295 cri.go:89] found id: ""
	I0819 19:16:55.610914  438295 logs.go:276] 1 containers: [c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22]
	I0819 19:16:55.610976  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:16:55.615411  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:55.615486  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:55.652759  438295 cri.go:89] found id: "3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338"
	I0819 19:16:55.652792  438295 cri.go:89] found id: ""
	I0819 19:16:55.652807  438295 logs.go:276] 1 containers: [3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338]
	I0819 19:16:55.652873  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:16:55.657124  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:55.657190  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:55.699063  438295 cri.go:89] found id: "6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071"
	I0819 19:16:55.699085  438295 cri.go:89] found id: ""
	I0819 19:16:55.699093  438295 logs.go:276] 1 containers: [6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071]
	I0819 19:16:55.699145  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:16:55.703224  438295 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:55.703292  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:55.753166  438295 cri.go:89] found id: ""
	I0819 19:16:55.753198  438295 logs.go:276] 0 containers: []
	W0819 19:16:55.753210  438295 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:55.753218  438295 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 19:16:55.753286  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 19:16:55.803518  438295 cri.go:89] found id: "902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8"
	I0819 19:16:55.803551  438295 cri.go:89] found id: "44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6"
	I0819 19:16:55.803558  438295 cri.go:89] found id: ""
	I0819 19:16:55.803568  438295 logs.go:276] 2 containers: [902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8 44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6]
	I0819 19:16:55.803637  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:16:55.808063  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:16:55.812708  438295 logs.go:123] Gathering logs for container status ...
	I0819 19:16:55.812737  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:55.861697  438295 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:55.861736  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 19:16:55.911203  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.671901     936 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:16:55.911420  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672098     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	W0819 19:16:55.911603  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.672624     936 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:16:55.911834  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672667     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	I0819 19:16:55.949585  438295 logs.go:123] Gathering logs for kube-scheduler [c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22] ...
	I0819 19:16:55.949663  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22"
	I0819 19:16:55.995063  438295 logs.go:123] Gathering logs for kube-controller-manager [6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071] ...
	I0819 19:16:55.995100  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071"
	I0819 19:16:56.062320  438295 logs.go:123] Gathering logs for storage-provisioner [902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8] ...
	I0819 19:16:56.062376  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8"
	I0819 19:16:56.100112  438295 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:56.100152  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:56.589439  438295 logs.go:123] Gathering logs for kube-proxy [3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338] ...
	I0819 19:16:56.589486  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338"
	I0819 19:16:56.632096  438295 logs.go:123] Gathering logs for storage-provisioner [44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6] ...
	I0819 19:16:56.632132  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6"
	I0819 19:16:56.670952  438295 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:56.670984  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:56.685246  438295 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:56.685279  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 19:16:56.826418  438295 logs.go:123] Gathering logs for kube-apiserver [d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a] ...
	I0819 19:16:56.826456  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a"
	I0819 19:16:56.876901  438295 logs.go:123] Gathering logs for etcd [a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672] ...
	I0819 19:16:56.876944  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672"
	I0819 19:16:56.920390  438295 logs.go:123] Gathering logs for coredns [a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f] ...
	I0819 19:16:56.920423  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f"
	I0819 19:16:56.961691  438295 out.go:358] Setting ErrFile to fd 2...
	I0819 19:16:56.961718  438295 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 19:16:56.961793  438295 out.go:270] X Problems detected in kubelet:
	W0819 19:16:56.961805  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.671901     936 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:16:56.961824  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672098     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	W0819 19:16:56.961839  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.672624     936 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:16:56.961853  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672667     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	I0819 19:16:56.961884  438295 out.go:358] Setting ErrFile to fd 2...
	I0819 19:16:56.961893  438295 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:16:55.716096  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:55.734732  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:55.734817  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:55.780484  438716 cri.go:89] found id: ""
	I0819 19:16:55.780514  438716 logs.go:276] 0 containers: []
	W0819 19:16:55.780525  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:55.780534  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:55.780607  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:55.821755  438716 cri.go:89] found id: ""
	I0819 19:16:55.821778  438716 logs.go:276] 0 containers: []
	W0819 19:16:55.821786  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:55.821792  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:55.821855  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:55.861032  438716 cri.go:89] found id: ""
	I0819 19:16:55.861066  438716 logs.go:276] 0 containers: []
	W0819 19:16:55.861077  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:55.861086  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:55.861159  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:55.909978  438716 cri.go:89] found id: ""
	I0819 19:16:55.910004  438716 logs.go:276] 0 containers: []
	W0819 19:16:55.910015  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:55.910024  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:55.910087  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:55.956603  438716 cri.go:89] found id: ""
	I0819 19:16:55.956634  438716 logs.go:276] 0 containers: []
	W0819 19:16:55.956645  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:55.956653  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:55.956722  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:55.999176  438716 cri.go:89] found id: ""
	I0819 19:16:55.999203  438716 logs.go:276] 0 containers: []
	W0819 19:16:55.999216  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:55.999225  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:55.999286  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:56.035141  438716 cri.go:89] found id: ""
	I0819 19:16:56.035172  438716 logs.go:276] 0 containers: []
	W0819 19:16:56.035183  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:56.035192  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:56.035255  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:56.076152  438716 cri.go:89] found id: ""
	I0819 19:16:56.076185  438716 logs.go:276] 0 containers: []
	W0819 19:16:56.076197  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:56.076209  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:56.076226  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:56.136624  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:56.136671  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:56.151867  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:56.151902  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:56.231650  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:56.231696  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:56.231713  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:56.307203  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:56.307247  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:58.848295  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:58.861984  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:58.862172  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:58.900089  438716 cri.go:89] found id: ""
	I0819 19:16:58.900114  438716 logs.go:276] 0 containers: []
	W0819 19:16:58.900124  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:58.900132  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:58.900203  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:58.932528  438716 cri.go:89] found id: ""
	I0819 19:16:58.932551  438716 logs.go:276] 0 containers: []
	W0819 19:16:58.932559  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:58.932565  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:58.932618  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:58.967255  438716 cri.go:89] found id: ""
	I0819 19:16:58.967283  438716 logs.go:276] 0 containers: []
	W0819 19:16:58.967291  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:58.967298  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:58.967349  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:59.000887  438716 cri.go:89] found id: ""
	I0819 19:16:59.000923  438716 logs.go:276] 0 containers: []
	W0819 19:16:59.000934  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:59.000942  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:59.001009  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:59.041386  438716 cri.go:89] found id: ""
	I0819 19:16:59.041417  438716 logs.go:276] 0 containers: []
	W0819 19:16:59.041428  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:59.041436  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:59.041499  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:59.080036  438716 cri.go:89] found id: ""
	I0819 19:16:59.080078  438716 logs.go:276] 0 containers: []
	W0819 19:16:59.080090  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:59.080099  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:59.080168  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:59.113946  438716 cri.go:89] found id: ""
	I0819 19:16:59.113982  438716 logs.go:276] 0 containers: []
	W0819 19:16:59.113995  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:59.114004  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:59.114066  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:59.155413  438716 cri.go:89] found id: ""
	I0819 19:16:59.155437  438716 logs.go:276] 0 containers: []
	W0819 19:16:59.155446  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:59.155456  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:59.155477  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:59.223795  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:59.223815  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:59.223828  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:59.304516  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:59.304554  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:59.344975  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:59.345005  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:59.397751  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:59.397789  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:58.402453  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:00.901494  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:02.043611  438245 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.355651212s)
	I0819 19:17:02.043735  438245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:17:02.066981  438245 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 19:17:02.083179  438245 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:17:02.100807  438245 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:17:02.100829  438245 kubeadm.go:157] found existing configuration files:
	
	I0819 19:17:02.100877  438245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0819 19:17:02.116462  438245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:17:02.116534  438245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:17:02.127313  438245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0819 19:17:02.147096  438245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:17:02.147170  438245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:17:02.159262  438245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0819 19:17:02.168825  438245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:17:02.168918  438245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:17:02.179354  438245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0819 19:17:02.188982  438245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:17:02.189051  438245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 19:17:02.199291  438245 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 19:17:01.914433  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:17:01.927468  438716 kubeadm.go:597] duration metric: took 4m3.453401239s to restartPrimaryControlPlane
	W0819 19:17:01.927564  438716 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 19:17:01.927600  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 19:17:02.647971  438716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:17:02.665946  438716 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 19:17:02.676665  438716 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:17:02.686818  438716 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:17:02.686840  438716 kubeadm.go:157] found existing configuration files:
	
	I0819 19:17:02.686885  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 19:17:02.697160  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:17:02.697228  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:17:02.707774  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 19:17:02.717251  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:17:02.717310  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:17:02.727481  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 19:17:02.738085  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:17:02.738141  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:17:02.749286  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 19:17:02.759965  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:17:02.760025  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 19:17:02.770753  438716 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 19:17:02.835857  438716 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 19:17:02.835940  438716 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 19:17:02.983775  438716 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 19:17:02.983974  438716 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 19:17:02.984149  438716 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 19:17:03.173404  438716 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 19:17:03.175412  438716 out.go:235]   - Generating certificates and keys ...
	I0819 19:17:03.175520  438716 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 19:17:03.175659  438716 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 19:17:03.175805  438716 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 19:17:03.175913  438716 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 19:17:03.176021  438716 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 19:17:03.176125  438716 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 19:17:03.176626  438716 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 19:17:03.177624  438716 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 19:17:03.178399  438716 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 19:17:03.179325  438716 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 19:17:03.179599  438716 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 19:17:03.179702  438716 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 19:17:03.416467  438716 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 19:17:03.505378  438716 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 19:17:03.588959  438716 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 19:17:03.680602  438716 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 19:17:03.697717  438716 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 19:17:03.700436  438716 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 19:17:03.700579  438716 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 19:17:03.858804  438716 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 19:17:03.861395  438716 out.go:235]   - Booting up control plane ...
	I0819 19:17:03.861520  438716 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 19:17:03.877387  438716 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 19:17:03.878611  438716 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 19:17:03.882842  438716 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 19:17:03.887436  438716 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 19:17:02.902839  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:05.402376  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:02.248409  438245 kubeadm.go:310] W0819 19:17:02.217617    2563 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 19:17:02.250447  438245 kubeadm.go:310] W0819 19:17:02.219827    2563 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 19:17:02.377127  438245 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 19:17:06.962848  438295 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:17:06.984774  438295 api_server.go:72] duration metric: took 4m23.117653428s to wait for apiserver process to appear ...
	I0819 19:17:06.984811  438295 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:17:06.984865  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:17:06.984939  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:17:07.025158  438295 cri.go:89] found id: "d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a"
	I0819 19:17:07.025201  438295 cri.go:89] found id: ""
	I0819 19:17:07.025213  438295 logs.go:276] 1 containers: [d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a]
	I0819 19:17:07.025287  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:07.032365  438295 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:17:07.032446  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:17:07.073368  438295 cri.go:89] found id: "a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672"
	I0819 19:17:07.073394  438295 cri.go:89] found id: ""
	I0819 19:17:07.073403  438295 logs.go:276] 1 containers: [a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672]
	I0819 19:17:07.073463  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:07.078781  438295 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:17:07.078891  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:17:07.123263  438295 cri.go:89] found id: "a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f"
	I0819 19:17:07.123293  438295 cri.go:89] found id: ""
	I0819 19:17:07.123303  438295 logs.go:276] 1 containers: [a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f]
	I0819 19:17:07.123365  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:07.128485  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:17:07.128579  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:17:07.167105  438295 cri.go:89] found id: "c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22"
	I0819 19:17:07.167137  438295 cri.go:89] found id: ""
	I0819 19:17:07.167148  438295 logs.go:276] 1 containers: [c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22]
	I0819 19:17:07.167215  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:07.171571  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:17:07.171641  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:17:07.215524  438295 cri.go:89] found id: "3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338"
	I0819 19:17:07.215547  438295 cri.go:89] found id: ""
	I0819 19:17:07.215555  438295 logs.go:276] 1 containers: [3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338]
	I0819 19:17:07.215621  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:07.221604  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:17:07.221676  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:17:07.263106  438295 cri.go:89] found id: "6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071"
	I0819 19:17:07.263140  438295 cri.go:89] found id: ""
	I0819 19:17:07.263149  438295 logs.go:276] 1 containers: [6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071]
	I0819 19:17:07.263209  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:07.267703  438295 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:17:07.267770  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:17:07.316006  438295 cri.go:89] found id: ""
	I0819 19:17:07.316042  438295 logs.go:276] 0 containers: []
	W0819 19:17:07.316054  438295 logs.go:278] No container was found matching "kindnet"
	I0819 19:17:07.316062  438295 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 19:17:07.316132  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 19:17:07.361100  438295 cri.go:89] found id: "902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8"
	I0819 19:17:07.361123  438295 cri.go:89] found id: "44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6"
	I0819 19:17:07.361126  438295 cri.go:89] found id: ""
	I0819 19:17:07.361133  438295 logs.go:276] 2 containers: [902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8 44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6]
	I0819 19:17:07.361190  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:07.366949  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:07.372724  438295 logs.go:123] Gathering logs for kubelet ...
	I0819 19:17:07.372748  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 19:17:07.413540  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.671901     936 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:17:07.413722  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672098     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	W0819 19:17:07.413858  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.672624     936 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:17:07.414017  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672667     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	I0819 19:17:07.452061  438295 logs.go:123] Gathering logs for coredns [a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f] ...
	I0819 19:17:07.452104  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f"
	I0819 19:17:07.490598  438295 logs.go:123] Gathering logs for kube-scheduler [c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22] ...
	I0819 19:17:07.490636  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22"
	I0819 19:17:07.530454  438295 logs.go:123] Gathering logs for kube-proxy [3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338] ...
	I0819 19:17:07.530486  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338"
	I0819 19:17:07.581488  438295 logs.go:123] Gathering logs for storage-provisioner [902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8] ...
	I0819 19:17:07.581528  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8"
	I0819 19:17:07.621752  438295 logs.go:123] Gathering logs for storage-provisioner [44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6] ...
	I0819 19:17:07.621787  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6"
	I0819 19:17:07.661330  438295 logs.go:123] Gathering logs for container status ...
	I0819 19:17:07.661365  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:17:07.709227  438295 logs.go:123] Gathering logs for dmesg ...
	I0819 19:17:07.709261  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:17:07.724634  438295 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:17:07.724670  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 19:17:07.850212  438295 logs.go:123] Gathering logs for kube-apiserver [d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a] ...
	I0819 19:17:07.850247  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a"
	I0819 19:17:07.894464  438295 logs.go:123] Gathering logs for etcd [a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672] ...
	I0819 19:17:07.894507  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672"
	I0819 19:17:07.943807  438295 logs.go:123] Gathering logs for kube-controller-manager [6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071] ...
	I0819 19:17:07.943841  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071"
	I0819 19:17:08.007428  438295 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:17:08.007463  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:17:08.487397  438295 out.go:358] Setting ErrFile to fd 2...
	I0819 19:17:08.487435  438295 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 19:17:08.487518  438295 out.go:270] X Problems detected in kubelet:
	W0819 19:17:08.487534  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.671901     936 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:17:08.487546  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672098     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	W0819 19:17:08.487560  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.672624     936 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:17:08.487574  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672667     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	I0819 19:17:08.487584  438295 out.go:358] Setting ErrFile to fd 2...
	I0819 19:17:08.487598  438295 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:17:10.237580  438245 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 19:17:10.237675  438245 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 19:17:10.237792  438245 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 19:17:10.237934  438245 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 19:17:10.238088  438245 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 19:17:10.238194  438245 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 19:17:10.239873  438245 out.go:235]   - Generating certificates and keys ...
	I0819 19:17:10.239957  438245 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 19:17:10.240051  438245 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 19:17:10.240187  438245 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 19:17:10.240294  438245 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 19:17:10.240410  438245 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 19:17:10.240495  438245 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 19:17:10.240598  438245 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 19:17:10.240680  438245 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 19:17:10.240747  438245 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 19:17:10.240843  438245 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 19:17:10.240886  438245 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 19:17:10.240958  438245 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 19:17:10.241024  438245 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 19:17:10.241094  438245 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 19:17:10.241159  438245 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 19:17:10.241248  438245 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 19:17:10.241328  438245 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 19:17:10.241431  438245 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 19:17:10.241535  438245 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 19:17:10.243764  438245 out.go:235]   - Booting up control plane ...
	I0819 19:17:10.243859  438245 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 19:17:10.243934  438245 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 19:17:10.243994  438245 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 19:17:10.244131  438245 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 19:17:10.244263  438245 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 19:17:10.244301  438245 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 19:17:10.244458  438245 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 19:17:10.244611  438245 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 19:17:10.244685  438245 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.412341ms
	I0819 19:17:10.244770  438245 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 19:17:10.244850  438245 kubeadm.go:310] [api-check] The API server is healthy after 5.002047877s
	I0819 19:17:10.244953  438245 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 19:17:10.245093  438245 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 19:17:10.245199  438245 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 19:17:10.245400  438245 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-982795 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 19:17:10.245465  438245 kubeadm.go:310] [bootstrap-token] Using token: trsfx5.kx2phd1605yhia2w
	I0819 19:17:10.247722  438245 out.go:235]   - Configuring RBAC rules ...
	I0819 19:17:10.247861  438245 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 19:17:10.247955  438245 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 19:17:10.248144  438245 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 19:17:10.248264  438245 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 19:17:10.248379  438245 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 19:17:10.248468  438245 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 19:17:10.248567  438245 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 19:17:10.248612  438245 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 19:17:10.248654  438245 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 19:17:10.248660  438245 kubeadm.go:310] 
	I0819 19:17:10.248708  438245 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 19:17:10.248713  438245 kubeadm.go:310] 
	I0819 19:17:10.248779  438245 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 19:17:10.248786  438245 kubeadm.go:310] 
	I0819 19:17:10.248806  438245 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 19:17:10.248866  438245 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 19:17:10.248910  438245 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 19:17:10.248916  438245 kubeadm.go:310] 
	I0819 19:17:10.248966  438245 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 19:17:10.248972  438245 kubeadm.go:310] 
	I0819 19:17:10.249014  438245 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 19:17:10.249024  438245 kubeadm.go:310] 
	I0819 19:17:10.249069  438245 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 19:17:10.249136  438245 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 19:17:10.249209  438245 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 19:17:10.249221  438245 kubeadm.go:310] 
	I0819 19:17:10.249319  438245 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 19:17:10.249386  438245 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 19:17:10.249392  438245 kubeadm.go:310] 
	I0819 19:17:10.249464  438245 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token trsfx5.kx2phd1605yhia2w \
	I0819 19:17:10.249553  438245 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3fcbd90565c5acbc36a47b2db682cb22dce9b172c9bf3af21e506ebb67608039 \
	I0819 19:17:10.249575  438245 kubeadm.go:310] 	--control-plane 
	I0819 19:17:10.249581  438245 kubeadm.go:310] 
	I0819 19:17:10.249658  438245 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 19:17:10.249664  438245 kubeadm.go:310] 
	I0819 19:17:10.249734  438245 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token trsfx5.kx2phd1605yhia2w \
	I0819 19:17:10.249833  438245 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3fcbd90565c5acbc36a47b2db682cb22dce9b172c9bf3af21e506ebb67608039 
	I0819 19:17:10.249849  438245 cni.go:84] Creating CNI manager for ""
	I0819 19:17:10.249857  438245 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:17:10.252133  438245 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 19:17:07.403590  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:09.901861  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:10.253419  438245 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 19:17:10.264266  438245 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
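	(The 1-k8s.conflist copied above is the bridge CNI configuration for the "kvm2" + "crio" combination. Below is a minimal sketch of the typical shape of such a conflist; the exact 496-byte file minikube generates may differ in field values and subnet.)

	    # Sketch only -- assumed shape of a bridge CNI conflist, not the file minikube actually wrote.
	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF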
	I0819 19:17:10.289509  438245 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 19:17:10.289661  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-982795 minikube.k8s.io/updated_at=2024_08_19T19_17_10_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411 minikube.k8s.io/name=default-k8s-diff-port-982795 minikube.k8s.io/primary=true
	I0819 19:17:10.289663  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:10.322738  438245 ops.go:34] apiserver oom_adj: -16
	I0819 19:17:10.519946  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:11.020736  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:11.520925  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:12.020276  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:12.520277  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:13.020787  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:13.520048  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:14.020893  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:14.520869  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:14.642214  438245 kubeadm.go:1113] duration metric: took 4.352638211s to wait for elevateKubeSystemPrivileges
	I0819 19:17:14.642251  438245 kubeadm.go:394] duration metric: took 4m59.943476935s to StartCluster
	I0819 19:17:14.642295  438245 settings.go:142] acquiring lock: {Name:mk396fcf49a1d0e69583cf37ff3c819e37118163 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:17:14.642382  438245 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 19:17:14.644103  438245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/kubeconfig: {Name:mk8e7b4e1bb7da665111d2acd83eb48882c66853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:17:14.644408  438245 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.48 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 19:17:14.644550  438245 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 19:17:14.644641  438245 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-982795"
	I0819 19:17:14.644665  438245 config.go:182] Loaded profile config "default-k8s-diff-port-982795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:17:14.644687  438245 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-982795"
	W0819 19:17:14.644701  438245 addons.go:243] addon storage-provisioner should already be in state true
	I0819 19:17:14.644712  438245 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-982795"
	I0819 19:17:14.644735  438245 host.go:66] Checking if "default-k8s-diff-port-982795" exists ...
	I0819 19:17:14.644757  438245 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-982795"
	W0819 19:17:14.644770  438245 addons.go:243] addon metrics-server should already be in state true
	I0819 19:17:14.644678  438245 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-982795"
	I0819 19:17:14.644852  438245 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-982795"
	I0819 19:17:14.644797  438245 host.go:66] Checking if "default-k8s-diff-port-982795" exists ...
	I0819 19:17:14.645125  438245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:17:14.645176  438245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:17:14.645272  438245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:17:14.645291  438245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:17:14.645355  438245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:17:14.645401  438245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:17:14.646083  438245 out.go:177] * Verifying Kubernetes components...
	I0819 19:17:14.647579  438245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:17:14.662756  438245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42581
	I0819 19:17:14.663407  438245 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:17:14.664088  438245 main.go:141] libmachine: Using API Version  1
	I0819 19:17:14.664117  438245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:17:14.664528  438245 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:17:14.665189  438245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:17:14.665222  438245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:17:14.665665  438245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43637
	I0819 19:17:14.665842  438245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44021
	I0819 19:17:14.666204  438245 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:17:14.666321  438245 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:17:14.666761  438245 main.go:141] libmachine: Using API Version  1
	I0819 19:17:14.666783  438245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:17:14.666955  438245 main.go:141] libmachine: Using API Version  1
	I0819 19:17:14.666979  438245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:17:14.667173  438245 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:17:14.667363  438245 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:17:14.667592  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetState
	I0819 19:17:14.667786  438245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:17:14.667818  438245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:17:14.671231  438245 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-982795"
	W0819 19:17:14.671249  438245 addons.go:243] addon default-storageclass should already be in state true
	I0819 19:17:14.671273  438245 host.go:66] Checking if "default-k8s-diff-port-982795" exists ...
	I0819 19:17:14.671507  438245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:17:14.671533  438245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:17:14.682996  438245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36593
	I0819 19:17:14.683560  438245 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:17:14.684268  438245 main.go:141] libmachine: Using API Version  1
	I0819 19:17:14.684292  438245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:17:14.684686  438245 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:17:14.684899  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetState
	I0819 19:17:14.686943  438245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44459
	I0819 19:17:14.687384  438245 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:17:14.687309  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:17:14.687874  438245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46587
	I0819 19:17:14.687965  438245 main.go:141] libmachine: Using API Version  1
	I0819 19:17:14.687980  438245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:17:14.688367  438245 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:17:14.688420  438245 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:17:14.688623  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetState
	I0819 19:17:14.689039  438245 main.go:141] libmachine: Using API Version  1
	I0819 19:17:14.689362  438245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:17:14.689690  438245 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:17:14.690179  438245 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:17:14.690626  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:17:14.690789  438245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:17:14.690823  438245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:17:14.690938  438245 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:17:14.690958  438245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 19:17:14.690979  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:17:14.692114  438245 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 19:17:11.902284  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:13.903205  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:16.402298  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:14.693147  438245 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 19:17:14.693163  438245 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 19:17:14.693182  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:17:14.694601  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:17:14.695302  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:17:14.695333  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:17:14.695541  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:17:14.695760  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:17:14.696133  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:17:14.696303  438245 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa Username:docker}
	I0819 19:17:14.696554  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:17:14.696979  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:17:14.697003  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:17:14.697110  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:17:14.697274  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:17:14.697445  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:17:14.697578  438245 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa Username:docker}
	I0819 19:17:14.708592  438245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38807
	I0819 19:17:14.709140  438245 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:17:14.709716  438245 main.go:141] libmachine: Using API Version  1
	I0819 19:17:14.709737  438245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:17:14.710049  438245 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:17:14.710269  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetState
	I0819 19:17:14.711887  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:17:14.712147  438245 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 19:17:14.712162  438245 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 19:17:14.712179  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:17:14.715593  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:17:14.716040  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:17:14.716062  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:17:14.716384  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:17:14.716561  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:17:14.716710  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:17:14.716938  438245 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa Username:docker}
	I0819 19:17:14.874857  438245 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:17:14.903798  438245 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-982795" to be "Ready" ...
	I0819 19:17:14.919842  438245 node_ready.go:49] node "default-k8s-diff-port-982795" has status "Ready":"True"
	I0819 19:17:14.919866  438245 node_ready.go:38] duration metric: took 16.039402ms for node "default-k8s-diff-port-982795" to be "Ready" ...
	I0819 19:17:14.919877  438245 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:17:14.932785  438245 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-845gx" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:15.019664  438245 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 19:17:15.019718  438245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 19:17:15.030317  438245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 19:17:15.056177  438245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:17:15.074202  438245 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 19:17:15.074235  438245 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 19:17:15.127037  438245 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 19:17:15.127071  438245 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 19:17:15.217951  438245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 19:17:15.351034  438245 main.go:141] libmachine: Making call to close driver server
	I0819 19:17:15.351067  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .Close
	I0819 19:17:15.351398  438245 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:17:15.351417  438245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:17:15.351429  438245 main.go:141] libmachine: Making call to close driver server
	I0819 19:17:15.351441  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .Close
	I0819 19:17:15.351678  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | Closing plugin on server side
	I0819 19:17:15.351728  438245 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:17:15.351750  438245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:17:15.357999  438245 main.go:141] libmachine: Making call to close driver server
	I0819 19:17:15.358023  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .Close
	I0819 19:17:15.358291  438245 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:17:15.358316  438245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:17:16.196638  438245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.140417152s)
	I0819 19:17:16.196694  438245 main.go:141] libmachine: Making call to close driver server
	I0819 19:17:16.196707  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .Close
	I0819 19:17:16.197022  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | Closing plugin on server side
	I0819 19:17:16.197112  438245 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:17:16.197137  438245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:17:16.197157  438245 main.go:141] libmachine: Making call to close driver server
	I0819 19:17:16.197167  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .Close
	I0819 19:17:16.197449  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | Closing plugin on server side
	I0819 19:17:16.197493  438245 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:17:16.197505  438245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:17:16.638069  438245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.42006496s)
	I0819 19:17:16.638141  438245 main.go:141] libmachine: Making call to close driver server
	I0819 19:17:16.638159  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .Close
	I0819 19:17:16.638488  438245 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:17:16.638518  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | Closing plugin on server side
	I0819 19:17:16.638529  438245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:17:16.638564  438245 main.go:141] libmachine: Making call to close driver server
	I0819 19:17:16.638574  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .Close
	I0819 19:17:16.638861  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | Closing plugin on server side
	I0819 19:17:16.638896  438245 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:17:16.638904  438245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:17:16.638915  438245 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-982795"
	I0819 19:17:16.641476  438245 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0819 19:17:16.642733  438245 addons.go:510] duration metric: took 1.998196502s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
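	(The metrics-server addon applied above registers an aggregated APIService. A hedged way to check by hand that it has come up, using plain kubectl against the profile's context rather than the test harness's own pod_ready polling, is sketched below.)

	    # Generic verification commands; context name assumed to match the minikube profile.
	    kubectl --context default-k8s-diff-port-982795 -n kube-system get deploy metrics-server
	    kubectl --context default-k8s-diff-port-982795 get apiservice v1beta1.metrics.k8s.io
	    kubectl --context default-k8s-diff-port-982795 top nodes   # succeeds once metrics are being scraped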
	I0819 19:17:16.954631  438245 pod_ready.go:103] pod "coredns-6f6b679f8f-845gx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:18.489333  438295 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0819 19:17:18.494609  438295 api_server.go:279] https://192.168.72.96:8443/healthz returned 200:
	ok
	I0819 19:17:18.495587  438295 api_server.go:141] control plane version: v1.31.0
	I0819 19:17:18.495613  438295 api_server.go:131] duration metric: took 11.510793296s to wait for apiserver health ...
	I0819 19:17:18.495624  438295 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:17:18.495656  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:17:18.495735  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:17:18.540446  438295 cri.go:89] found id: "d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a"
	I0819 19:17:18.540477  438295 cri.go:89] found id: ""
	I0819 19:17:18.540487  438295 logs.go:276] 1 containers: [d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a]
	I0819 19:17:18.540555  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:18.551443  438295 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:17:18.551527  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:17:18.592388  438295 cri.go:89] found id: "a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672"
	I0819 19:17:18.592416  438295 cri.go:89] found id: ""
	I0819 19:17:18.592427  438295 logs.go:276] 1 containers: [a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672]
	I0819 19:17:18.592495  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:18.597534  438295 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:17:18.597615  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:17:18.637782  438295 cri.go:89] found id: "a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f"
	I0819 19:17:18.637804  438295 cri.go:89] found id: ""
	I0819 19:17:18.637812  438295 logs.go:276] 1 containers: [a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f]
	I0819 19:17:18.637861  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:18.642557  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:17:18.642618  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:17:18.679573  438295 cri.go:89] found id: "c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22"
	I0819 19:17:18.679597  438295 cri.go:89] found id: ""
	I0819 19:17:18.679605  438295 logs.go:276] 1 containers: [c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22]
	I0819 19:17:18.679657  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:18.684160  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:17:18.684230  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:17:18.726848  438295 cri.go:89] found id: "3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338"
	I0819 19:17:18.726881  438295 cri.go:89] found id: ""
	I0819 19:17:18.726889  438295 logs.go:276] 1 containers: [3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338]
	I0819 19:17:18.726943  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:18.731422  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:17:18.731484  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:17:18.773623  438295 cri.go:89] found id: "6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071"
	I0819 19:17:18.773649  438295 cri.go:89] found id: ""
	I0819 19:17:18.773658  438295 logs.go:276] 1 containers: [6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071]
	I0819 19:17:18.773709  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:18.779609  438295 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:17:18.779687  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:17:18.822876  438295 cri.go:89] found id: ""
	I0819 19:17:18.822911  438295 logs.go:276] 0 containers: []
	W0819 19:17:18.822922  438295 logs.go:278] No container was found matching "kindnet"
	I0819 19:17:18.822931  438295 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 19:17:18.822998  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 19:17:18.868653  438295 cri.go:89] found id: "902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8"
	I0819 19:17:18.868685  438295 cri.go:89] found id: "44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6"
	I0819 19:17:18.868691  438295 cri.go:89] found id: ""
	I0819 19:17:18.868701  438295 logs.go:276] 2 containers: [902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8 44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6]
	I0819 19:17:18.868776  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:18.873136  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:18.877397  438295 logs.go:123] Gathering logs for kube-proxy [3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338] ...
	I0819 19:17:18.877425  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338"
	I0819 19:17:18.918085  438295 logs.go:123] Gathering logs for kube-controller-manager [6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071] ...
	I0819 19:17:18.918118  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071"
	I0819 19:17:18.973344  438295 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:17:18.973378  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:17:18.901539  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:20.902550  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:19.440295  438245 pod_ready.go:103] pod "coredns-6f6b679f8f-845gx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:21.939652  438245 pod_ready.go:103] pod "coredns-6f6b679f8f-845gx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:19.443625  438295 logs.go:123] Gathering logs for container status ...
	I0819 19:17:19.443689  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:17:19.492650  438295 logs.go:123] Gathering logs for dmesg ...
	I0819 19:17:19.492696  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:17:19.507957  438295 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:17:19.507996  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 19:17:19.617295  438295 logs.go:123] Gathering logs for coredns [a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f] ...
	I0819 19:17:19.617341  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f"
	I0819 19:17:19.669869  438295 logs.go:123] Gathering logs for kube-scheduler [c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22] ...
	I0819 19:17:19.669930  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22"
	I0819 19:17:19.706649  438295 logs.go:123] Gathering logs for storage-provisioner [44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6] ...
	I0819 19:17:19.706681  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6"
	I0819 19:17:19.746742  438295 logs.go:123] Gathering logs for kubelet ...
	I0819 19:17:19.746780  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 19:17:19.796224  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.671901     936 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:17:19.796442  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672098     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	W0819 19:17:19.796622  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.672624     936 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:17:19.796845  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672667     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	I0819 19:17:19.836283  438295 logs.go:123] Gathering logs for kube-apiserver [d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a] ...
	I0819 19:17:19.836328  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a"
	I0819 19:17:19.889829  438295 logs.go:123] Gathering logs for etcd [a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672] ...
	I0819 19:17:19.889875  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672"
	I0819 19:17:19.938361  438295 logs.go:123] Gathering logs for storage-provisioner [902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8] ...
	I0819 19:17:19.938397  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8"
	I0819 19:17:19.978525  438295 out.go:358] Setting ErrFile to fd 2...
	I0819 19:17:19.978557  438295 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 19:17:19.978628  438295 out.go:270] X Problems detected in kubelet:
	W0819 19:17:19.978642  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.671901     936 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:17:19.978656  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672098     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	W0819 19:17:19.978669  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.672624     936 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:17:19.978680  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672667     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	I0819 19:17:19.978690  438295 out.go:358] Setting ErrFile to fd 2...
	I0819 19:17:19.978699  438295 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:17:23.941399  438245 pod_ready.go:93] pod "coredns-6f6b679f8f-845gx" in "kube-system" namespace has status "Ready":"True"
	I0819 19:17:23.941426  438245 pod_ready.go:82] duration metric: took 9.00859927s for pod "coredns-6f6b679f8f-845gx" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.941438  438245 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-tlxtt" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.946827  438245 pod_ready.go:93] pod "coredns-6f6b679f8f-tlxtt" in "kube-system" namespace has status "Ready":"True"
	I0819 19:17:23.946848  438245 pod_ready.go:82] duration metric: took 5.40058ms for pod "coredns-6f6b679f8f-tlxtt" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.946859  438245 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.956158  438245 pod_ready.go:93] pod "etcd-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"True"
	I0819 19:17:23.956181  438245 pod_ready.go:82] duration metric: took 9.312871ms for pod "etcd-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.956193  438245 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.962573  438245 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"True"
	I0819 19:17:23.962595  438245 pod_ready.go:82] duration metric: took 6.3934ms for pod "kube-apiserver-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.962607  438245 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.968186  438245 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"True"
	I0819 19:17:23.968206  438245 pod_ready.go:82] duration metric: took 5.591464ms for pod "kube-controller-manager-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.968214  438245 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2v4hk" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:24.337409  438245 pod_ready.go:93] pod "kube-proxy-2v4hk" in "kube-system" namespace has status "Ready":"True"
	I0819 19:17:24.337443  438245 pod_ready.go:82] duration metric: took 369.220318ms for pod "kube-proxy-2v4hk" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:24.337460  438245 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:24.737326  438245 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"True"
	I0819 19:17:24.737362  438245 pod_ready.go:82] duration metric: took 399.891804ms for pod "kube-scheduler-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:24.737375  438245 pod_ready.go:39] duration metric: took 9.817484404s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
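	(The extra wait above polls each system-critical pod for the Ready condition. An equivalent manual check, sketched with plain kubectl rather than the harness's pod_ready loop, would look like this.)

	    # Wait for system-critical pods by the same labels the harness uses.
	    kubectl --context default-k8s-diff-port-982795 -n kube-system wait pod \
	      -l k8s-app=kube-dns --for=condition=Ready --timeout=6m
	    kubectl --context default-k8s-diff-port-982795 -n kube-system wait pod \
	      -l component=kube-apiserver --for=condition=Ready --timeout=6m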
	I0819 19:17:24.737396  438245 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:17:24.737467  438245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:17:24.753681  438245 api_server.go:72] duration metric: took 10.109231411s to wait for apiserver process to appear ...
	I0819 19:17:24.753711  438245 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:17:24.753734  438245 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8444/healthz ...
	I0819 19:17:24.757976  438245 api_server.go:279] https://192.168.61.48:8444/healthz returned 200:
	ok
	I0819 19:17:24.758875  438245 api_server.go:141] control plane version: v1.31.0
	I0819 19:17:24.758899  438245 api_server.go:131] duration metric: took 5.179486ms to wait for apiserver health ...
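	(The healthz probe above targets the apiserver on the non-default port 8444. The same check can be reproduced through kubectl's raw passthrough; a sketch, assuming the kubeconfig written earlier for this profile is in use.)

	    # Equivalent health probes via the configured kubeconfig.
	    kubectl --context default-k8s-diff-port-982795 get --raw /healthz          # prints "ok" when healthy
	    kubectl --context default-k8s-diff-port-982795 get --raw '/readyz?verbose' # per-check readiness detail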
	I0819 19:17:24.758908  438245 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:17:24.944008  438245 system_pods.go:59] 9 kube-system pods found
	I0819 19:17:24.944053  438245 system_pods.go:61] "coredns-6f6b679f8f-845gx" [95155dd2-d46c-4445-b735-26eae16aaff9] Running
	I0819 19:17:24.944058  438245 system_pods.go:61] "coredns-6f6b679f8f-tlxtt" [150ac4be-bef1-4f0a-ab16-f085284686cb] Running
	I0819 19:17:24.944062  438245 system_pods.go:61] "etcd-default-k8s-diff-port-982795" [eb29f445-6242-4b60-a8d5-7c684df17926] Running
	I0819 19:17:24.944066  438245 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-982795" [2add6270-bf14-43e7-834b-3e629f46efa3] Running
	I0819 19:17:24.944070  438245 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-982795" [6b636d4b-0efa-4cef-b0d4-d4539ddc5c90] Running
	I0819 19:17:24.944073  438245 system_pods.go:61] "kube-proxy-2v4hk" [042d5d54-6557-4d8e-8f4e-2d56e95882ce] Running
	I0819 19:17:24.944076  438245 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-982795" [6eff3815-26b3-4e95-a754-2dc65fd29126] Running
	I0819 19:17:24.944082  438245 system_pods.go:61] "metrics-server-6867b74b74-2dp5r" [04e0ce68-d9a2-426a-a0e9-47f6f7867efd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:17:24.944086  438245 system_pods.go:61] "storage-provisioner" [23fcea86-977e-4eb1-9e5a-23d6bdfb09c0] Running
	I0819 19:17:24.944094  438245 system_pods.go:74] duration metric: took 185.180015ms to wait for pod list to return data ...
	I0819 19:17:24.944104  438245 default_sa.go:34] waiting for default service account to be created ...
	I0819 19:17:25.137108  438245 default_sa.go:45] found service account: "default"
	I0819 19:17:25.137147  438245 default_sa.go:55] duration metric: took 193.033434ms for default service account to be created ...
	I0819 19:17:25.137160  438245 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 19:17:25.340115  438245 system_pods.go:86] 9 kube-system pods found
	I0819 19:17:25.340146  438245 system_pods.go:89] "coredns-6f6b679f8f-845gx" [95155dd2-d46c-4445-b735-26eae16aaff9] Running
	I0819 19:17:25.340155  438245 system_pods.go:89] "coredns-6f6b679f8f-tlxtt" [150ac4be-bef1-4f0a-ab16-f085284686cb] Running
	I0819 19:17:25.340161  438245 system_pods.go:89] "etcd-default-k8s-diff-port-982795" [eb29f445-6242-4b60-a8d5-7c684df17926] Running
	I0819 19:17:25.340167  438245 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-982795" [2add6270-bf14-43e7-834b-3e629f46efa3] Running
	I0819 19:17:25.340173  438245 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-982795" [6b636d4b-0efa-4cef-b0d4-d4539ddc5c90] Running
	I0819 19:17:25.340177  438245 system_pods.go:89] "kube-proxy-2v4hk" [042d5d54-6557-4d8e-8f4e-2d56e95882ce] Running
	I0819 19:17:25.340182  438245 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-982795" [6eff3815-26b3-4e95-a754-2dc65fd29126] Running
	I0819 19:17:25.340192  438245 system_pods.go:89] "metrics-server-6867b74b74-2dp5r" [04e0ce68-d9a2-426a-a0e9-47f6f7867efd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:17:25.340198  438245 system_pods.go:89] "storage-provisioner" [23fcea86-977e-4eb1-9e5a-23d6bdfb09c0] Running
	I0819 19:17:25.340211  438245 system_pods.go:126] duration metric: took 203.044324ms to wait for k8s-apps to be running ...
	I0819 19:17:25.340224  438245 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 19:17:25.340278  438245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:17:25.355190  438245 system_svc.go:56] duration metric: took 14.954269ms WaitForService to wait for kubelet
	I0819 19:17:25.355223  438245 kubeadm.go:582] duration metric: took 10.710777567s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 19:17:25.355252  438245 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:17:25.537425  438245 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:17:25.537459  438245 node_conditions.go:123] node cpu capacity is 2
	I0819 19:17:25.537472  438245 node_conditions.go:105] duration metric: took 182.213218ms to run NodePressure ...
	I0819 19:17:25.537491  438245 start.go:241] waiting for startup goroutines ...
	I0819 19:17:25.537501  438245 start.go:246] waiting for cluster config update ...
	I0819 19:17:25.537516  438245 start.go:255] writing updated cluster config ...
	I0819 19:17:25.537851  438245 ssh_runner.go:195] Run: rm -f paused
	I0819 19:17:25.589212  438245 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 19:17:25.591352  438245 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-982795" cluster and "default" namespace by default
	I0819 19:17:22.902846  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:25.401911  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:29.988042  438295 system_pods.go:59] 8 kube-system pods found
	I0819 19:17:29.988074  438295 system_pods.go:61] "coredns-6f6b679f8f-7ww4z" [bbde00d4-6027-4d8d-b51e-bd68915da166] Running
	I0819 19:17:29.988080  438295 system_pods.go:61] "etcd-embed-certs-024748" [846ff0f0-5399-43fd-8e7b-1f64997cd291] Running
	I0819 19:17:29.988084  438295 system_pods.go:61] "kube-apiserver-embed-certs-024748" [3ff558d6-e82e-47a0-bb81-15244bee6470] Running
	I0819 19:17:29.988088  438295 system_pods.go:61] "kube-controller-manager-embed-certs-024748" [993b82ba-e8e7-4896-a06b-87c4f08d5985] Running
	I0819 19:17:29.988092  438295 system_pods.go:61] "kube-proxy-bmmbh" [1f77f152-f5f4-40f6-9632-1eaa36b9ea31] Running
	I0819 19:17:29.988095  438295 system_pods.go:61] "kube-scheduler-embed-certs-024748" [34684d4c-2479-45c5-883b-158cf9f974f5] Running
	I0819 19:17:29.988100  438295 system_pods.go:61] "metrics-server-6867b74b74-kxcwh" [15f86629-d916-4fdc-9ecf-9cb1b6c83f85] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:17:29.988104  438295 system_pods.go:61] "storage-provisioner" [7acb6ce1-21b6-4cdd-a5cb-76d694fc0a38] Running
	I0819 19:17:29.988113  438295 system_pods.go:74] duration metric: took 11.492481541s to wait for pod list to return data ...
	I0819 19:17:29.988120  438295 default_sa.go:34] waiting for default service account to be created ...
	I0819 19:17:29.991728  438295 default_sa.go:45] found service account: "default"
	I0819 19:17:29.991755  438295 default_sa.go:55] duration metric: took 3.62838ms for default service account to be created ...
	I0819 19:17:29.991764  438295 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 19:17:29.997212  438295 system_pods.go:86] 8 kube-system pods found
	I0819 19:17:29.997237  438295 system_pods.go:89] "coredns-6f6b679f8f-7ww4z" [bbde00d4-6027-4d8d-b51e-bd68915da166] Running
	I0819 19:17:29.997243  438295 system_pods.go:89] "etcd-embed-certs-024748" [846ff0f0-5399-43fd-8e7b-1f64997cd291] Running
	I0819 19:17:29.997247  438295 system_pods.go:89] "kube-apiserver-embed-certs-024748" [3ff558d6-e82e-47a0-bb81-15244bee6470] Running
	I0819 19:17:29.997252  438295 system_pods.go:89] "kube-controller-manager-embed-certs-024748" [993b82ba-e8e7-4896-a06b-87c4f08d5985] Running
	I0819 19:17:29.997256  438295 system_pods.go:89] "kube-proxy-bmmbh" [1f77f152-f5f4-40f6-9632-1eaa36b9ea31] Running
	I0819 19:17:29.997260  438295 system_pods.go:89] "kube-scheduler-embed-certs-024748" [34684d4c-2479-45c5-883b-158cf9f974f5] Running
	I0819 19:17:29.997267  438295 system_pods.go:89] "metrics-server-6867b74b74-kxcwh" [15f86629-d916-4fdc-9ecf-9cb1b6c83f85] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:17:29.997270  438295 system_pods.go:89] "storage-provisioner" [7acb6ce1-21b6-4cdd-a5cb-76d694fc0a38] Running
	I0819 19:17:29.997277  438295 system_pods.go:126] duration metric: took 5.507363ms to wait for k8s-apps to be running ...
	I0819 19:17:29.997283  438295 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 19:17:29.997329  438295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:17:30.015349  438295 system_svc.go:56] duration metric: took 18.05422ms WaitForService to wait for kubelet
	I0819 19:17:30.015385  438295 kubeadm.go:582] duration metric: took 4m46.148274918s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 19:17:30.015408  438295 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:17:30.019744  438295 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:17:30.019767  438295 node_conditions.go:123] node cpu capacity is 2
	I0819 19:17:30.019779  438295 node_conditions.go:105] duration metric: took 4.364435ms to run NodePressure ...
	I0819 19:17:30.019791  438295 start.go:241] waiting for startup goroutines ...
	I0819 19:17:30.019798  438295 start.go:246] waiting for cluster config update ...
	I0819 19:17:30.019809  438295 start.go:255] writing updated cluster config ...
	I0819 19:17:30.020080  438295 ssh_runner.go:195] Run: rm -f paused
	I0819 19:17:30.071945  438295 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 19:17:30.073912  438295 out.go:177] * Done! kubectl is now configured to use "embed-certs-024748" cluster and "default" namespace by default
	I0819 19:17:27.901471  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:29.901560  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:32.401214  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:34.402184  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:36.901979  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:38.902132  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:41.401103  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:43.889122  438716 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 19:17:43.889226  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:17:43.889441  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:17:43.402531  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:45.402739  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:48.889647  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:17:48.889896  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:17:47.902033  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:48.402784  438001 pod_ready.go:82] duration metric: took 4m0.007573449s for pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace to be "Ready" ...
	E0819 19:17:48.402807  438001 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0819 19:17:48.402814  438001 pod_ready.go:39] duration metric: took 4m5.043625176s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:17:48.402837  438001 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:17:48.402866  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:17:48.402916  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:17:48.465049  438001 cri.go:89] found id: "cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094"
	I0819 19:17:48.465072  438001 cri.go:89] found id: ""
	I0819 19:17:48.465081  438001 logs.go:276] 1 containers: [cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094]
	I0819 19:17:48.465157  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:48.469640  438001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:17:48.469708  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:17:48.506800  438001 cri.go:89] found id: "27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a"
	I0819 19:17:48.506825  438001 cri.go:89] found id: ""
	I0819 19:17:48.506836  438001 logs.go:276] 1 containers: [27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a]
	I0819 19:17:48.506900  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:48.511810  438001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:17:48.511899  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:17:48.558215  438001 cri.go:89] found id: "6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa"
	I0819 19:17:48.558240  438001 cri.go:89] found id: ""
	I0819 19:17:48.558250  438001 logs.go:276] 1 containers: [6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa]
	I0819 19:17:48.558308  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:48.562785  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:17:48.562844  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:17:48.602715  438001 cri.go:89] found id: "123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f"
	I0819 19:17:48.602738  438001 cri.go:89] found id: ""
	I0819 19:17:48.602748  438001 logs.go:276] 1 containers: [123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f]
	I0819 19:17:48.602815  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:48.607456  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:17:48.607512  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:17:48.648285  438001 cri.go:89] found id: "236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a"
	I0819 19:17:48.648314  438001 cri.go:89] found id: ""
	I0819 19:17:48.648324  438001 logs.go:276] 1 containers: [236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a]
	I0819 19:17:48.648374  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:48.653772  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:17:48.653830  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:17:48.697336  438001 cri.go:89] found id: "390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346"
	I0819 19:17:48.697365  438001 cri.go:89] found id: ""
	I0819 19:17:48.697376  438001 logs.go:276] 1 containers: [390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346]
	I0819 19:17:48.697438  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:48.701661  438001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:17:48.701726  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:17:48.737952  438001 cri.go:89] found id: ""
	I0819 19:17:48.737990  438001 logs.go:276] 0 containers: []
	W0819 19:17:48.738002  438001 logs.go:278] No container was found matching "kindnet"
	I0819 19:17:48.738010  438001 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 19:17:48.738076  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 19:17:48.780047  438001 cri.go:89] found id: "fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd"
	I0819 19:17:48.780076  438001 cri.go:89] found id: "482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6"
	I0819 19:17:48.780082  438001 cri.go:89] found id: ""
	I0819 19:17:48.780092  438001 logs.go:276] 2 containers: [fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd 482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6]
	I0819 19:17:48.780168  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:48.784558  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:48.788803  438001 logs.go:123] Gathering logs for kube-apiserver [cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094] ...
	I0819 19:17:48.788826  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094"
	I0819 19:17:48.843469  438001 logs.go:123] Gathering logs for kube-scheduler [123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f] ...
	I0819 19:17:48.843501  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f"
	I0819 19:17:48.884461  438001 logs.go:123] Gathering logs for kube-proxy [236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a] ...
	I0819 19:17:48.884495  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a"
	I0819 19:17:48.927064  438001 logs.go:123] Gathering logs for storage-provisioner [fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd] ...
	I0819 19:17:48.927093  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd"
	I0819 19:17:48.963812  438001 logs.go:123] Gathering logs for container status ...
	I0819 19:17:48.963845  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:17:49.017381  438001 logs.go:123] Gathering logs for kubelet ...
	I0819 19:17:49.017420  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:17:49.093572  438001 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:17:49.093614  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 19:17:49.236680  438001 logs.go:123] Gathering logs for coredns [6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa] ...
	I0819 19:17:49.236721  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa"
	I0819 19:17:49.274636  438001 logs.go:123] Gathering logs for kube-controller-manager [390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346] ...
	I0819 19:17:49.274677  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346"
	I0819 19:17:49.326208  438001 logs.go:123] Gathering logs for storage-provisioner [482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6] ...
	I0819 19:17:49.326242  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6"
	I0819 19:17:49.363589  438001 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:17:49.363628  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:17:49.841705  438001 logs.go:123] Gathering logs for dmesg ...
	I0819 19:17:49.841757  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:17:49.858466  438001 logs.go:123] Gathering logs for etcd [27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a] ...
	I0819 19:17:49.858504  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a"
	I0819 19:17:52.406197  438001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:17:52.422951  438001 api_server.go:72] duration metric: took 4m16.822246565s to wait for apiserver process to appear ...
	I0819 19:17:52.422981  438001 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:17:52.423019  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:17:52.423075  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:17:52.464305  438001 cri.go:89] found id: "cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094"
	I0819 19:17:52.464327  438001 cri.go:89] found id: ""
	I0819 19:17:52.464335  438001 logs.go:276] 1 containers: [cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094]
	I0819 19:17:52.464387  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:52.468824  438001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:17:52.468904  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:17:52.508907  438001 cri.go:89] found id: "27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a"
	I0819 19:17:52.508929  438001 cri.go:89] found id: ""
	I0819 19:17:52.508937  438001 logs.go:276] 1 containers: [27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a]
	I0819 19:17:52.508998  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:52.513206  438001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:17:52.513281  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:17:52.553908  438001 cri.go:89] found id: "6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa"
	I0819 19:17:52.553940  438001 cri.go:89] found id: ""
	I0819 19:17:52.553948  438001 logs.go:276] 1 containers: [6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa]
	I0819 19:17:52.554007  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:52.558420  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:17:52.558487  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:17:52.598450  438001 cri.go:89] found id: "123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f"
	I0819 19:17:52.598480  438001 cri.go:89] found id: ""
	I0819 19:17:52.598491  438001 logs.go:276] 1 containers: [123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f]
	I0819 19:17:52.598564  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:52.603421  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:17:52.603485  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:17:52.639017  438001 cri.go:89] found id: "236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a"
	I0819 19:17:52.639049  438001 cri.go:89] found id: ""
	I0819 19:17:52.639060  438001 logs.go:276] 1 containers: [236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a]
	I0819 19:17:52.639129  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:52.645313  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:17:52.645392  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:17:52.687266  438001 cri.go:89] found id: "390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346"
	I0819 19:17:52.687296  438001 cri.go:89] found id: ""
	I0819 19:17:52.687305  438001 logs.go:276] 1 containers: [390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346]
	I0819 19:17:52.687369  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:52.691770  438001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:17:52.691830  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:17:52.734067  438001 cri.go:89] found id: ""
	I0819 19:17:52.734098  438001 logs.go:276] 0 containers: []
	W0819 19:17:52.734107  438001 logs.go:278] No container was found matching "kindnet"
	I0819 19:17:52.734113  438001 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 19:17:52.734171  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 19:17:52.781039  438001 cri.go:89] found id: "fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd"
	I0819 19:17:52.781062  438001 cri.go:89] found id: "482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6"
	I0819 19:17:52.781066  438001 cri.go:89] found id: ""
	I0819 19:17:52.781074  438001 logs.go:276] 2 containers: [fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd 482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6]
	I0819 19:17:52.781135  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:52.785730  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:52.789946  438001 logs.go:123] Gathering logs for kube-scheduler [123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f] ...
	I0819 19:17:52.789978  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f"
	I0819 19:17:52.830509  438001 logs.go:123] Gathering logs for kube-controller-manager [390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346] ...
	I0819 19:17:52.830541  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346"
	I0819 19:17:52.892964  438001 logs.go:123] Gathering logs for container status ...
	I0819 19:17:52.893017  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:17:52.947999  438001 logs.go:123] Gathering logs for kubelet ...
	I0819 19:17:52.948028  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:17:53.019377  438001 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:17:53.019423  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 19:17:53.134032  438001 logs.go:123] Gathering logs for kube-apiserver [cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094] ...
	I0819 19:17:53.134069  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094"
	I0819 19:17:53.186159  438001 logs.go:123] Gathering logs for etcd [27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a] ...
	I0819 19:17:53.186193  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a"
	I0819 19:17:53.236918  438001 logs.go:123] Gathering logs for storage-provisioner [482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6] ...
	I0819 19:17:53.236949  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6"
	I0819 19:17:53.275211  438001 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:17:53.275242  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:17:53.710352  438001 logs.go:123] Gathering logs for dmesg ...
	I0819 19:17:53.710396  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:17:53.726691  438001 logs.go:123] Gathering logs for coredns [6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa] ...
	I0819 19:17:53.726731  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa"
	I0819 19:17:53.768322  438001 logs.go:123] Gathering logs for kube-proxy [236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a] ...
	I0819 19:17:53.768361  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a"
	I0819 19:17:53.808546  438001 logs.go:123] Gathering logs for storage-provisioner [fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd] ...
	I0819 19:17:53.808577  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd"
	I0819 19:17:56.362339  438001 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I0819 19:17:56.366636  438001 api_server.go:279] https://192.168.39.106:8443/healthz returned 200:
	ok
	I0819 19:17:56.367838  438001 api_server.go:141] control plane version: v1.31.0
	I0819 19:17:56.367867  438001 api_server.go:131] duration metric: took 3.944877317s to wait for apiserver health ...
	I0819 19:17:56.367891  438001 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:17:56.367925  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:17:56.367991  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:17:56.412151  438001 cri.go:89] found id: "cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094"
	I0819 19:17:56.412179  438001 cri.go:89] found id: ""
	I0819 19:17:56.412187  438001 logs.go:276] 1 containers: [cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094]
	I0819 19:17:56.412247  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:56.416620  438001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:17:56.416795  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:17:56.456888  438001 cri.go:89] found id: "27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a"
	I0819 19:17:56.456918  438001 cri.go:89] found id: ""
	I0819 19:17:56.456927  438001 logs.go:276] 1 containers: [27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a]
	I0819 19:17:56.456984  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:56.461563  438001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:17:56.461667  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:17:56.506990  438001 cri.go:89] found id: "6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa"
	I0819 19:17:56.507018  438001 cri.go:89] found id: ""
	I0819 19:17:56.507028  438001 logs.go:276] 1 containers: [6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa]
	I0819 19:17:56.507099  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:56.511547  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:17:56.511616  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:17:56.551734  438001 cri.go:89] found id: "123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f"
	I0819 19:17:56.551761  438001 cri.go:89] found id: ""
	I0819 19:17:56.551772  438001 logs.go:276] 1 containers: [123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f]
	I0819 19:17:56.551837  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:56.556963  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:17:56.557039  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:17:56.601862  438001 cri.go:89] found id: "236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a"
	I0819 19:17:56.601892  438001 cri.go:89] found id: ""
	I0819 19:17:56.601902  438001 logs.go:276] 1 containers: [236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a]
	I0819 19:17:56.601971  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:56.606618  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:17:56.606706  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:17:56.649476  438001 cri.go:89] found id: "390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346"
	I0819 19:17:56.649501  438001 cri.go:89] found id: ""
	I0819 19:17:56.649510  438001 logs.go:276] 1 containers: [390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346]
	I0819 19:17:56.649561  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:56.654009  438001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:17:56.654071  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:17:56.707479  438001 cri.go:89] found id: ""
	I0819 19:17:56.707506  438001 logs.go:276] 0 containers: []
	W0819 19:17:56.707518  438001 logs.go:278] No container was found matching "kindnet"
	I0819 19:17:56.707527  438001 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 19:17:56.707585  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 19:17:56.749937  438001 cri.go:89] found id: "fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd"
	I0819 19:17:56.749961  438001 cri.go:89] found id: "482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6"
	I0819 19:17:56.749966  438001 cri.go:89] found id: ""
	I0819 19:17:56.749973  438001 logs.go:276] 2 containers: [fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd 482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6]
	I0819 19:17:56.750026  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:56.754791  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:56.758672  438001 logs.go:123] Gathering logs for etcd [27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a] ...
	I0819 19:17:56.758700  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a"
	I0819 19:17:56.811420  438001 logs.go:123] Gathering logs for kube-controller-manager [390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346] ...
	I0819 19:17:56.811461  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346"
	I0819 19:17:56.871550  438001 logs.go:123] Gathering logs for storage-provisioner [482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6] ...
	I0819 19:17:56.871588  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6"
	I0819 19:17:56.918183  438001 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:17:56.918224  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:17:57.297614  438001 logs.go:123] Gathering logs for container status ...
	I0819 19:17:57.297653  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:17:57.339092  438001 logs.go:123] Gathering logs for dmesg ...
	I0819 19:17:57.339127  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:17:57.355787  438001 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:17:57.355820  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 19:17:57.486287  438001 logs.go:123] Gathering logs for kube-apiserver [cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094] ...
	I0819 19:17:57.486328  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094"
	I0819 19:17:57.535864  438001 logs.go:123] Gathering logs for coredns [6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa] ...
	I0819 19:17:57.535903  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa"
	I0819 19:17:57.577211  438001 logs.go:123] Gathering logs for kube-scheduler [123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f] ...
	I0819 19:17:57.577248  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f"
	I0819 19:17:57.615928  438001 logs.go:123] Gathering logs for kube-proxy [236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a] ...
	I0819 19:17:57.615962  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a"
	I0819 19:17:57.655413  438001 logs.go:123] Gathering logs for storage-provisioner [fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd] ...
	I0819 19:17:57.655445  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd"
	I0819 19:17:57.704470  438001 logs.go:123] Gathering logs for kubelet ...
	I0819 19:17:57.704502  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:18:00.281191  438001 system_pods.go:59] 8 kube-system pods found
	I0819 19:18:00.281223  438001 system_pods.go:61] "coredns-6f6b679f8f-22lbt" [c8a5cabd-41d4-41cb-91c1-2db1f3471db3] Running
	I0819 19:18:00.281228  438001 system_pods.go:61] "etcd-no-preload-278232" [36d555a1-33e4-4c6c-b24e-2fee4fd84f2b] Running
	I0819 19:18:00.281232  438001 system_pods.go:61] "kube-apiserver-no-preload-278232" [af7173e5-c4ac-4ece-b8b9-bb81cb6b9bfd] Running
	I0819 19:18:00.281235  438001 system_pods.go:61] "kube-controller-manager-no-preload-278232" [2463d97a-5221-40ce-8fd7-08151165d6f7] Running
	I0819 19:18:00.281238  438001 system_pods.go:61] "kube-proxy-rcf49" [85d5814a-1ba9-46be-ab11-17bf40c0f029] Running
	I0819 19:18:00.281241  438001 system_pods.go:61] "kube-scheduler-no-preload-278232" [3b327704-f70c-4d6f-a774-15427a305472] Running
	I0819 19:18:00.281247  438001 system_pods.go:61] "metrics-server-6867b74b74-vxwrs" [e8b74128-b393-4f0f-90fe-e05f20d54acd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:18:00.281252  438001 system_pods.go:61] "storage-provisioner" [24766475-1a5b-4f1a-9350-3e891b5272cc] Running
	I0819 19:18:00.281260  438001 system_pods.go:74] duration metric: took 3.913361626s to wait for pod list to return data ...
	I0819 19:18:00.281267  438001 default_sa.go:34] waiting for default service account to be created ...
	I0819 19:18:00.283873  438001 default_sa.go:45] found service account: "default"
	I0819 19:18:00.283898  438001 default_sa.go:55] duration metric: took 2.625775ms for default service account to be created ...
	I0819 19:18:00.283907  438001 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 19:18:00.288985  438001 system_pods.go:86] 8 kube-system pods found
	I0819 19:18:00.289012  438001 system_pods.go:89] "coredns-6f6b679f8f-22lbt" [c8a5cabd-41d4-41cb-91c1-2db1f3471db3] Running
	I0819 19:18:00.289018  438001 system_pods.go:89] "etcd-no-preload-278232" [36d555a1-33e4-4c6c-b24e-2fee4fd84f2b] Running
	I0819 19:18:00.289022  438001 system_pods.go:89] "kube-apiserver-no-preload-278232" [af7173e5-c4ac-4ece-b8b9-bb81cb6b9bfd] Running
	I0819 19:18:00.289028  438001 system_pods.go:89] "kube-controller-manager-no-preload-278232" [2463d97a-5221-40ce-8fd7-08151165d6f7] Running
	I0819 19:18:00.289033  438001 system_pods.go:89] "kube-proxy-rcf49" [85d5814a-1ba9-46be-ab11-17bf40c0f029] Running
	I0819 19:18:00.289038  438001 system_pods.go:89] "kube-scheduler-no-preload-278232" [3b327704-f70c-4d6f-a774-15427a305472] Running
	I0819 19:18:00.289047  438001 system_pods.go:89] "metrics-server-6867b74b74-vxwrs" [e8b74128-b393-4f0f-90fe-e05f20d54acd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:18:00.289056  438001 system_pods.go:89] "storage-provisioner" [24766475-1a5b-4f1a-9350-3e891b5272cc] Running
	I0819 19:18:00.289067  438001 system_pods.go:126] duration metric: took 5.154385ms to wait for k8s-apps to be running ...
	I0819 19:18:00.289081  438001 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 19:18:00.289132  438001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:18:00.307128  438001 system_svc.go:56] duration metric: took 18.036826ms WaitForService to wait for kubelet
	I0819 19:18:00.307160  438001 kubeadm.go:582] duration metric: took 4m24.706461383s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 19:18:00.307183  438001 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:18:00.309818  438001 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:18:00.309866  438001 node_conditions.go:123] node cpu capacity is 2
	I0819 19:18:00.309879  438001 node_conditions.go:105] duration metric: took 2.691554ms to run NodePressure ...
	I0819 19:18:00.309892  438001 start.go:241] waiting for startup goroutines ...
	I0819 19:18:00.309901  438001 start.go:246] waiting for cluster config update ...
	I0819 19:18:00.309918  438001 start.go:255] writing updated cluster config ...
	I0819 19:18:00.310268  438001 ssh_runner.go:195] Run: rm -f paused
	I0819 19:18:00.366211  438001 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 19:18:00.368280  438001 out.go:177] * Done! kubectl is now configured to use "no-preload-278232" cluster and "default" namespace by default
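	(The readiness checks above — kube-system pods, default service account, kubelet service, node conditions — can be re-run by hand against the same cluster. A minimal sketch, assuming the "no-preload-278232" profile and kubeconfig context from this run are still available locally; the exact flags are illustrative, not part of the test:)
		kubectl --context no-preload-278232 get pods -n kube-system            # system pods, incl. metrics-server still Pending
		kubectl --context no-preload-278232 get serviceaccount default         # default service account exists
		minikube -p no-preload-278232 ssh "sudo systemctl is-active kubelet"   # kubelet service is running
		kubectl --context no-preload-278232 describe node | grep -A5 Conditions  # node pressure conditions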
	I0819 19:17:58.890611  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:17:58.890832  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:18:18.891960  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:18:18.892243  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:18:58.894609  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:18:58.894854  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:18:58.894869  438716 kubeadm.go:310] 
	I0819 19:18:58.894912  438716 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 19:18:58.894967  438716 kubeadm.go:310] 		timed out waiting for the condition
	I0819 19:18:58.894981  438716 kubeadm.go:310] 
	I0819 19:18:58.895024  438716 kubeadm.go:310] 	This error is likely caused by:
	I0819 19:18:58.895072  438716 kubeadm.go:310] 		- The kubelet is not running
	I0819 19:18:58.895344  438716 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 19:18:58.895388  438716 kubeadm.go:310] 
	I0819 19:18:58.895518  438716 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 19:18:58.895613  438716 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 19:18:58.895668  438716 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 19:18:58.895695  438716 kubeadm.go:310] 
	I0819 19:18:58.895839  438716 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 19:18:58.895959  438716 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 19:18:58.895972  438716 kubeadm.go:310] 
	I0819 19:18:58.896072  438716 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 19:18:58.896154  438716 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 19:18:58.896220  438716 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 19:18:58.896284  438716 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 19:18:58.896314  438716 kubeadm.go:310] 
	I0819 19:18:58.896819  438716 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 19:18:58.896946  438716 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 19:18:58.897028  438716 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0819 19:18:58.897193  438716 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
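	(The kubeadm failure above ends with its own troubleshooting hints. Collected in one place, a minimal sketch of those steps, to be run inside the affected VM, e.g. via minikube ssh; CONTAINERID is a placeholder exactly as in the kubeadm message:)
		sudo systemctl status kubelet                    # is the kubelet running at all?
		sudo journalctl -xeu kubelet | tail -n 100       # recent kubelet logs
		sudo curl -sSL http://localhost:10248/healthz    # the same healthz probe kubeadm was retrying
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID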
	I0819 19:18:58.897249  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 19:18:59.361073  438716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:18:59.375791  438716 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:18:59.387650  438716 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:18:59.387697  438716 kubeadm.go:157] found existing configuration files:
	
	I0819 19:18:59.387756  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 19:18:59.397345  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:18:59.397409  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:18:59.408060  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 19:18:59.417658  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:18:59.417731  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:18:59.427765  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 19:18:59.437636  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:18:59.437712  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:18:59.447506  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 19:18:59.457100  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:18:59.457165  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
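	(The config check above follows a simple pattern: grep each kubeconfig for the expected control-plane endpoint and remove the file if the check fails. A standalone sketch of that same loop — endpoint and file names taken from the log, the loop itself is only an illustration:)
		endpoint="https://control-plane.minikube.internal:8443"
		for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
		  if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f"; then
		    sudo rm -f "/etc/kubernetes/$f"   # drop configs that do not point at the expected endpoint
		  fi
		done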
	I0819 19:18:59.467185  438716 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 19:18:59.540706  438716 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 19:18:59.541005  438716 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 19:18:59.694109  438716 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 19:18:59.694238  438716 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 19:18:59.694350  438716 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 19:18:59.874268  438716 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 19:18:59.876259  438716 out.go:235]   - Generating certificates and keys ...
	I0819 19:18:59.876362  438716 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 19:18:59.876441  438716 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 19:18:59.876569  438716 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 19:18:59.876654  438716 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 19:18:59.876751  438716 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 19:18:59.876824  438716 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 19:18:59.876900  438716 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 19:18:59.877076  438716 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 19:18:59.877571  438716 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 19:18:59.877997  438716 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 19:18:59.878139  438716 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 19:18:59.878241  438716 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 19:19:00.153380  438716 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 19:19:00.359863  438716 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 19:19:00.470797  438716 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 19:19:00.590041  438716 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 19:19:00.614332  438716 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 19:19:00.615415  438716 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 19:19:00.615473  438716 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 19:19:00.756167  438716 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 19:19:00.757737  438716 out.go:235]   - Booting up control plane ...
	I0819 19:19:00.757873  438716 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 19:19:00.761484  438716 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 19:19:00.762431  438716 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 19:19:00.763241  438716 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 19:19:00.766155  438716 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 19:19:40.770166  438716 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 19:19:40.770378  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:19:40.770543  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:19:45.771352  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:19:45.771587  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:19:55.772027  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:19:55.772243  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:20:15.773008  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:20:15.773238  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:20:55.771311  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:20:55.771517  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:20:55.771530  438716 kubeadm.go:310] 
	I0819 19:20:55.771578  438716 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 19:20:55.771750  438716 kubeadm.go:310] 		timed out waiting for the condition
	I0819 19:20:55.771784  438716 kubeadm.go:310] 
	I0819 19:20:55.771845  438716 kubeadm.go:310] 	This error is likely caused by:
	I0819 19:20:55.771891  438716 kubeadm.go:310] 		- The kubelet is not running
	I0819 19:20:55.772014  438716 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 19:20:55.772027  438716 kubeadm.go:310] 
	I0819 19:20:55.772125  438716 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 19:20:55.772162  438716 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 19:20:55.772188  438716 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 19:20:55.772196  438716 kubeadm.go:310] 
	I0819 19:20:55.772272  438716 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 19:20:55.772336  438716 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 19:20:55.772343  438716 kubeadm.go:310] 
	I0819 19:20:55.772439  438716 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 19:20:55.772520  438716 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 19:20:55.772581  438716 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 19:20:55.772637  438716 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 19:20:55.772645  438716 kubeadm.go:310] 
	I0819 19:20:55.773758  438716 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 19:20:55.773880  438716 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 19:20:55.773971  438716 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0819 19:20:55.774067  438716 kubeadm.go:394] duration metric: took 7m57.361589371s to StartCluster
	I0819 19:20:55.774157  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:20:55.774243  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:20:55.818428  438716 cri.go:89] found id: ""
	I0819 19:20:55.818460  438716 logs.go:276] 0 containers: []
	W0819 19:20:55.818468  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:20:55.818475  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:20:55.818535  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:20:55.857714  438716 cri.go:89] found id: ""
	I0819 19:20:55.857747  438716 logs.go:276] 0 containers: []
	W0819 19:20:55.857758  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:20:55.857766  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:20:55.857841  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:20:55.891917  438716 cri.go:89] found id: ""
	I0819 19:20:55.891948  438716 logs.go:276] 0 containers: []
	W0819 19:20:55.891967  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:20:55.891976  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:20:55.892046  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:20:55.930608  438716 cri.go:89] found id: ""
	I0819 19:20:55.930643  438716 logs.go:276] 0 containers: []
	W0819 19:20:55.930656  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:20:55.930665  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:20:55.930734  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:20:55.966563  438716 cri.go:89] found id: ""
	I0819 19:20:55.966591  438716 logs.go:276] 0 containers: []
	W0819 19:20:55.966600  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:20:55.966607  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:20:55.966670  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:20:56.010392  438716 cri.go:89] found id: ""
	I0819 19:20:56.010421  438716 logs.go:276] 0 containers: []
	W0819 19:20:56.010430  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:20:56.010436  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:20:56.010491  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:20:56.066940  438716 cri.go:89] found id: ""
	I0819 19:20:56.066973  438716 logs.go:276] 0 containers: []
	W0819 19:20:56.066985  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:20:56.066994  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:20:56.067062  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:20:56.118852  438716 cri.go:89] found id: ""
	I0819 19:20:56.118881  438716 logs.go:276] 0 containers: []
	W0819 19:20:56.118894  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:20:56.118909  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:20:56.118925  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:20:56.158224  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:20:56.158263  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:20:56.211882  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:20:56.211925  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:20:56.228082  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:20:56.228124  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:20:56.307857  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:20:56.307880  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:20:56.307893  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0819 19:20:56.414797  438716 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0819 19:20:56.414885  438716 out.go:270] * 
	W0819 19:20:56.415020  438716 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 19:20:56.415039  438716 out.go:270] * 
	W0819 19:20:56.416031  438716 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 19:20:56.419869  438716 out.go:201] 
	W0819 19:20:56.421262  438716 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 19:20:56.421319  438716 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0819 19:20:56.421351  438716 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0819 19:20:56.422942  438716 out.go:201] 
	
	
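The start failure above exits with K8S_KUBELET_NOT_RUNNING: kubeadm's wait-control-plane phase times out because the kubelet never answers http://localhost:10248/healthz. The log's own advice is to inspect the kubelet journal, list any control-plane containers the runtime started, and retry with a systemd cgroup driver. A minimal sketch of that follow-up, assuming shell access to the node; the profile placeholder is an assumption, not part of this run:

	# Check why the kubelet never became healthy (commands quoted from the advice above)
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet

	# List control-plane containers the runtime may have started, per the kubeadm hint
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	# Retry with the cgroup driver the suggestion names; <profile> is a placeholder
	minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
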
	==> CRI-O <==
	Aug 19 19:26:27 default-k8s-diff-port-982795 crio[730]: time="2024-08-19 19:26:27.707986222Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095587707956364,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b086ff63-6796-48e0-a8d4-6e65af575694 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:26:27 default-k8s-diff-port-982795 crio[730]: time="2024-08-19 19:26:27.708942980Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1e50dca3-03f5-4ffb-87f6-771a82f42c20 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:26:27 default-k8s-diff-port-982795 crio[730]: time="2024-08-19 19:26:27.708987137Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1e50dca3-03f5-4ffb-87f6-771a82f42c20 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:26:27 default-k8s-diff-port-982795 crio[730]: time="2024-08-19 19:26:27.709160733Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:969ba38e33a57295fe0ae35077eb098948d9ad14a5eadeb75d70d2df5295289a,PodSandboxId:9fc5843fbb153651155598b90a297dc31af3ac7da5cf76946dc7bafad7908fda,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724095036707990631,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23fcea86-977e-4eb1-9e5a-23d6bdfb09c0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b9401ae3bfc5674655e1d13bb7496bd41ccca20b8d278eab6f128e796427c95,PodSandboxId:4d054d1fbff16b77f9957a639a00f5eafaf828414580bbd6e0930987680b80d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724095036066638960,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-tlxtt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 150ac4be-bef1-4f0a-ab16-f085284686cb,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74c639aa1e86b4636654569e0285a63b33a3e00dc9fe1f174401d1f5b786fa6f,PodSandboxId:43809f9e43e622bfc096fb062f9c820f2a9a4133e01e9834ac65acb2d5baa7c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724095036114813559,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-845gx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 95155dd2-d46c-4445-b735-26eae16aaff9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd4382f412f381b51dabe019ff226d5c821e8a8ff170e0870e36423dfba1070,PodSandboxId:33766eb0695bf1448d52e8380ba516bb7cbfb3623c2af674317df9995a9a5c26,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1724095035417699307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2v4hk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 042d5d54-6557-4d8e-8f4e-2d56e95882ce,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ad4e1a87c8dd92fb96e88c93c58df7a4370d5c0378707ef83c3589ab5634291,PodSandboxId:aa68f2ac4de4e6586a17408820a3721534fbf7afa250e286b33d905c8a9e553b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724095024442132437,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-982795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20a29fea437a40035b4f2101b4f2c4a4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d64840f8fd90aeb0fa49f228ebece3532df4e5564f7e40e26bb86d9aaf6dcfb5,PodSandboxId:f89efc21d3dcdf6467aeda97c1fac841e1763c22eb314cd0e76f0a71c72347aa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724095024382648303,Labels:map[string]string{io.kub
ernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-982795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2cf4c315e3b88d41e6dc986691274f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:494eae14eb51724902d43998d3f810f2370cd662ade302b988430c49a2785885,PodSandboxId:e7a2601c52192cf94a970f9f4944af034bf847d70d4801df7da08e0c655f46b9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724095024336594430,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-982795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a253c1469ea0e730b1065f9d733602a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2f82cdbdd75559bad4071795545a9340bffc837ab2b039e2b7b11e799cd2c1d,PodSandboxId:285a74dbebcdb93622c6bc3448972534eb4080f3defc38febd351dc052763d9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724095024346517617,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-982795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b56b6e9c850523092949a2b7ecd02a24,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30d8daf89a4b14fad81c80fd2205c514469407210646e4b9675cfa492e267324,PodSandboxId:b92379252bea8c871f830691bc4109470cd1db1d675e62e2fa3efc30511bc314,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724094737429475109,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-982795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2cf4c315e3b88d41e6dc986691274f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1e50dca3-03f5-4ffb-87f6-771a82f42c20 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:26:27 default-k8s-diff-port-982795 crio[730]: time="2024-08-19 19:26:27.754361028Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bd9d7a7e-3d31-4a55-b6df-e1189ced6355 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:26:27 default-k8s-diff-port-982795 crio[730]: time="2024-08-19 19:26:27.754441906Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bd9d7a7e-3d31-4a55-b6df-e1189ced6355 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:26:27 default-k8s-diff-port-982795 crio[730]: time="2024-08-19 19:26:27.755573725Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d7af7eb0-7429-4a9a-83c7-4102dd333921 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:26:27 default-k8s-diff-port-982795 crio[730]: time="2024-08-19 19:26:27.755975002Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095587755952047,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d7af7eb0-7429-4a9a-83c7-4102dd333921 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:26:27 default-k8s-diff-port-982795 crio[730]: time="2024-08-19 19:26:27.756512857Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4281fb72-ffc9-4409-8429-2f9567a9919e name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:26:27 default-k8s-diff-port-982795 crio[730]: time="2024-08-19 19:26:27.756584614Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4281fb72-ffc9-4409-8429-2f9567a9919e name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:26:27 default-k8s-diff-port-982795 crio[730]: time="2024-08-19 19:26:27.756787481Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:969ba38e33a57295fe0ae35077eb098948d9ad14a5eadeb75d70d2df5295289a,PodSandboxId:9fc5843fbb153651155598b90a297dc31af3ac7da5cf76946dc7bafad7908fda,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724095036707990631,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23fcea86-977e-4eb1-9e5a-23d6bdfb09c0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b9401ae3bfc5674655e1d13bb7496bd41ccca20b8d278eab6f128e796427c95,PodSandboxId:4d054d1fbff16b77f9957a639a00f5eafaf828414580bbd6e0930987680b80d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724095036066638960,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-tlxtt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 150ac4be-bef1-4f0a-ab16-f085284686cb,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74c639aa1e86b4636654569e0285a63b33a3e00dc9fe1f174401d1f5b786fa6f,PodSandboxId:43809f9e43e622bfc096fb062f9c820f2a9a4133e01e9834ac65acb2d5baa7c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724095036114813559,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-845gx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 95155dd2-d46c-4445-b735-26eae16aaff9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd4382f412f381b51dabe019ff226d5c821e8a8ff170e0870e36423dfba1070,PodSandboxId:33766eb0695bf1448d52e8380ba516bb7cbfb3623c2af674317df9995a9a5c26,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1724095035417699307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2v4hk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 042d5d54-6557-4d8e-8f4e-2d56e95882ce,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ad4e1a87c8dd92fb96e88c93c58df7a4370d5c0378707ef83c3589ab5634291,PodSandboxId:aa68f2ac4de4e6586a17408820a3721534fbf7afa250e286b33d905c8a9e553b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724095024442132437,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-982795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20a29fea437a40035b4f2101b4f2c4a4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d64840f8fd90aeb0fa49f228ebece3532df4e5564f7e40e26bb86d9aaf6dcfb5,PodSandboxId:f89efc21d3dcdf6467aeda97c1fac841e1763c22eb314cd0e76f0a71c72347aa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724095024382648303,Labels:map[string]string{io.kub
ernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-982795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2cf4c315e3b88d41e6dc986691274f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:494eae14eb51724902d43998d3f810f2370cd662ade302b988430c49a2785885,PodSandboxId:e7a2601c52192cf94a970f9f4944af034bf847d70d4801df7da08e0c655f46b9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724095024336594430,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-982795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a253c1469ea0e730b1065f9d733602a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2f82cdbdd75559bad4071795545a9340bffc837ab2b039e2b7b11e799cd2c1d,PodSandboxId:285a74dbebcdb93622c6bc3448972534eb4080f3defc38febd351dc052763d9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724095024346517617,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-982795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b56b6e9c850523092949a2b7ecd02a24,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30d8daf89a4b14fad81c80fd2205c514469407210646e4b9675cfa492e267324,PodSandboxId:b92379252bea8c871f830691bc4109470cd1db1d675e62e2fa3efc30511bc314,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724094737429475109,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-982795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2cf4c315e3b88d41e6dc986691274f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4281fb72-ffc9-4409-8429-2f9567a9919e name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:26:27 default-k8s-diff-port-982795 crio[730]: time="2024-08-19 19:26:27.796413185Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7eed4a6b-e7e8-486b-843a-4471d586d306 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:26:27 default-k8s-diff-port-982795 crio[730]: time="2024-08-19 19:26:27.796508861Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7eed4a6b-e7e8-486b-843a-4471d586d306 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:26:27 default-k8s-diff-port-982795 crio[730]: time="2024-08-19 19:26:27.797452438Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a0e610d7-8fd4-4f5f-b661-747ece7cb6bf name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:26:27 default-k8s-diff-port-982795 crio[730]: time="2024-08-19 19:26:27.797833651Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095587797813704,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a0e610d7-8fd4-4f5f-b661-747ece7cb6bf name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:26:27 default-k8s-diff-port-982795 crio[730]: time="2024-08-19 19:26:27.798489612Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5175026a-0257-482e-a6ee-c1fcc25d1982 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:26:27 default-k8s-diff-port-982795 crio[730]: time="2024-08-19 19:26:27.798590344Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5175026a-0257-482e-a6ee-c1fcc25d1982 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:26:27 default-k8s-diff-port-982795 crio[730]: time="2024-08-19 19:26:27.798857050Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:969ba38e33a57295fe0ae35077eb098948d9ad14a5eadeb75d70d2df5295289a,PodSandboxId:9fc5843fbb153651155598b90a297dc31af3ac7da5cf76946dc7bafad7908fda,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724095036707990631,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23fcea86-977e-4eb1-9e5a-23d6bdfb09c0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b9401ae3bfc5674655e1d13bb7496bd41ccca20b8d278eab6f128e796427c95,PodSandboxId:4d054d1fbff16b77f9957a639a00f5eafaf828414580bbd6e0930987680b80d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724095036066638960,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-tlxtt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 150ac4be-bef1-4f0a-ab16-f085284686cb,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74c639aa1e86b4636654569e0285a63b33a3e00dc9fe1f174401d1f5b786fa6f,PodSandboxId:43809f9e43e622bfc096fb062f9c820f2a9a4133e01e9834ac65acb2d5baa7c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724095036114813559,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-845gx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 95155dd2-d46c-4445-b735-26eae16aaff9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd4382f412f381b51dabe019ff226d5c821e8a8ff170e0870e36423dfba1070,PodSandboxId:33766eb0695bf1448d52e8380ba516bb7cbfb3623c2af674317df9995a9a5c26,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1724095035417699307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2v4hk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 042d5d54-6557-4d8e-8f4e-2d56e95882ce,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ad4e1a87c8dd92fb96e88c93c58df7a4370d5c0378707ef83c3589ab5634291,PodSandboxId:aa68f2ac4de4e6586a17408820a3721534fbf7afa250e286b33d905c8a9e553b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724095024442132437,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-982795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20a29fea437a40035b4f2101b4f2c4a4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d64840f8fd90aeb0fa49f228ebece3532df4e5564f7e40e26bb86d9aaf6dcfb5,PodSandboxId:f89efc21d3dcdf6467aeda97c1fac841e1763c22eb314cd0e76f0a71c72347aa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724095024382648303,Labels:map[string]string{io.kub
ernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-982795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2cf4c315e3b88d41e6dc986691274f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:494eae14eb51724902d43998d3f810f2370cd662ade302b988430c49a2785885,PodSandboxId:e7a2601c52192cf94a970f9f4944af034bf847d70d4801df7da08e0c655f46b9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724095024336594430,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-982795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a253c1469ea0e730b1065f9d733602a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2f82cdbdd75559bad4071795545a9340bffc837ab2b039e2b7b11e799cd2c1d,PodSandboxId:285a74dbebcdb93622c6bc3448972534eb4080f3defc38febd351dc052763d9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724095024346517617,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-982795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b56b6e9c850523092949a2b7ecd02a24,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30d8daf89a4b14fad81c80fd2205c514469407210646e4b9675cfa492e267324,PodSandboxId:b92379252bea8c871f830691bc4109470cd1db1d675e62e2fa3efc30511bc314,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724094737429475109,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-982795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2cf4c315e3b88d41e6dc986691274f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5175026a-0257-482e-a6ee-c1fcc25d1982 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:26:27 default-k8s-diff-port-982795 crio[730]: time="2024-08-19 19:26:27.839361258Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=32ffb18f-3a33-4dfa-9163-924882883bab name=/runtime.v1.RuntimeService/Version
	Aug 19 19:26:27 default-k8s-diff-port-982795 crio[730]: time="2024-08-19 19:26:27.839459329Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=32ffb18f-3a33-4dfa-9163-924882883bab name=/runtime.v1.RuntimeService/Version
	Aug 19 19:26:27 default-k8s-diff-port-982795 crio[730]: time="2024-08-19 19:26:27.840450682Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=596af0e7-a304-4170-a3ff-e8e6192fceb4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:26:27 default-k8s-diff-port-982795 crio[730]: time="2024-08-19 19:26:27.840872283Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095587840850529,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=596af0e7-a304-4170-a3ff-e8e6192fceb4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:26:27 default-k8s-diff-port-982795 crio[730]: time="2024-08-19 19:26:27.841599270Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9ed3d29c-4597-4068-b1b1-248a28195890 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:26:27 default-k8s-diff-port-982795 crio[730]: time="2024-08-19 19:26:27.841673347Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9ed3d29c-4597-4068-b1b1-248a28195890 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:26:27 default-k8s-diff-port-982795 crio[730]: time="2024-08-19 19:26:27.841865873Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:969ba38e33a57295fe0ae35077eb098948d9ad14a5eadeb75d70d2df5295289a,PodSandboxId:9fc5843fbb153651155598b90a297dc31af3ac7da5cf76946dc7bafad7908fda,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724095036707990631,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23fcea86-977e-4eb1-9e5a-23d6bdfb09c0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b9401ae3bfc5674655e1d13bb7496bd41ccca20b8d278eab6f128e796427c95,PodSandboxId:4d054d1fbff16b77f9957a639a00f5eafaf828414580bbd6e0930987680b80d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724095036066638960,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-tlxtt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 150ac4be-bef1-4f0a-ab16-f085284686cb,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74c639aa1e86b4636654569e0285a63b33a3e00dc9fe1f174401d1f5b786fa6f,PodSandboxId:43809f9e43e622bfc096fb062f9c820f2a9a4133e01e9834ac65acb2d5baa7c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724095036114813559,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-845gx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 95155dd2-d46c-4445-b735-26eae16aaff9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd4382f412f381b51dabe019ff226d5c821e8a8ff170e0870e36423dfba1070,PodSandboxId:33766eb0695bf1448d52e8380ba516bb7cbfb3623c2af674317df9995a9a5c26,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1724095035417699307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2v4hk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 042d5d54-6557-4d8e-8f4e-2d56e95882ce,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ad4e1a87c8dd92fb96e88c93c58df7a4370d5c0378707ef83c3589ab5634291,PodSandboxId:aa68f2ac4de4e6586a17408820a3721534fbf7afa250e286b33d905c8a9e553b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724095024442132437,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-982795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20a29fea437a40035b4f2101b4f2c4a4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d64840f8fd90aeb0fa49f228ebece3532df4e5564f7e40e26bb86d9aaf6dcfb5,PodSandboxId:f89efc21d3dcdf6467aeda97c1fac841e1763c22eb314cd0e76f0a71c72347aa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724095024382648303,Labels:map[string]string{io.kub
ernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-982795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2cf4c315e3b88d41e6dc986691274f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:494eae14eb51724902d43998d3f810f2370cd662ade302b988430c49a2785885,PodSandboxId:e7a2601c52192cf94a970f9f4944af034bf847d70d4801df7da08e0c655f46b9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724095024336594430,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-982795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a253c1469ea0e730b1065f9d733602a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2f82cdbdd75559bad4071795545a9340bffc837ab2b039e2b7b11e799cd2c1d,PodSandboxId:285a74dbebcdb93622c6bc3448972534eb4080f3defc38febd351dc052763d9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724095024346517617,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-982795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b56b6e9c850523092949a2b7ecd02a24,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30d8daf89a4b14fad81c80fd2205c514469407210646e4b9675cfa492e267324,PodSandboxId:b92379252bea8c871f830691bc4109470cd1db1d675e62e2fa3efc30511bc314,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724094737429475109,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-982795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2cf4c315e3b88d41e6dc986691274f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9ed3d29c-4597-4068-b1b1-248a28195890 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	969ba38e33a57       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   9fc5843fbb153       storage-provisioner
	74c639aa1e86b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   43809f9e43e62       coredns-6f6b679f8f-845gx
	8b9401ae3bfc5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   4d054d1fbff16       coredns-6f6b679f8f-tlxtt
	5fd4382f412f3       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   9 minutes ago       Running             kube-proxy                0                   33766eb0695bf       kube-proxy-2v4hk
	0ad4e1a87c8dd       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   aa68f2ac4de4e       etcd-default-k8s-diff-port-982795
	d64840f8fd90a       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   9 minutes ago       Running             kube-apiserver            2                   f89efc21d3dcd       kube-apiserver-default-k8s-diff-port-982795
	a2f82cdbdd755       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   9 minutes ago       Running             kube-scheduler            2                   285a74dbebcdb       kube-scheduler-default-k8s-diff-port-982795
	494eae14eb517       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   9 minutes ago       Running             kube-controller-manager   2                   e7a2601c52192       kube-controller-manager-default-k8s-diff-port-982795
	30d8daf89a4b1       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   14 minutes ago      Exited              kube-apiserver            1                   b92379252bea8       kube-apiserver-default-k8s-diff-port-982795
	
	
	==> coredns [74c639aa1e86b4636654569e0285a63b33a3e00dc9fe1f174401d1f5b786fa6f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [8b9401ae3bfc5674655e1d13bb7496bd41ccca20b8d278eab6f128e796427c95] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-982795
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-982795
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411
	                    minikube.k8s.io/name=default-k8s-diff-port-982795
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T19_17_10_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 19:17:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-982795
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 19:26:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 19:22:25 +0000   Mon, 19 Aug 2024 19:17:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 19:22:25 +0000   Mon, 19 Aug 2024 19:17:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 19:22:25 +0000   Mon, 19 Aug 2024 19:17:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 19:22:25 +0000   Mon, 19 Aug 2024 19:17:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.48
	  Hostname:    default-k8s-diff-port-982795
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5fe42ac5581841238013e0b5a8d735d5
	  System UUID:                5fe42ac5-5818-4123-8013-e0b5a8d735d5
	  Boot ID:                    0ef2d057-cbc7-4e03-9f12-efbb79dcf255
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-845gx                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m14s
	  kube-system                 coredns-6f6b679f8f-tlxtt                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m14s
	  kube-system                 etcd-default-k8s-diff-port-982795                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m19s
	  kube-system                 kube-apiserver-default-k8s-diff-port-982795             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-982795    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-proxy-2v4hk                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 kube-scheduler-default-k8s-diff-port-982795             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 metrics-server-6867b74b74-2dp5r                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m12s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m11s  kube-proxy       
	  Normal  Starting                 9m19s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m19s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m19s  kubelet          Node default-k8s-diff-port-982795 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m19s  kubelet          Node default-k8s-diff-port-982795 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m19s  kubelet          Node default-k8s-diff-port-982795 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m15s  node-controller  Node default-k8s-diff-port-982795 event: Registered Node default-k8s-diff-port-982795 in Controller
	
	
	==> dmesg <==
	[  +0.050691] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040093] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.788295] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.554258] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[Aug19 19:12] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.614118] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.059749] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065134] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +0.166021] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +0.130130] systemd-fstab-generator[684]: Ignoring "noauto" option for root device
	[  +0.311198] systemd-fstab-generator[713]: Ignoring "noauto" option for root device
	[  +4.251324] systemd-fstab-generator[811]: Ignoring "noauto" option for root device
	[  +0.070295] kauditd_printk_skb: 148 callbacks suppressed
	[  +2.297766] systemd-fstab-generator[931]: Ignoring "noauto" option for root device
	[  +4.589476] kauditd_printk_skb: 79 callbacks suppressed
	[  +6.948777] kauditd_printk_skb: 85 callbacks suppressed
	[Aug19 19:17] systemd-fstab-generator[2590]: Ignoring "noauto" option for root device
	[  +0.062731] kauditd_printk_skb: 8 callbacks suppressed
	[  +6.012148] systemd-fstab-generator[2912]: Ignoring "noauto" option for root device
	[  +0.076737] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.338268] systemd-fstab-generator[3044]: Ignoring "noauto" option for root device
	[  +0.127902] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.851934] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [0ad4e1a87c8dd92fb96e88c93c58df7a4370d5c0378707ef83c3589ab5634291] <==
	{"level":"info","ts":"2024-08-19T19:17:04.827036Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-19T19:17:04.827354Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"f76d6fbad492a1d6","initial-advertise-peer-urls":["https://192.168.61.48:2380"],"listen-peer-urls":["https://192.168.61.48:2380"],"advertise-client-urls":["https://192.168.61.48:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.48:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-19T19:17:04.827423Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-19T19:17:04.827652Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.48:2380"}
	{"level":"info","ts":"2024-08-19T19:17:04.827743Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.48:2380"}
	{"level":"info","ts":"2024-08-19T19:17:05.444398Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f76d6fbad492a1d6 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-19T19:17:05.444450Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f76d6fbad492a1d6 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-19T19:17:05.444483Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f76d6fbad492a1d6 received MsgPreVoteResp from f76d6fbad492a1d6 at term 1"}
	{"level":"info","ts":"2024-08-19T19:17:05.444504Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f76d6fbad492a1d6 became candidate at term 2"}
	{"level":"info","ts":"2024-08-19T19:17:05.444512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f76d6fbad492a1d6 received MsgVoteResp from f76d6fbad492a1d6 at term 2"}
	{"level":"info","ts":"2024-08-19T19:17:05.444525Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f76d6fbad492a1d6 became leader at term 2"}
	{"level":"info","ts":"2024-08-19T19:17:05.444556Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f76d6fbad492a1d6 elected leader f76d6fbad492a1d6 at term 2"}
	{"level":"info","ts":"2024-08-19T19:17:05.448430Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T19:17:05.451011Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"f76d6fbad492a1d6","local-member-attributes":"{Name:default-k8s-diff-port-982795 ClientURLs:[https://192.168.61.48:2379]}","request-path":"/0/members/f76d6fbad492a1d6/attributes","cluster-id":"6f0fba60f4785994","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-19T19:17:05.451146Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T19:17:05.451514Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T19:17:05.451674Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T19:17:05.451718Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T19:17:05.452434Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T19:17:05.453176Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.48:2379"}
	{"level":"info","ts":"2024-08-19T19:17:05.453412Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f0fba60f4785994","local-member-id":"f76d6fbad492a1d6","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T19:17:05.453559Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T19:17:05.453602Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T19:17:05.454000Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T19:17:05.454760Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 19:26:28 up 14 min,  0 users,  load average: 0.19, 0.14, 0.11
	Linux default-k8s-diff-port-982795 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [30d8daf89a4b14fad81c80fd2205c514469407210646e4b9675cfa492e267324] <==
	W0819 19:16:57.273761       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:16:57.291939       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:16:57.327531       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:16:57.359369       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:16:57.420464       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:16:57.440951       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:16:57.473158       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:16:57.513524       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:16:57.515031       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:16:57.566601       1 logging.go:55] [core] [Channel #2 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:16:57.635403       1 logging.go:55] [core] [Channel #9 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:16:57.660378       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:16:57.719013       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:16:57.783024       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:16:57.833956       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:16:57.874545       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:16:57.947765       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:16:58.083189       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:16:58.160783       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:16:58.167264       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:16:58.312596       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:16:58.325773       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:16:58.460880       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:17:01.656387       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:17:01.978055       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [d64840f8fd90aeb0fa49f228ebece3532df4e5564f7e40e26bb86d9aaf6dcfb5] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0819 19:22:08.056600       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 19:22:08.056928       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0819 19:22:08.058045       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 19:22:08.058114       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0819 19:23:08.058414       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 19:23:08.058556       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0819 19:23:08.058660       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 19:23:08.058702       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0819 19:23:08.059731       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 19:23:08.059743       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0819 19:25:08.060579       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 19:25:08.060670       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0819 19:25:08.060905       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 19:25:08.061043       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0819 19:25:08.061885       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 19:25:08.063047       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [494eae14eb51724902d43998d3f810f2370cd662ade302b988430c49a2785885] <==
	E0819 19:21:14.046525       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:21:14.493711       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 19:21:44.052474       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:21:44.501473       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 19:22:14.059537       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:22:14.510795       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0819 19:22:25.825613       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-982795"
	E0819 19:22:44.066886       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:22:44.518721       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 19:23:14.072684       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:23:14.526409       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0819 19:23:23.588779       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="239.843µs"
	I0819 19:23:37.587133       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="88.799µs"
	E0819 19:23:44.078610       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:23:44.534867       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 19:24:14.087996       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:24:14.545403       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 19:24:44.093944       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:24:44.553419       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 19:25:14.101454       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:25:14.562631       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 19:25:44.107423       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:25:44.573083       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 19:26:14.114409       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:26:14.581176       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [5fd4382f412f381b51dabe019ff226d5c821e8a8ff170e0870e36423dfba1070] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 19:17:16.067455       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 19:17:16.115926       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.48"]
	E0819 19:17:16.116011       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 19:17:16.392901       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 19:17:16.396726       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 19:17:16.396879       1 server_linux.go:169] "Using iptables Proxier"
	I0819 19:17:16.405847       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 19:17:16.406223       1 server.go:483] "Version info" version="v1.31.0"
	I0819 19:17:16.406281       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 19:17:16.417002       1 config.go:197] "Starting service config controller"
	I0819 19:17:16.417200       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 19:17:16.417350       1 config.go:104] "Starting endpoint slice config controller"
	I0819 19:17:16.417377       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 19:17:16.420542       1 config.go:326] "Starting node config controller"
	I0819 19:17:16.420654       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 19:17:16.526471       1 shared_informer.go:320] Caches are synced for node config
	I0819 19:17:16.526527       1 shared_informer.go:320] Caches are synced for service config
	I0819 19:17:16.526571       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [a2f82cdbdd75559bad4071795545a9340bffc837ab2b039e2b7b11e799cd2c1d] <==
	E0819 19:17:07.115471       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0819 19:17:07.115651       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 19:17:07.110696       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 19:17:07.115689       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 19:17:07.110885       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 19:17:07.115777       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0819 19:17:07.936235       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 19:17:07.936328       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 19:17:08.049412       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 19:17:08.049642       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0819 19:17:08.106830       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 19:17:08.106884       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 19:17:08.161548       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 19:17:08.161603       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 19:17:08.183931       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 19:17:08.183964       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 19:17:08.204353       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 19:17:08.204415       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0819 19:17:08.240854       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 19:17:08.240922       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 19:17:08.297136       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 19:17:08.297233       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 19:17:08.328797       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 19:17:08.328863       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0819 19:17:09.892179       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 19:25:19 default-k8s-diff-port-982795 kubelet[2919]: E0819 19:25:19.749498    2919 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095519749238643,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:25:19 default-k8s-diff-port-982795 kubelet[2919]: E0819 19:25:19.749544    2919 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095519749238643,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:25:21 default-k8s-diff-port-982795 kubelet[2919]: E0819 19:25:21.572369    2919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2dp5r" podUID="04e0ce68-d9a2-426a-a0e9-47f6f7867efd"
	Aug 19 19:25:29 default-k8s-diff-port-982795 kubelet[2919]: E0819 19:25:29.750466    2919 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095529750198745,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:25:29 default-k8s-diff-port-982795 kubelet[2919]: E0819 19:25:29.750490    2919 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095529750198745,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:25:34 default-k8s-diff-port-982795 kubelet[2919]: E0819 19:25:34.571235    2919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2dp5r" podUID="04e0ce68-d9a2-426a-a0e9-47f6f7867efd"
	Aug 19 19:25:39 default-k8s-diff-port-982795 kubelet[2919]: E0819 19:25:39.752057    2919 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095539751817717,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:25:39 default-k8s-diff-port-982795 kubelet[2919]: E0819 19:25:39.752079    2919 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095539751817717,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:25:48 default-k8s-diff-port-982795 kubelet[2919]: E0819 19:25:48.571443    2919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2dp5r" podUID="04e0ce68-d9a2-426a-a0e9-47f6f7867efd"
	Aug 19 19:25:49 default-k8s-diff-port-982795 kubelet[2919]: E0819 19:25:49.755033    2919 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095549753205198,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:25:49 default-k8s-diff-port-982795 kubelet[2919]: E0819 19:25:49.755079    2919 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095549753205198,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:25:59 default-k8s-diff-port-982795 kubelet[2919]: E0819 19:25:59.760089    2919 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095559757389401,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:25:59 default-k8s-diff-port-982795 kubelet[2919]: E0819 19:25:59.760591    2919 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095559757389401,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:26:01 default-k8s-diff-port-982795 kubelet[2919]: E0819 19:26:01.572954    2919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2dp5r" podUID="04e0ce68-d9a2-426a-a0e9-47f6f7867efd"
	Aug 19 19:26:09 default-k8s-diff-port-982795 kubelet[2919]: E0819 19:26:09.592618    2919 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 19:26:09 default-k8s-diff-port-982795 kubelet[2919]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 19:26:09 default-k8s-diff-port-982795 kubelet[2919]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 19:26:09 default-k8s-diff-port-982795 kubelet[2919]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 19:26:09 default-k8s-diff-port-982795 kubelet[2919]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 19:26:09 default-k8s-diff-port-982795 kubelet[2919]: E0819 19:26:09.762965    2919 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095569762471629,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:26:09 default-k8s-diff-port-982795 kubelet[2919]: E0819 19:26:09.762995    2919 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095569762471629,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:26:12 default-k8s-diff-port-982795 kubelet[2919]: E0819 19:26:12.572348    2919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2dp5r" podUID="04e0ce68-d9a2-426a-a0e9-47f6f7867efd"
	Aug 19 19:26:19 default-k8s-diff-port-982795 kubelet[2919]: E0819 19:26:19.764419    2919 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095579764110642,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:26:19 default-k8s-diff-port-982795 kubelet[2919]: E0819 19:26:19.764442    2919 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095579764110642,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:26:26 default-k8s-diff-port-982795 kubelet[2919]: E0819 19:26:26.573768    2919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2dp5r" podUID="04e0ce68-d9a2-426a-a0e9-47f6f7867efd"
	
	
	==> storage-provisioner [969ba38e33a57295fe0ae35077eb098948d9ad14a5eadeb75d70d2df5295289a] <==
	I0819 19:17:16.881890       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 19:17:16.904892       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 19:17:16.904949       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 19:17:16.924712       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 19:17:16.924902       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-982795_35cf21ea-e4cf-494e-9cf4-a85c0b6ad5c5!
	I0819 19:17:16.927587       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3ad7ea45-7ee9-466d-bf0b-37c20ee983b7", APIVersion:"v1", ResourceVersion:"396", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-982795_35cf21ea-e4cf-494e-9cf4-a85c0b6ad5c5 became leader
	I0819 19:17:17.025699       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-982795_35cf21ea-e4cf-494e-9cf4-a85c0b6ad5c5!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-982795 -n default-k8s-diff-port-982795
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-982795 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-2dp5r
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-982795 describe pod metrics-server-6867b74b74-2dp5r
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-982795 describe pod metrics-server-6867b74b74-2dp5r: exit status 1 (63.505718ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-2dp5r" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-982795 describe pod metrics-server-6867b74b74-2dp5r: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.41s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-024748 -n embed-certs-024748
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-19 19:26:30.620720931 +0000 UTC m=+6137.820647738
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-024748 -n embed-certs-024748
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-024748 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-024748 logs -n 25: (2.073035787s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-571803                           | enable-default-cni-571803    | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | sudo cat                                               |                              |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-571803                           | enable-default-cni-571803    | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | sudo containerd config dump                            |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-571803                           | enable-default-cni-571803    | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | sudo systemctl status crio                             |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-571803                           | enable-default-cni-571803    | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | sudo systemctl cat crio                                |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-571803                           | enable-default-cni-571803    | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |         |                     |                     |
	|         | \;                                                     |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-571803                           | enable-default-cni-571803    | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | sudo crio config                                       |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-571803                           | enable-default-cni-571803    | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	| delete  | -p                                                     | disable-driver-mounts-737091 | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | disable-driver-mounts-737091                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-982795 | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:04 UTC |
	|         | default-k8s-diff-port-982795                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-278232             | no-preload-278232            | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC | 19 Aug 24 19:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-278232                                   | no-preload-278232            | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-982795  | default-k8s-diff-port-982795 | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC | 19 Aug 24 19:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-982795 | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC |                     |
	|         | default-k8s-diff-port-982795                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-024748            | embed-certs-024748           | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC | 19 Aug 24 19:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-024748                                  | embed-certs-024748           | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-104669        | old-k8s-version-104669       | jenkins | v1.33.1 | 19 Aug 24 19:06 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-278232                  | no-preload-278232            | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-278232                                   | no-preload-278232            | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC | 19 Aug 24 19:18 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-982795       | default-k8s-diff-port-982795 | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-024748                 | embed-certs-024748           | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-982795 | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC | 19 Aug 24 19:17 UTC |
	|         | default-k8s-diff-port-982795                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-024748                                  | embed-certs-024748           | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC | 19 Aug 24 19:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-104669                              | old-k8s-version-104669       | jenkins | v1.33.1 | 19 Aug 24 19:08 UTC | 19 Aug 24 19:08 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-104669             | old-k8s-version-104669       | jenkins | v1.33.1 | 19 Aug 24 19:08 UTC | 19 Aug 24 19:08 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-104669                              | old-k8s-version-104669       | jenkins | v1.33.1 | 19 Aug 24 19:08 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 19:08:30
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 19:08:30.532545  438716 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:08:30.532649  438716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:08:30.532657  438716 out.go:358] Setting ErrFile to fd 2...
	I0819 19:08:30.532661  438716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:08:30.532811  438716 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-372744/.minikube/bin
	I0819 19:08:30.533379  438716 out.go:352] Setting JSON to false
	I0819 19:08:30.534373  438716 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10253,"bootTime":1724084257,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 19:08:30.534451  438716 start.go:139] virtualization: kvm guest
	I0819 19:08:30.536658  438716 out.go:177] * [old-k8s-version-104669] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 19:08:30.537921  438716 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 19:08:30.537959  438716 notify.go:220] Checking for updates...
	I0819 19:08:30.540501  438716 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 19:08:30.541864  438716 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 19:08:30.543170  438716 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 19:08:30.544395  438716 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 19:08:30.545614  438716 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 19:08:30.547072  438716 config.go:182] Loaded profile config "old-k8s-version-104669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0819 19:08:30.547468  438716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:08:30.547570  438716 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:08:30.563059  438716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34139
	I0819 19:08:30.563506  438716 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:08:30.564068  438716 main.go:141] libmachine: Using API Version  1
	I0819 19:08:30.564091  438716 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:08:30.564474  438716 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:08:30.564719  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:08:30.566599  438716 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0819 19:08:30.568124  438716 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 19:08:30.568503  438716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:08:30.568541  438716 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:08:30.583805  438716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35313
	I0819 19:08:30.584314  438716 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:08:30.584805  438716 main.go:141] libmachine: Using API Version  1
	I0819 19:08:30.584827  438716 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:08:30.585131  438716 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:08:30.585320  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:08:30.621020  438716 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 19:08:30.622137  438716 start.go:297] selected driver: kvm2
	I0819 19:08:30.622158  438716 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-104669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-104669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:08:30.622252  438716 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 19:08:30.622998  438716 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 19:08:30.623082  438716 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19468-372744/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 19:08:30.638616  438716 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 19:08:30.638998  438716 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 19:08:30.639047  438716 cni.go:84] Creating CNI manager for ""
	I0819 19:08:30.639059  438716 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:08:30.639097  438716 start.go:340] cluster config:
	{Name:old-k8s-version-104669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-104669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:08:30.639243  438716 iso.go:125] acquiring lock: {Name:mk4c0ac1c3202b1a296739df622960e7a0bd8566 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 19:08:30.641823  438716 out.go:177] * Starting "old-k8s-version-104669" primary control-plane node in "old-k8s-version-104669" cluster
	I0819 19:08:30.915976  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:08:30.643167  438716 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 19:08:30.643197  438716 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0819 19:08:30.643205  438716 cache.go:56] Caching tarball of preloaded images
	I0819 19:08:30.643300  438716 preload.go:172] Found /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 19:08:30.643311  438716 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0819 19:08:30.643409  438716 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/config.json ...
	I0819 19:08:30.643583  438716 start.go:360] acquireMachinesLock for old-k8s-version-104669: {Name:mk24ba67a747357e9ce40f1e460d2bb0bc59cc75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 19:08:33.988031  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:08:40.067999  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:08:43.140051  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:08:49.219991  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:08:52.292013  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:08:58.371952  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:01.444061  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:07.523958  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:10.595977  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:16.675955  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:19.748037  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:25.828064  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:28.899972  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:34.980044  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:38.052066  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:44.131960  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:47.203926  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:53.283992  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:56.355952  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:02.435994  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:05.508042  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:11.587960  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:14.660027  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:20.740007  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:23.811991  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:29.891998  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:32.963959  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:39.043942  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:42.116029  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:48.195984  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:51.267954  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:57.347922  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:00.419952  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:06.499978  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:09.572013  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:15.652066  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:18.724012  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:24.804001  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:27.875961  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:33.956046  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:37.027998  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:43.108014  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:46.179987  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:49.184190  438245 start.go:364] duration metric: took 4m21.835882225s to acquireMachinesLock for "default-k8s-diff-port-982795"
	I0819 19:11:49.184280  438245 start.go:96] Skipping create...Using existing machine configuration
	I0819 19:11:49.184296  438245 fix.go:54] fixHost starting: 
	I0819 19:11:49.184628  438245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:11:49.184661  438245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:11:49.200544  438245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38241
	I0819 19:11:49.200994  438245 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:11:49.201530  438245 main.go:141] libmachine: Using API Version  1
	I0819 19:11:49.201560  438245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:11:49.201953  438245 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:11:49.202151  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:11:49.202296  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetState
	I0819 19:11:49.203841  438245 fix.go:112] recreateIfNeeded on default-k8s-diff-port-982795: state=Stopped err=<nil>
	I0819 19:11:49.203875  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	W0819 19:11:49.204042  438245 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 19:11:49.205721  438245 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-982795" ...
	I0819 19:11:49.181717  438001 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:11:49.181755  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetMachineName
	I0819 19:11:49.182097  438001 buildroot.go:166] provisioning hostname "no-preload-278232"
	I0819 19:11:49.182131  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetMachineName
	I0819 19:11:49.182392  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:11:49.184006  438001 machine.go:96] duration metric: took 4m37.423775019s to provisionDockerMachine
	I0819 19:11:49.184078  438001 fix.go:56] duration metric: took 4m37.445408913s for fixHost
	I0819 19:11:49.184091  438001 start.go:83] releasing machines lock for "no-preload-278232", held for 4m37.44544277s
	W0819 19:11:49.184116  438001 start.go:714] error starting host: provision: host is not running
	W0819 19:11:49.184274  438001 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0819 19:11:49.184288  438001 start.go:729] Will try again in 5 seconds ...
	I0819 19:11:49.206739  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .Start
	I0819 19:11:49.206892  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Ensuring networks are active...
	I0819 19:11:49.207586  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Ensuring network default is active
	I0819 19:11:49.207947  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Ensuring network mk-default-k8s-diff-port-982795 is active
	I0819 19:11:49.208368  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Getting domain xml...
	I0819 19:11:49.209114  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Creating domain...
	I0819 19:11:50.421290  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting to get IP...
	I0819 19:11:50.422082  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:50.422490  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:50.422562  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:50.422473  439403 retry.go:31] will retry after 273.434317ms: waiting for machine to come up
	I0819 19:11:50.698167  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:50.698598  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:50.698635  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:50.698569  439403 retry.go:31] will retry after 367.841325ms: waiting for machine to come up
	I0819 19:11:51.068401  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:51.068996  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:51.069019  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:51.068942  439403 retry.go:31] will retry after 460.053559ms: waiting for machine to come up
	I0819 19:11:51.530228  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:51.530700  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:51.530730  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:51.530636  439403 retry.go:31] will retry after 498.222116ms: waiting for machine to come up
	I0819 19:11:52.030322  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:52.030771  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:52.030808  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:52.030710  439403 retry.go:31] will retry after 750.75175ms: waiting for machine to come up
	I0819 19:11:54.186765  438001 start.go:360] acquireMachinesLock for no-preload-278232: {Name:mk24ba67a747357e9ce40f1e460d2bb0bc59cc75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 19:11:52.782638  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:52.783001  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:52.783027  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:52.782952  439403 retry.go:31] will retry after 576.883195ms: waiting for machine to come up
	I0819 19:11:53.361702  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:53.362105  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:53.362138  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:53.362035  439403 retry.go:31] will retry after 900.512446ms: waiting for machine to come up
	I0819 19:11:54.264656  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:54.265032  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:54.265052  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:54.264984  439403 retry.go:31] will retry after 1.339005367s: waiting for machine to come up
	I0819 19:11:55.605816  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:55.606348  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:55.606378  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:55.606304  439403 retry.go:31] will retry after 1.517824531s: waiting for machine to come up
	I0819 19:11:57.126027  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:57.126400  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:57.126426  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:57.126340  439403 retry.go:31] will retry after 2.220939365s: waiting for machine to come up
	I0819 19:11:59.348649  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:59.349041  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:59.349072  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:59.348987  439403 retry.go:31] will retry after 2.830298687s: waiting for machine to come up
	I0819 19:12:02.182934  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:02.183398  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:12:02.183422  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:12:02.183348  439403 retry.go:31] will retry after 2.302725829s: waiting for machine to come up
	I0819 19:12:04.487648  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:04.488074  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:12:04.488108  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:12:04.488016  439403 retry.go:31] will retry after 2.932250361s: waiting for machine to come up
	I0819 19:12:08.736669  438295 start.go:364] duration metric: took 4m39.596501254s to acquireMachinesLock for "embed-certs-024748"
	I0819 19:12:08.736755  438295 start.go:96] Skipping create...Using existing machine configuration
	I0819 19:12:08.736776  438295 fix.go:54] fixHost starting: 
	I0819 19:12:08.737277  438295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:08.737326  438295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:08.754873  438295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36829
	I0819 19:12:08.755301  438295 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:08.755839  438295 main.go:141] libmachine: Using API Version  1
	I0819 19:12:08.755866  438295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:08.756184  438295 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:08.756383  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:08.756525  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetState
	I0819 19:12:08.758092  438295 fix.go:112] recreateIfNeeded on embed-certs-024748: state=Stopped err=<nil>
	I0819 19:12:08.758134  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	W0819 19:12:08.758299  438295 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 19:12:08.760922  438295 out.go:177] * Restarting existing kvm2 VM for "embed-certs-024748" ...
	I0819 19:12:08.762335  438295 main.go:141] libmachine: (embed-certs-024748) Calling .Start
	I0819 19:12:08.762509  438295 main.go:141] libmachine: (embed-certs-024748) Ensuring networks are active...
	I0819 19:12:08.763274  438295 main.go:141] libmachine: (embed-certs-024748) Ensuring network default is active
	I0819 19:12:08.763647  438295 main.go:141] libmachine: (embed-certs-024748) Ensuring network mk-embed-certs-024748 is active
	I0819 19:12:08.764057  438295 main.go:141] libmachine: (embed-certs-024748) Getting domain xml...
	I0819 19:12:08.764765  438295 main.go:141] libmachine: (embed-certs-024748) Creating domain...
	I0819 19:12:07.424132  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.424589  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Found IP for machine: 192.168.61.48
	I0819 19:12:07.424615  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Reserving static IP address...
	I0819 19:12:07.424634  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has current primary IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.425178  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Reserved static IP address: 192.168.61.48
	I0819 19:12:07.425205  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for SSH to be available...
	I0819 19:12:07.425237  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-982795", mac: "52:54:00:d4:19:cd", ip: "192.168.61.48"} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:07.425283  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | skip adding static IP to network mk-default-k8s-diff-port-982795 - found existing host DHCP lease matching {name: "default-k8s-diff-port-982795", mac: "52:54:00:d4:19:cd", ip: "192.168.61.48"}
	I0819 19:12:07.425304  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | Getting to WaitForSSH function...
	I0819 19:12:07.427600  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.427969  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:07.428001  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.428179  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | Using SSH client type: external
	I0819 19:12:07.428245  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | Using SSH private key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa (-rw-------)
	I0819 19:12:07.428297  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.48 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 19:12:07.428321  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | About to run SSH command:
	I0819 19:12:07.428339  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | exit 0
	I0819 19:12:07.547727  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | SSH cmd err, output: <nil>: 
	I0819 19:12:07.548095  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetConfigRaw
	I0819 19:12:07.548741  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetIP
	I0819 19:12:07.551308  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.551700  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:07.551733  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.551967  438245 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795/config.json ...
	I0819 19:12:07.552164  438245 machine.go:93] provisionDockerMachine start ...
	I0819 19:12:07.552186  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:12:07.552427  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:07.554782  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.555062  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:07.555080  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.555219  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:07.555427  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:07.555586  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:07.555767  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:07.555912  438245 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:07.556152  438245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0819 19:12:07.556168  438245 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 19:12:07.655996  438245 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 19:12:07.656027  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetMachineName
	I0819 19:12:07.656301  438245 buildroot.go:166] provisioning hostname "default-k8s-diff-port-982795"
	I0819 19:12:07.656329  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetMachineName
	I0819 19:12:07.656530  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:07.658956  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.659311  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:07.659344  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.659439  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:07.659617  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:07.659813  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:07.659937  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:07.660112  438245 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:07.660291  438245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0819 19:12:07.660302  438245 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-982795 && echo "default-k8s-diff-port-982795" | sudo tee /etc/hostname
	I0819 19:12:07.773590  438245 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-982795
	
	I0819 19:12:07.773615  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:07.776994  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.777360  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:07.777399  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.777580  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:07.777860  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:07.778060  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:07.778273  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:07.778457  438245 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:07.778665  438245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0819 19:12:07.778687  438245 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-982795' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-982795/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-982795' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 19:12:07.884662  438245 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:12:07.884718  438245 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19468-372744/.minikube CaCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19468-372744/.minikube}
	I0819 19:12:07.884751  438245 buildroot.go:174] setting up certificates
	I0819 19:12:07.884768  438245 provision.go:84] configureAuth start
	I0819 19:12:07.884782  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetMachineName
	I0819 19:12:07.885101  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetIP
	I0819 19:12:07.887844  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.888262  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:07.888293  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.888439  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:07.890581  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.890977  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:07.891005  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.891136  438245 provision.go:143] copyHostCerts
	I0819 19:12:07.891219  438245 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem, removing ...
	I0819 19:12:07.891240  438245 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 19:12:07.891306  438245 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem (1082 bytes)
	I0819 19:12:07.891398  438245 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem, removing ...
	I0819 19:12:07.891406  438245 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 19:12:07.891430  438245 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem (1123 bytes)
	I0819 19:12:07.891487  438245 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem, removing ...
	I0819 19:12:07.891494  438245 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 19:12:07.891517  438245 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem (1675 bytes)
	I0819 19:12:07.891570  438245 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-982795 san=[127.0.0.1 192.168.61.48 default-k8s-diff-port-982795 localhost minikube]
	I0819 19:12:08.083963  438245 provision.go:177] copyRemoteCerts
	I0819 19:12:08.084024  438245 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 19:12:08.084086  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:08.086637  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.086961  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:08.087005  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.087144  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:08.087357  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:08.087507  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:08.087694  438245 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa Username:docker}
	I0819 19:12:08.166312  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 19:12:08.194124  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0819 19:12:08.221817  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 19:12:08.249674  438245 provision.go:87] duration metric: took 364.885827ms to configureAuth
	I0819 19:12:08.249709  438245 buildroot.go:189] setting minikube options for container-runtime
	I0819 19:12:08.249891  438245 config.go:182] Loaded profile config "default-k8s-diff-port-982795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:12:08.249983  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:08.253045  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.253438  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:08.253469  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.253647  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:08.253856  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:08.254071  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:08.254266  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:08.254481  438245 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:08.254700  438245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0819 19:12:08.254722  438245 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 19:12:08.508775  438245 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 19:12:08.508808  438245 machine.go:96] duration metric: took 956.629475ms to provisionDockerMachine
	I0819 19:12:08.508824  438245 start.go:293] postStartSetup for "default-k8s-diff-port-982795" (driver="kvm2")
	I0819 19:12:08.508838  438245 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 19:12:08.508868  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:12:08.509214  438245 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 19:12:08.509259  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:08.512004  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.512341  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:08.512378  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.512517  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:08.512688  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:08.512867  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:08.513059  438245 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa Username:docker}
	I0819 19:12:08.594287  438245 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 19:12:08.598742  438245 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 19:12:08.598774  438245 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/addons for local assets ...
	I0819 19:12:08.598849  438245 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/files for local assets ...
	I0819 19:12:08.598943  438245 filesync.go:149] local asset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> 3800092.pem in /etc/ssl/certs
	I0819 19:12:08.599029  438245 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 19:12:08.608416  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:12:08.633880  438245 start.go:296] duration metric: took 125.036785ms for postStartSetup
	I0819 19:12:08.633930  438245 fix.go:56] duration metric: took 19.449641939s for fixHost
	I0819 19:12:08.633955  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:08.636729  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.637006  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:08.637030  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.637248  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:08.637483  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:08.637672  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:08.637791  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:08.637954  438245 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:08.638170  438245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0819 19:12:08.638186  438245 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 19:12:08.736519  438245 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724094728.710064462
	
	I0819 19:12:08.736540  438245 fix.go:216] guest clock: 1724094728.710064462
	I0819 19:12:08.736548  438245 fix.go:229] Guest: 2024-08-19 19:12:08.710064462 +0000 UTC Remote: 2024-08-19 19:12:08.633934039 +0000 UTC m=+281.422189217 (delta=76.130423ms)
	I0819 19:12:08.736568  438245 fix.go:200] guest clock delta is within tolerance: 76.130423ms
	I0819 19:12:08.736580  438245 start.go:83] releasing machines lock for "default-k8s-diff-port-982795", held for 19.552337255s
	I0819 19:12:08.736604  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:12:08.736918  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetIP
	I0819 19:12:08.739570  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.740030  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:08.740057  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.740222  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:12:08.740762  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:12:08.740960  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:12:08.741037  438245 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 19:12:08.741100  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:08.741185  438245 ssh_runner.go:195] Run: cat /version.json
	I0819 19:12:08.741206  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:08.743899  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.744037  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.744282  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:08.744304  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.744439  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:08.744576  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:08.744599  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:08.744607  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.744689  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:08.744786  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:08.744858  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:08.744923  438245 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa Username:docker}
	I0819 19:12:08.744997  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:08.745143  438245 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa Username:docker}
	I0819 19:12:08.820672  438245 ssh_runner.go:195] Run: systemctl --version
	I0819 19:12:08.847046  438245 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 19:12:08.989725  438245 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 19:12:08.996607  438245 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 19:12:08.996680  438245 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 19:12:09.013017  438245 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 19:12:09.013067  438245 start.go:495] detecting cgroup driver to use...
	I0819 19:12:09.013144  438245 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 19:12:09.030338  438245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 19:12:09.044580  438245 docker.go:217] disabling cri-docker service (if available) ...
	I0819 19:12:09.044635  438245 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 19:12:09.058825  438245 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 19:12:09.073358  438245 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 19:12:09.194611  438245 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 19:12:09.333368  438245 docker.go:233] disabling docker service ...
	I0819 19:12:09.333446  438245 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 19:12:09.348775  438245 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 19:12:09.362911  438245 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 19:12:09.503015  438245 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 19:12:09.621246  438245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 19:12:09.638480  438245 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 19:12:09.659346  438245 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 19:12:09.659406  438245 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:09.672088  438245 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 19:12:09.672166  438245 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:09.683704  438245 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:09.694847  438245 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:09.706339  438245 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 19:12:09.718658  438245 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:09.730645  438245 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:09.750843  438245 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:09.762551  438245 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 19:12:09.772960  438245 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 19:12:09.773037  438245 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 19:12:09.788362  438245 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 19:12:09.798695  438245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:12:09.923389  438245 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 19:12:10.063317  438245 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 19:12:10.063413  438245 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 19:12:10.068449  438245 start.go:563] Will wait 60s for crictl version
	I0819 19:12:10.068540  438245 ssh_runner.go:195] Run: which crictl
	I0819 19:12:10.072807  438245 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 19:12:10.114058  438245 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 19:12:10.114151  438245 ssh_runner.go:195] Run: crio --version
	I0819 19:12:10.147919  438245 ssh_runner.go:195] Run: crio --version
	I0819 19:12:10.180009  438245 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 19:12:10.181218  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetIP
	I0819 19:12:10.184626  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:10.185015  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:10.185049  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:10.185243  438245 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0819 19:12:10.189653  438245 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:12:10.203439  438245 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-982795 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-982795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.48 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 19:12:10.203608  438245 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 19:12:10.203668  438245 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:12:10.241427  438245 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 19:12:10.241511  438245 ssh_runner.go:195] Run: which lz4
	I0819 19:12:10.245734  438245 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 19:12:10.250082  438245 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 19:12:10.250112  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 19:12:11.694285  438245 crio.go:462] duration metric: took 1.448590086s to copy over tarball
	I0819 19:12:11.694371  438245 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 19:12:10.028225  438295 main.go:141] libmachine: (embed-certs-024748) Waiting to get IP...
	I0819 19:12:10.029208  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:10.029696  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:10.029752  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:10.029666  439540 retry.go:31] will retry after 276.66184ms: waiting for machine to come up
	I0819 19:12:10.308339  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:10.308762  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:10.308804  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:10.308710  439540 retry.go:31] will retry after 279.376198ms: waiting for machine to come up
	I0819 19:12:10.590326  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:10.591084  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:10.591117  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:10.590861  439540 retry.go:31] will retry after 364.735563ms: waiting for machine to come up
	I0819 19:12:10.957592  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:10.958075  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:10.958100  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:10.958033  439540 retry.go:31] will retry after 384.275284ms: waiting for machine to come up
	I0819 19:12:11.343631  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:11.344169  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:11.344192  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:11.344125  439540 retry.go:31] will retry after 572.182522ms: waiting for machine to come up
	I0819 19:12:11.917660  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:11.918150  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:11.918179  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:11.918093  439540 retry.go:31] will retry after 767.807058ms: waiting for machine to come up
	I0819 19:12:12.687256  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:12.687782  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:12.687815  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:12.687728  439540 retry.go:31] will retry after 715.897037ms: waiting for machine to come up
	I0819 19:12:13.406041  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:13.406653  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:13.406690  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:13.406577  439540 retry.go:31] will retry after 1.301579737s: waiting for machine to come up
	I0819 19:12:13.847779  438245 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.153373496s)
	I0819 19:12:13.847810  438245 crio.go:469] duration metric: took 2.153488101s to extract the tarball
	I0819 19:12:13.847817  438245 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 19:12:13.885520  438245 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:12:13.929775  438245 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 19:12:13.929809  438245 cache_images.go:84] Images are preloaded, skipping loading
	I0819 19:12:13.929838  438245 kubeadm.go:934] updating node { 192.168.61.48 8444 v1.31.0 crio true true} ...
	I0819 19:12:13.930019  438245 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-982795 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.48
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-982795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 19:12:13.930113  438245 ssh_runner.go:195] Run: crio config
	I0819 19:12:13.977098  438245 cni.go:84] Creating CNI manager for ""
	I0819 19:12:13.977123  438245 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:12:13.977136  438245 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 19:12:13.977176  438245 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.48 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-982795 NodeName:default-k8s-diff-port-982795 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 19:12:13.977382  438245 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.48
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-982795"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 19:12:13.977461  438245 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 19:12:13.987276  438245 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 19:12:13.987381  438245 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 19:12:13.996666  438245 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0819 19:12:14.013822  438245 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 19:12:14.030936  438245 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0819 19:12:14.048575  438245 ssh_runner.go:195] Run: grep 192.168.61.48	control-plane.minikube.internal$ /etc/hosts
	I0819 19:12:14.052809  438245 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.48	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:12:14.065177  438245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:12:14.185159  438245 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:12:14.202906  438245 certs.go:68] Setting up /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795 for IP: 192.168.61.48
	I0819 19:12:14.202934  438245 certs.go:194] generating shared ca certs ...
	I0819 19:12:14.202966  438245 certs.go:226] acquiring lock for ca certs: {Name:mk639e03f593e0bccac045f6e9f5ba3b96cc81e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:14.203184  438245 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key
	I0819 19:12:14.203266  438245 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key
	I0819 19:12:14.203282  438245 certs.go:256] generating profile certs ...
	I0819 19:12:14.203399  438245 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795/client.key
	I0819 19:12:14.203487  438245 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795/apiserver.key.a3c7a519
	I0819 19:12:14.203552  438245 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795/proxy-client.key
	I0819 19:12:14.203757  438245 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem (1338 bytes)
	W0819 19:12:14.203820  438245 certs.go:480] ignoring /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009_empty.pem, impossibly tiny 0 bytes
	I0819 19:12:14.203834  438245 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 19:12:14.203866  438245 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem (1082 bytes)
	I0819 19:12:14.203899  438245 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem (1123 bytes)
	I0819 19:12:14.203929  438245 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem (1675 bytes)
	I0819 19:12:14.203994  438245 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:12:14.205025  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 19:12:14.258243  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 19:12:14.295380  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 19:12:14.330511  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 19:12:14.358547  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0819 19:12:14.386938  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 19:12:14.415021  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 19:12:14.439531  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 19:12:14.463969  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 19:12:14.487638  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem --> /usr/share/ca-certificates/380009.pem (1338 bytes)
	I0819 19:12:14.511571  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /usr/share/ca-certificates/3800092.pem (1708 bytes)
	I0819 19:12:14.535223  438245 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 19:12:14.552922  438245 ssh_runner.go:195] Run: openssl version
	I0819 19:12:14.559078  438245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 19:12:14.570605  438245 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:14.575411  438245 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:14.575484  438245 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:14.581714  438245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 19:12:14.592896  438245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/380009.pem && ln -fs /usr/share/ca-certificates/380009.pem /etc/ssl/certs/380009.pem"
	I0819 19:12:14.604306  438245 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/380009.pem
	I0819 19:12:14.609139  438245 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:56 /usr/share/ca-certificates/380009.pem
	I0819 19:12:14.609212  438245 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/380009.pem
	I0819 19:12:14.615160  438245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/380009.pem /etc/ssl/certs/51391683.0"
	I0819 19:12:14.626010  438245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3800092.pem && ln -fs /usr/share/ca-certificates/3800092.pem /etc/ssl/certs/3800092.pem"
	I0819 19:12:14.636821  438245 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3800092.pem
	I0819 19:12:14.641308  438245 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:56 /usr/share/ca-certificates/3800092.pem
	I0819 19:12:14.641358  438245 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3800092.pem
	I0819 19:12:14.646898  438245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3800092.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 19:12:14.657905  438245 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 19:12:14.662780  438245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 19:12:14.668934  438245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 19:12:14.674693  438245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 19:12:14.680683  438245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 19:12:14.686689  438245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 19:12:14.692678  438245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
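The checks above run "openssl x509 -noout -checkend 86400" against each control-plane certificate to confirm that none of them expires within the next 24 hours (openssl exits non-zero if one does). As an illustration only, a minimal Go sketch of the same check follows; the helper name and the hard-coded certificate list are taken from the log and are not minikube's actual code.

package main

import (
	"fmt"
	"os/exec"
)

// certNotExpiringWithin runs `openssl x509 -noout -checkend <seconds>` on the
// given certificate; openssl exits 0 when the cert stays valid for at least
// that many more seconds.
func certNotExpiringWithin(path string, seconds int) bool {
	cmd := exec.Command("openssl", "x509", "-noout",
		"-in", path, "-checkend", fmt.Sprint(seconds))
	return cmd.Run() == nil
}

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/etcd/healthcheck-client.crt",
		"/var/lib/minikube/certs/etcd/peer.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, c := range certs {
		fmt.Printf("%s valid for 24h: %v\n", c, certNotExpiringWithin(c, 86400))
	}
}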
	I0819 19:12:14.698784  438245 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-982795 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-982795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.48 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:12:14.698930  438245 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 19:12:14.699006  438245 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:12:14.740881  438245 cri.go:89] found id: ""
	I0819 19:12:14.740964  438245 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 19:12:14.751589  438245 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 19:12:14.751613  438245 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 19:12:14.751665  438245 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 19:12:14.761837  438245 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:12:14.762870  438245 kubeconfig.go:125] found "default-k8s-diff-port-982795" server: "https://192.168.61.48:8444"
	I0819 19:12:14.765176  438245 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 19:12:14.775114  438245 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.48
	I0819 19:12:14.775147  438245 kubeadm.go:1160] stopping kube-system containers ...
	I0819 19:12:14.775161  438245 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 19:12:14.775228  438245 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:12:14.811373  438245 cri.go:89] found id: ""
	I0819 19:12:14.811442  438245 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 19:12:14.829656  438245 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:12:14.840215  438245 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:12:14.840236  438245 kubeadm.go:157] found existing configuration files:
	
	I0819 19:12:14.840288  438245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0819 19:12:14.850017  438245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:12:14.850075  438245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:12:14.860060  438245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0819 19:12:14.869589  438245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:12:14.869645  438245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:12:14.879249  438245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0819 19:12:14.888475  438245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:12:14.888532  438245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:12:14.898151  438245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0819 19:12:14.907628  438245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:12:14.907737  438245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 19:12:14.917581  438245 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 19:12:14.927119  438245 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:15.037162  438245 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:16.355430  438245 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.318225023s)
	I0819 19:12:16.355461  438245 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:16.566565  438245 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:16.649402  438245 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
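Because existing configuration files were found, the control plane is rebuilt phase by phase rather than via a full "kubeadm init". A rough Go sketch of driving that same phase sequence (certs, kubeconfig, kubelet-start, control-plane, etcd local) is shown below; it simply shells out to kubeadm and is illustrative, not the ssh_runner implementation used in the log.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Phases in the order the log runs them.
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, p...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		out, err := exec.Command("kubeadm", args...).CombinedOutput()
		fmt.Printf("kubeadm %v: err=%v\n%s\n", args, err, out)
	}
}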
	I0819 19:12:16.775956  438245 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:12:16.776067  438245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:14.709988  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:14.710397  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:14.710429  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:14.710338  439540 retry.go:31] will retry after 1.420823505s: waiting for machine to come up
	I0819 19:12:16.133160  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:16.133558  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:16.133587  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:16.133531  439540 retry.go:31] will retry after 1.71697779s: waiting for machine to come up
	I0819 19:12:17.852342  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:17.852884  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:17.852922  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:17.852836  439540 retry.go:31] will retry after 2.816782354s: waiting for machine to come up
	I0819 19:12:17.277067  438245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:17.777027  438245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:17.797513  438245 api_server.go:72] duration metric: took 1.021572879s to wait for apiserver process to appear ...
	I0819 19:12:17.797554  438245 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:12:17.797596  438245 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8444/healthz ...
	I0819 19:12:17.798191  438245 api_server.go:269] stopped: https://192.168.61.48:8444/healthz: Get "https://192.168.61.48:8444/healthz": dial tcp 192.168.61.48:8444: connect: connection refused
	I0819 19:12:18.297907  438245 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8444/healthz ...
	I0819 19:12:20.177305  438245 api_server.go:279] https://192.168.61.48:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 19:12:20.177345  438245 api_server.go:103] status: https://192.168.61.48:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 19:12:20.177367  438245 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8444/healthz ...
	I0819 19:12:20.244091  438245 api_server.go:279] https://192.168.61.48:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 19:12:20.244140  438245 api_server.go:103] status: https://192.168.61.48:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 19:12:20.298403  438245 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8444/healthz ...
	I0819 19:12:20.304289  438245 api_server.go:279] https://192.168.61.48:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:12:20.304325  438245 api_server.go:103] status: https://192.168.61.48:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:12:20.797876  438245 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8444/healthz ...
	I0819 19:12:20.803894  438245 api_server.go:279] https://192.168.61.48:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:12:20.803935  438245 api_server.go:103] status: https://192.168.61.48:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:12:21.298284  438245 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8444/healthz ...
	I0819 19:12:21.320292  438245 api_server.go:279] https://192.168.61.48:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:12:21.320320  438245 api_server.go:103] status: https://192.168.61.48:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:12:21.797829  438245 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8444/healthz ...
	I0819 19:12:21.802183  438245 api_server.go:279] https://192.168.61.48:8444/healthz returned 200:
	ok
	I0819 19:12:21.809866  438245 api_server.go:141] control plane version: v1.31.0
	I0819 19:12:21.809902  438245 api_server.go:131] duration metric: took 4.012339897s to wait for apiserver health ...
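The healthz sequence above is the usual restart progression: connection refused while the apiserver binds, 403 for the unauthenticated probe before RBAC bootstrap completes, 500 while individual poststarthooks finish, then 200. A minimal polling sketch with the same roughly 500ms cadence follows; it is illustrative only, and it skips TLS verification because the probe is anonymous.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// HTTP 200 or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.48:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}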
	I0819 19:12:21.809914  438245 cni.go:84] Creating CNI manager for ""
	I0819 19:12:21.809944  438245 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:12:21.811668  438245 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 19:12:21.813183  438245 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 19:12:21.826170  438245 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 19:12:21.850473  438245 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:12:21.865379  438245 system_pods.go:59] 8 kube-system pods found
	I0819 19:12:21.865422  438245 system_pods.go:61] "coredns-6f6b679f8f-dwbnt" [9b8d7ee3-15ca-475b-b659-d5c3b10890fe] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 19:12:21.865442  438245 system_pods.go:61] "etcd-default-k8s-diff-port-982795" [6686e6f6-485d-4c57-89a1-af4f27b6216e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 19:12:21.865455  438245 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-982795" [fcfb5a0d-6d6c-4c30-a17f-43106f3dd5ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 19:12:21.865475  438245 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-982795" [346bf3b5-57e7-4f30-a6ed-959dc9e8941d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 19:12:21.865485  438245 system_pods.go:61] "kube-proxy-wrczx" [acabdc8e-5397-4531-afcb-57a8f4c48618] Running
	I0819 19:12:21.865493  438245 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-982795" [82de0c57-e712-4c0c-b751-a17cb0dd75b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 19:12:21.865503  438245 system_pods.go:61] "metrics-server-6867b74b74-5hlnx" [394c87af-a198-4fea-8a30-32a8c3e80884] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:12:21.865522  438245 system_pods.go:61] "storage-provisioner" [35f70989-846d-4ec5-b879-a22625ee94ce] Running
	I0819 19:12:21.865534  438245 system_pods.go:74] duration metric: took 15.035147ms to wait for pod list to return data ...
	I0819 19:12:21.865545  438245 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:12:21.870314  438245 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:12:21.870350  438245 node_conditions.go:123] node cpu capacity is 2
	I0819 19:12:21.870366  438245 node_conditions.go:105] duration metric: took 4.813819ms to run NodePressure ...
	I0819 19:12:21.870390  438245 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:22.130916  438245 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 19:12:22.134889  438245 kubeadm.go:739] kubelet initialised
	I0819 19:12:22.134912  438245 kubeadm.go:740] duration metric: took 3.970465ms waiting for restarted kubelet to initialise ...
	I0819 19:12:22.134920  438245 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:12:22.139345  438245 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-dwbnt" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:20.672189  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:20.672655  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:20.672682  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:20.672613  439540 retry.go:31] will retry after 2.76896974s: waiting for machine to come up
	I0819 19:12:23.442804  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:23.443223  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:23.443268  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:23.443170  439540 retry.go:31] will retry after 4.199459292s: waiting for machine to come up
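Interleaved with the restart, the embed-certs-024748 machine is still waiting for its DHCP lease; each probe that finds no IP schedules another attempt with a growing delay (1.4s, 1.7s, 2.8s, 4.2s above). A rough sketch of that retry-with-backoff pattern follows; the probe function here is a stand-in, not libmachine's address lookup.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling probe until it succeeds or attempts run out,
// roughly doubling the wait (plus a little jitter) between tries.
func retryWithBackoff(probe func() error, attempts int, initial time.Duration) error {
	wait := initial
	for i := 0; i < attempts; i++ {
		if err := probe(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(wait) / 4))
		fmt.Printf("will retry after %s\n", wait+jitter)
		time.Sleep(wait + jitter)
		wait *= 2
	}
	return errors.New("gave up waiting for machine to come up")
}

func main() {
	start := time.Now()
	err := retryWithBackoff(func() error {
		// Stand-in probe: pretend the IP shows up after ~8 seconds.
		if time.Since(start) < 8*time.Second {
			return errors.New("no IP yet")
		}
		return nil
	}, 10, time.Second)
	fmt.Println("result:", err)
}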
	I0819 19:12:24.145329  438245 pod_ready.go:103] pod "coredns-6f6b679f8f-dwbnt" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:26.645695  438245 pod_ready.go:103] pod "coredns-6f6b679f8f-dwbnt" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:27.644842  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.645376  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has current primary IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.645403  438295 main.go:141] libmachine: (embed-certs-024748) Found IP for machine: 192.168.72.96
	I0819 19:12:27.645417  438295 main.go:141] libmachine: (embed-certs-024748) Reserving static IP address...
	I0819 19:12:27.645874  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "embed-certs-024748", mac: "52:54:00:f0:8b:43", ip: "192.168.72.96"} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:27.645902  438295 main.go:141] libmachine: (embed-certs-024748) Reserved static IP address: 192.168.72.96
	I0819 19:12:27.645919  438295 main.go:141] libmachine: (embed-certs-024748) DBG | skip adding static IP to network mk-embed-certs-024748 - found existing host DHCP lease matching {name: "embed-certs-024748", mac: "52:54:00:f0:8b:43", ip: "192.168.72.96"}
	I0819 19:12:27.645952  438295 main.go:141] libmachine: (embed-certs-024748) Waiting for SSH to be available...
	I0819 19:12:27.645974  438295 main.go:141] libmachine: (embed-certs-024748) DBG | Getting to WaitForSSH function...
	I0819 19:12:27.648195  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.648471  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:27.648496  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.648717  438295 main.go:141] libmachine: (embed-certs-024748) DBG | Using SSH client type: external
	I0819 19:12:27.648744  438295 main.go:141] libmachine: (embed-certs-024748) DBG | Using SSH private key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa (-rw-------)
	I0819 19:12:27.648773  438295 main.go:141] libmachine: (embed-certs-024748) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.96 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 19:12:27.648792  438295 main.go:141] libmachine: (embed-certs-024748) DBG | About to run SSH command:
	I0819 19:12:27.648808  438295 main.go:141] libmachine: (embed-certs-024748) DBG | exit 0
	I0819 19:12:27.775964  438295 main.go:141] libmachine: (embed-certs-024748) DBG | SSH cmd err, output: <nil>: 
	I0819 19:12:27.776344  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetConfigRaw
	I0819 19:12:27.777100  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetIP
	I0819 19:12:27.780096  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.780535  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:27.780570  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.780936  438295 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748/config.json ...
	I0819 19:12:27.781721  438295 machine.go:93] provisionDockerMachine start ...
	I0819 19:12:27.781748  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:27.781974  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:27.784482  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.784838  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:27.784868  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.785066  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:27.785254  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:27.785452  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:27.785617  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:27.785789  438295 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:27.786038  438295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0819 19:12:27.786059  438295 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 19:12:27.904337  438295 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 19:12:27.904375  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetMachineName
	I0819 19:12:27.904675  438295 buildroot.go:166] provisioning hostname "embed-certs-024748"
	I0819 19:12:27.904711  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetMachineName
	I0819 19:12:27.904932  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:27.907960  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.908325  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:27.908354  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.908446  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:27.908659  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:27.908825  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:27.909012  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:27.909234  438295 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:27.909441  438295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0819 19:12:27.909458  438295 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-024748 && echo "embed-certs-024748" | sudo tee /etc/hostname
	I0819 19:12:28.036564  438295 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-024748
	
	I0819 19:12:28.036597  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:28.039385  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.039798  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:28.039827  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.040071  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:28.040327  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:28.040493  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:28.040652  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:28.040882  438295 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:28.041113  438295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0819 19:12:28.041138  438295 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-024748' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-024748/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-024748' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 19:12:28.162311  438295 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:12:28.162348  438295 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19468-372744/.minikube CaCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19468-372744/.minikube}
	I0819 19:12:28.162368  438295 buildroot.go:174] setting up certificates
	I0819 19:12:28.162376  438295 provision.go:84] configureAuth start
	I0819 19:12:28.162385  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetMachineName
	I0819 19:12:28.162703  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetIP
	I0819 19:12:28.165171  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.165563  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:28.165593  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.165727  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:28.167917  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.168199  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:28.168221  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.168411  438295 provision.go:143] copyHostCerts
	I0819 19:12:28.168469  438295 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem, removing ...
	I0819 19:12:28.168491  438295 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 19:12:28.168560  438295 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem (1082 bytes)
	I0819 19:12:28.168693  438295 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem, removing ...
	I0819 19:12:28.168704  438295 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 19:12:28.168736  438295 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem (1123 bytes)
	I0819 19:12:28.168814  438295 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem, removing ...
	I0819 19:12:28.168824  438295 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 19:12:28.168853  438295 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem (1675 bytes)
	I0819 19:12:28.168942  438295 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem org=jenkins.embed-certs-024748 san=[127.0.0.1 192.168.72.96 embed-certs-024748 localhost minikube]
	I0819 19:12:28.447064  438295 provision.go:177] copyRemoteCerts
	I0819 19:12:28.447129  438295 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 19:12:28.447158  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:28.449851  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.450138  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:28.450163  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.450344  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:28.450541  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:28.450713  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:28.450832  438295 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa Username:docker}
	I0819 19:12:28.537815  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 19:12:28.562408  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0819 19:12:28.586728  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 19:12:28.611119  438295 provision.go:87] duration metric: took 448.726133ms to configureAuth
	I0819 19:12:28.611158  438295 buildroot.go:189] setting minikube options for container-runtime
	I0819 19:12:28.611351  438295 config.go:182] Loaded profile config "embed-certs-024748": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:12:28.611428  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:28.614168  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.614543  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:28.614571  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.614736  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:28.614941  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:28.615083  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:28.615192  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:28.615302  438295 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:28.615454  438295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0819 19:12:28.615469  438295 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 19:12:28.890054  438295 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 19:12:28.890086  438295 machine.go:96] duration metric: took 1.10834874s to provisionDockerMachine
	I0819 19:12:28.890100  438295 start.go:293] postStartSetup for "embed-certs-024748" (driver="kvm2")
	I0819 19:12:28.890120  438295 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 19:12:28.890146  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:28.890469  438295 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 19:12:28.890499  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:28.893251  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.893579  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:28.893605  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.893733  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:28.893895  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:28.894102  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:28.894220  438295 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa Username:docker}
	I0819 19:12:28.979381  438295 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 19:12:28.983921  438295 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 19:12:28.983952  438295 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/addons for local assets ...
	I0819 19:12:28.984048  438295 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/files for local assets ...
	I0819 19:12:28.984156  438295 filesync.go:149] local asset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> 3800092.pem in /etc/ssl/certs
	I0819 19:12:28.984250  438295 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 19:12:28.994964  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:12:29.018801  438295 start.go:296] duration metric: took 128.685446ms for postStartSetup
	I0819 19:12:29.018843  438295 fix.go:56] duration metric: took 20.282076509s for fixHost
	I0819 19:12:29.018870  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:29.021554  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:29.021848  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:29.021875  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:29.022066  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:29.022261  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:29.022428  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:29.022526  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:29.022678  438295 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:29.022900  438295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0819 19:12:29.022915  438295 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 19:12:29.132976  438716 start.go:364] duration metric: took 3m58.489348567s to acquireMachinesLock for "old-k8s-version-104669"
	I0819 19:12:29.133047  438716 start.go:96] Skipping create...Using existing machine configuration
	I0819 19:12:29.133055  438716 fix.go:54] fixHost starting: 
	I0819 19:12:29.133485  438716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:29.133524  438716 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:29.151330  438716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39213
	I0819 19:12:29.151778  438716 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:29.152271  438716 main.go:141] libmachine: Using API Version  1
	I0819 19:12:29.152301  438716 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:29.152682  438716 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:29.152883  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:12:29.153065  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetState
	I0819 19:12:29.154399  438716 fix.go:112] recreateIfNeeded on old-k8s-version-104669: state=Stopped err=<nil>
	I0819 19:12:29.154444  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	W0819 19:12:29.154684  438716 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 19:12:29.156349  438716 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-104669" ...
	I0819 19:12:29.157631  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .Start
	I0819 19:12:29.157825  438716 main.go:141] libmachine: (old-k8s-version-104669) Ensuring networks are active...
	I0819 19:12:29.158635  438716 main.go:141] libmachine: (old-k8s-version-104669) Ensuring network default is active
	I0819 19:12:29.159041  438716 main.go:141] libmachine: (old-k8s-version-104669) Ensuring network mk-old-k8s-version-104669 is active
	I0819 19:12:29.159509  438716 main.go:141] libmachine: (old-k8s-version-104669) Getting domain xml...
	I0819 19:12:29.160383  438716 main.go:141] libmachine: (old-k8s-version-104669) Creating domain...
	I0819 19:12:30.452488  438716 main.go:141] libmachine: (old-k8s-version-104669) Waiting to get IP...
	I0819 19:12:30.453743  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:30.454237  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:30.454323  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:30.454193  439728 retry.go:31] will retry after 197.440033ms: waiting for machine to come up
	I0819 19:12:29.132812  438295 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724094749.105537362
	
	I0819 19:12:29.132839  438295 fix.go:216] guest clock: 1724094749.105537362
	I0819 19:12:29.132850  438295 fix.go:229] Guest: 2024-08-19 19:12:29.105537362 +0000 UTC Remote: 2024-08-19 19:12:29.018848957 +0000 UTC m=+300.015027560 (delta=86.688405ms)
	I0819 19:12:29.132877  438295 fix.go:200] guest clock delta is within tolerance: 86.688405ms
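For reference, a minimal Go sketch of the guest-clock check recorded in the fix.go lines above: it parses the guest's `date +%s.%N` output, compares it to the host-side timestamp, and reports the delta. The 2-second tolerance and all function names below are illustrative assumptions, not minikube's actual code; the input values are taken from the log lines above.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns the absolute
// difference from the supplied host timestamp. Sketch only; float parsing
// loses sub-microsecond precision, which is fine for a tolerance of seconds.
func clockDelta(guestOutput string, hostNow time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
	if err != nil {
		return 0, fmt.Errorf("parsing guest clock %q: %w", guestOutput, err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := guest.Sub(hostNow)
	if delta < 0 {
		delta = -delta
	}
	return delta, nil
}

func main() {
	// Guest and host timestamps copied from the fix.go log lines above.
	host, _ := time.Parse(time.RFC3339Nano, "2024-08-19T19:12:29.018848957Z")
	delta, err := clockDelta("1724094749.105537362", host)
	if err != nil {
		panic(err)
	}
	// 2s tolerance is an assumed value for this sketch.
	fmt.Printf("guest clock delta: %v, within tolerance: %v\n", delta, delta <= 2*time.Second)
}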
	I0819 19:12:29.132884  438295 start.go:83] releasing machines lock for "embed-certs-024748", held for 20.396159242s
	I0819 19:12:29.132912  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:29.133179  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetIP
	I0819 19:12:29.136110  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:29.136532  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:29.136565  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:29.136750  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:29.137307  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:29.137532  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:29.137616  438295 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 19:12:29.137690  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:29.137758  438295 ssh_runner.go:195] Run: cat /version.json
	I0819 19:12:29.137781  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:29.140500  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:29.140820  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:29.140870  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:29.140903  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:29.141067  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:29.141266  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:29.141385  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:29.141430  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:29.141443  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:29.141586  438295 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa Username:docker}
	I0819 19:12:29.141639  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:29.141790  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:29.141957  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:29.142123  438295 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa Username:docker}
	I0819 19:12:29.242886  438295 ssh_runner.go:195] Run: systemctl --version
	I0819 19:12:29.249276  438295 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 19:12:29.393872  438295 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 19:12:29.401874  438295 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 19:12:29.401954  438295 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 19:12:29.421973  438295 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 19:12:29.422004  438295 start.go:495] detecting cgroup driver to use...
	I0819 19:12:29.422081  438295 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 19:12:29.442823  438295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 19:12:29.462663  438295 docker.go:217] disabling cri-docker service (if available) ...
	I0819 19:12:29.462720  438295 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 19:12:29.477896  438295 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 19:12:29.492591  438295 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 19:12:29.613759  438295 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 19:12:29.770719  438295 docker.go:233] disabling docker service ...
	I0819 19:12:29.770805  438295 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 19:12:29.785787  438295 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 19:12:29.802879  438295 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 19:12:29.947633  438295 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 19:12:30.082602  438295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 19:12:30.097628  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 19:12:30.118671  438295 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 19:12:30.118735  438295 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:30.131287  438295 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 19:12:30.131354  438295 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:30.143008  438295 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:30.156358  438295 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:30.172123  438295 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 19:12:30.188196  438295 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:30.201487  438295 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:30.219887  438295 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
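The sed invocations above rewrite the pause image and cgroup manager in the cri-o drop-in. A minimal Go sketch of the same line rewrites is shown below; the file path and values come from the log, but the function itself is illustrative, not minikube's implementation.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// configureCrioDropIn replaces the pause_image and cgroup_manager lines in a
// cri-o drop-in config, mirroring the sed commands logged above.
func configureCrioDropIn(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
	return os.WriteFile(path, out, 0644)
}

func main() {
	// Same target file and values as the log; run against a copy when testing.
	err := configureCrioDropIn("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.10", "cgroupfs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}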
	I0819 19:12:30.235685  438295 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 19:12:30.246112  438295 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 19:12:30.246202  438295 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 19:12:30.259732  438295 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 19:12:30.269866  438295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:12:30.397522  438295 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 19:12:30.545249  438295 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 19:12:30.545349  438295 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 19:12:30.550473  438295 start.go:563] Will wait 60s for crictl version
	I0819 19:12:30.550528  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:12:30.554782  438295 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 19:12:30.597634  438295 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 19:12:30.597736  438295 ssh_runner.go:195] Run: crio --version
	I0819 19:12:30.628137  438295 ssh_runner.go:195] Run: crio --version
	I0819 19:12:30.660912  438295 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 19:12:29.146475  438245 pod_ready.go:103] pod "coredns-6f6b679f8f-dwbnt" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:31.147618  438245 pod_ready.go:93] pod "coredns-6f6b679f8f-dwbnt" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:31.147651  438245 pod_ready.go:82] duration metric: took 9.00827926s for pod "coredns-6f6b679f8f-dwbnt" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.147665  438245 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.153305  438245 pod_ready.go:93] pod "etcd-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:31.153331  438245 pod_ready.go:82] duration metric: took 5.657625ms for pod "etcd-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.153347  438245 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.159009  438245 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:31.159037  438245 pod_ready.go:82] duration metric: took 5.680194ms for pod "kube-apiserver-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.159050  438245 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.165478  438245 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:31.165504  438245 pod_ready.go:82] duration metric: took 6.444529ms for pod "kube-controller-manager-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.165517  438245 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-wrczx" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.180293  438245 pod_ready.go:93] pod "kube-proxy-wrczx" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:31.180324  438245 pod_ready.go:82] duration metric: took 14.798883ms for pod "kube-proxy-wrczx" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.180337  438245 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:30.662168  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetIP
	I0819 19:12:30.665057  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:30.665455  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:30.665486  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:30.665660  438295 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0819 19:12:30.669911  438295 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
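The bash one-liner above drops any existing host.minikube.internal mapping and appends the current one. A small Go sketch of that idempotent hosts-file update follows; it writes to a scratch path rather than the real /etc/hosts, and the helper name is an assumption for illustration.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostEntry rewrites a hosts-style file so exactly one line maps
// hostname to ip, like the grep -v / echo / cp pipeline logged above.
func ensureHostEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var out []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop any existing mapping for this hostname.
		if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+hostname) {
			continue
		}
		if line != "" {
			out = append(out, line)
		}
	}
	out = append(out, fmt.Sprintf("%s\t%s", ip, hostname))
	return os.WriteFile(path, []byte(strings.Join(out, "\n")+"\n"), 0644)
}

func main() {
	// Values from the log; a scratch file stands in for /etc/hosts here.
	if err := ensureHostEntry("/tmp/hosts.example", "192.168.72.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}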
	I0819 19:12:30.682755  438295 kubeadm.go:883] updating cluster {Name:embed-certs-024748 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:embed-certs-024748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.96 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 19:12:30.682883  438295 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 19:12:30.682936  438295 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:12:30.724160  438295 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 19:12:30.724233  438295 ssh_runner.go:195] Run: which lz4
	I0819 19:12:30.728710  438295 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 19:12:30.733279  438295 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 19:12:30.733317  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 19:12:32.178568  438295 crio.go:462] duration metric: took 1.449881121s to copy over tarball
	I0819 19:12:32.178642  438295 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 19:12:30.653917  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:30.654521  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:30.654566  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:30.654436  439728 retry.go:31] will retry after 317.038756ms: waiting for machine to come up
	I0819 19:12:30.973003  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:30.973530  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:30.973560  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:30.973487  439728 retry.go:31] will retry after 486.945032ms: waiting for machine to come up
	I0819 19:12:31.461937  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:31.462438  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:31.462470  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:31.462389  439728 retry.go:31] will retry after 441.288745ms: waiting for machine to come up
	I0819 19:12:31.904947  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:31.905564  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:31.905617  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:31.905472  439728 retry.go:31] will retry after 752.583403ms: waiting for machine to come up
	I0819 19:12:32.659642  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:32.660175  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:32.660207  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:32.660128  439728 retry.go:31] will retry after 932.705928ms: waiting for machine to come up
	I0819 19:12:33.594983  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:33.595529  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:33.595556  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:33.595466  439728 retry.go:31] will retry after 936.558157ms: waiting for machine to come up
	I0819 19:12:34.533158  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:34.533717  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:34.533743  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:34.533656  439728 retry.go:31] will retry after 1.435945188s: waiting for machine to come up
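The retry.go lines above poll for the VM's DHCP lease with growing, jittered waits. A sketch of that wait loop in Go is below; the starting interval, growth factor and jitter are illustrative assumptions, not minikube's actual backoff values.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP calls lookup until it reports an address or the timeout expires,
// sleeping a jittered, growing interval between attempts.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	wait := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		// Up to 50% jitter so parallel waiters do not poll in lockstep.
		sleep := wait + time.Duration(rand.Int63n(int64(wait/2)))
		time.Sleep(sleep)
		wait = wait * 3 / 2
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no DHCP lease yet")
		}
		return "192.168.61.10", nil // example address standing in for the real lease
	}, 30*time.Second)
	fmt.Println(ip, err)
}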
	I0819 19:12:33.186835  438245 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:35.187500  438245 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:35.686905  438245 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:35.686932  438245 pod_ready.go:82] duration metric: took 4.50658625s for pod "kube-scheduler-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:35.686945  438245 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:34.321347  438295 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.14267077s)
	I0819 19:12:34.321379  438295 crio.go:469] duration metric: took 2.142777016s to extract the tarball
	I0819 19:12:34.321390  438295 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 19:12:34.357670  438295 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:12:34.403313  438295 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 19:12:34.403344  438295 cache_images.go:84] Images are preloaded, skipping loading
	I0819 19:12:34.403358  438295 kubeadm.go:934] updating node { 192.168.72.96 8443 v1.31.0 crio true true} ...
	I0819 19:12:34.403495  438295 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-024748 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.96
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-024748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 19:12:34.403576  438295 ssh_runner.go:195] Run: crio config
	I0819 19:12:34.450415  438295 cni.go:84] Creating CNI manager for ""
	I0819 19:12:34.450443  438295 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:12:34.450461  438295 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 19:12:34.450490  438295 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.96 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-024748 NodeName:embed-certs-024748 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.96"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.96 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 19:12:34.450646  438295 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.96
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-024748"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.96
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.96"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 19:12:34.450723  438295 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 19:12:34.461183  438295 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 19:12:34.461313  438295 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 19:12:34.470516  438295 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0819 19:12:34.488844  438295 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 19:12:34.505450  438295 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0819 19:12:34.522456  438295 ssh_runner.go:195] Run: grep 192.168.72.96	control-plane.minikube.internal$ /etc/hosts
	I0819 19:12:34.526272  438295 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.96	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:12:34.539079  438295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:12:34.665665  438295 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:12:34.683237  438295 certs.go:68] Setting up /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748 for IP: 192.168.72.96
	I0819 19:12:34.683265  438295 certs.go:194] generating shared ca certs ...
	I0819 19:12:34.683287  438295 certs.go:226] acquiring lock for ca certs: {Name:mk639e03f593e0bccac045f6e9f5ba3b96cc81e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:34.683471  438295 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key
	I0819 19:12:34.683536  438295 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key
	I0819 19:12:34.683550  438295 certs.go:256] generating profile certs ...
	I0819 19:12:34.683687  438295 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748/client.key
	I0819 19:12:34.683776  438295 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748/apiserver.key.89193d03
	I0819 19:12:34.683828  438295 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748/proxy-client.key
	I0819 19:12:34.683991  438295 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem (1338 bytes)
	W0819 19:12:34.684035  438295 certs.go:480] ignoring /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009_empty.pem, impossibly tiny 0 bytes
	I0819 19:12:34.684047  438295 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 19:12:34.684074  438295 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem (1082 bytes)
	I0819 19:12:34.684112  438295 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem (1123 bytes)
	I0819 19:12:34.684159  438295 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem (1675 bytes)
	I0819 19:12:34.684224  438295 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:12:34.685127  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 19:12:34.718591  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 19:12:34.758439  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 19:12:34.790143  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 19:12:34.828113  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0819 19:12:34.860389  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 19:12:34.898361  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 19:12:34.924677  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 19:12:34.951630  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /usr/share/ca-certificates/3800092.pem (1708 bytes)
	I0819 19:12:34.977435  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 19:12:35.002048  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem --> /usr/share/ca-certificates/380009.pem (1338 bytes)
	I0819 19:12:35.026934  438295 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 19:12:35.044476  438295 ssh_runner.go:195] Run: openssl version
	I0819 19:12:35.050174  438295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3800092.pem && ln -fs /usr/share/ca-certificates/3800092.pem /etc/ssl/certs/3800092.pem"
	I0819 19:12:35.061299  438295 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3800092.pem
	I0819 19:12:35.065978  438295 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:56 /usr/share/ca-certificates/3800092.pem
	I0819 19:12:35.066047  438295 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3800092.pem
	I0819 19:12:35.072572  438295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3800092.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 19:12:35.083760  438295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 19:12:35.094492  438295 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:35.099152  438295 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:35.099229  438295 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:35.105124  438295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 19:12:35.115950  438295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/380009.pem && ln -fs /usr/share/ca-certificates/380009.pem /etc/ssl/certs/380009.pem"
	I0819 19:12:35.126845  438295 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/380009.pem
	I0819 19:12:35.131568  438295 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:56 /usr/share/ca-certificates/380009.pem
	I0819 19:12:35.131650  438295 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/380009.pem
	I0819 19:12:35.137851  438295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/380009.pem /etc/ssl/certs/51391683.0"
	I0819 19:12:35.148818  438295 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 19:12:35.153800  438295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 19:12:35.159720  438295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 19:12:35.165740  438295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 19:12:35.171705  438295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 19:12:35.177574  438295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 19:12:35.183935  438295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
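Each of the `openssl x509 -noout -in <cert> -checkend 86400` runs above asks whether a control-plane certificate expires within the next 24 hours. A Go sketch of the same check using crypto/x509 follows; the path is the one from the log and the helper is illustrative, not minikube's code.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within the given window, matching what -checkend 86400 verifies.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}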
	I0819 19:12:35.192681  438295 kubeadm.go:392] StartCluster: {Name:embed-certs-024748 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:embed-certs-024748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.96 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:12:35.192845  438295 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 19:12:35.192908  438295 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:12:35.231688  438295 cri.go:89] found id: ""
	I0819 19:12:35.231791  438295 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 19:12:35.242835  438295 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 19:12:35.242859  438295 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 19:12:35.242944  438295 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 19:12:35.255695  438295 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:12:35.257036  438295 kubeconfig.go:125] found "embed-certs-024748" server: "https://192.168.72.96:8443"
	I0819 19:12:35.259422  438295 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 19:12:35.271730  438295 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.96
	I0819 19:12:35.271758  438295 kubeadm.go:1160] stopping kube-system containers ...
	I0819 19:12:35.271772  438295 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 19:12:35.271820  438295 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:12:35.321065  438295 cri.go:89] found id: ""
	I0819 19:12:35.321155  438295 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 19:12:35.337802  438295 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:12:35.347699  438295 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:12:35.347726  438295 kubeadm.go:157] found existing configuration files:
	
	I0819 19:12:35.347785  438295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 19:12:35.357108  438295 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:12:35.357178  438295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:12:35.366805  438295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 19:12:35.376864  438295 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:12:35.376938  438295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:12:35.387018  438295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 19:12:35.396966  438295 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:12:35.397045  438295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:12:35.406192  438295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 19:12:35.415325  438295 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:12:35.415401  438295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 19:12:35.424450  438295 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 19:12:35.433931  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:35.549294  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:36.306930  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:36.517086  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:36.587680  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:36.680728  438295 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:12:36.680825  438295 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:37.181054  438295 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:37.681059  438295 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:38.181588  438295 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:38.197155  438295 api_server.go:72] duration metric: took 1.516436456s to wait for apiserver process to appear ...
	I0819 19:12:38.197184  438295 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:12:38.197212  438295 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0819 19:12:35.971138  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:35.971576  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:35.971607  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:35.971514  439728 retry.go:31] will retry after 1.521077744s: waiting for machine to come up
	I0819 19:12:37.493931  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:37.494389  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:37.494415  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:37.494361  439728 retry.go:31] will retry after 1.632508579s: waiting for machine to come up
	I0819 19:12:39.128939  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:39.129429  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:39.129456  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:39.129392  439728 retry.go:31] will retry after 2.634061376s: waiting for machine to come up
	I0819 19:12:40.567608  438295 api_server.go:279] https://192.168.72.96:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 19:12:40.567654  438295 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 19:12:40.567669  438295 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0819 19:12:40.593405  438295 api_server.go:279] https://192.168.72.96:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 19:12:40.593456  438295 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 19:12:40.697607  438295 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0819 19:12:40.713767  438295 api_server.go:279] https://192.168.72.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:12:40.713806  438295 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
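The 403 and 500 responses above are the normal progression while the apiserver's poststarthooks finish; api_server.go keeps polling /healthz until it returns 200. A minimal Go sketch of such a poll loop is below; the skipped TLS verification, 500ms interval and function name are simplifications for this sketch, and the URL is the endpoint from the log.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it answers 200 OK
// or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// 403 (anonymous user) or 500 (poststarthooks still running) means
			// the apiserver is up but not ready yet; keep polling.
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.96:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}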
	I0819 19:12:41.197299  438295 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0819 19:12:41.203307  438295 api_server.go:279] https://192.168.72.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:12:41.203338  438295 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:12:41.697903  438295 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0819 19:12:41.705142  438295 api_server.go:279] https://192.168.72.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:12:41.705174  438295 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:12:42.197361  438295 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0819 19:12:42.202272  438295 api_server.go:279] https://192.168.72.96:8443/healthz returned 200:
	ok
	I0819 19:12:42.209788  438295 api_server.go:141] control plane version: v1.31.0
	I0819 19:12:42.209819  438295 api_server.go:131] duration metric: took 4.012627755s to wait for apiserver health ...
	I0819 19:12:42.209829  438295 cni.go:84] Creating CNI manager for ""
	I0819 19:12:42.209836  438295 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:12:42.211612  438295 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 19:12:37.693171  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:39.693397  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:41.693523  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:42.212889  438295 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 19:12:42.223277  438295 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 19:12:42.242392  438295 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:12:42.256273  438295 system_pods.go:59] 8 kube-system pods found
	I0819 19:12:42.256321  438295 system_pods.go:61] "coredns-6f6b679f8f-7ww4z" [bbde00d4-6027-4d8d-b51e-bd68915da166] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 19:12:42.256331  438295 system_pods.go:61] "etcd-embed-certs-024748" [846ff0f0-5399-43fd-8e7b-1f64997cd291] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 19:12:42.256348  438295 system_pods.go:61] "kube-apiserver-embed-certs-024748" [3ff558d6-e82e-47a0-bb81-15244bee6470] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 19:12:42.256366  438295 system_pods.go:61] "kube-controller-manager-embed-certs-024748" [993b82ba-e8e7-4896-a06b-87c4f08d5985] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 19:12:42.256383  438295 system_pods.go:61] "kube-proxy-bmmbh" [1f77f152-f5f4-40f6-9632-1eaa36b9ea31] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0819 19:12:42.256393  438295 system_pods.go:61] "kube-scheduler-embed-certs-024748" [34684d4c-2479-45c5-883b-158cf9f974f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 19:12:42.256403  438295 system_pods.go:61] "metrics-server-6867b74b74-kxcwh" [15f86629-d916-4fdc-9ecf-9cb1b6c83f85] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:12:42.256409  438295 system_pods.go:61] "storage-provisioner" [7acb6ce1-21b6-4cdd-a5cb-76d694fc0a38] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0819 19:12:42.256418  438295 system_pods.go:74] duration metric: took 14.004598ms to wait for pod list to return data ...
	I0819 19:12:42.256428  438295 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:12:42.263308  438295 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:12:42.263340  438295 node_conditions.go:123] node cpu capacity is 2
	I0819 19:12:42.263354  438295 node_conditions.go:105] duration metric: took 6.920993ms to run NodePressure ...
	I0819 19:12:42.263376  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:42.533917  438295 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 19:12:42.545853  438295 kubeadm.go:739] kubelet initialised
	I0819 19:12:42.545886  438295 kubeadm.go:740] duration metric: took 11.931664ms waiting for restarted kubelet to initialise ...
	I0819 19:12:42.545899  438295 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:12:42.553125  438295 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-7ww4z" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:42.559120  438295 pod_ready.go:98] node "embed-certs-024748" hosting pod "coredns-6f6b679f8f-7ww4z" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:42.559148  438295 pod_ready.go:82] duration metric: took 5.984169ms for pod "coredns-6f6b679f8f-7ww4z" in "kube-system" namespace to be "Ready" ...
	E0819 19:12:42.559158  438295 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-024748" hosting pod "coredns-6f6b679f8f-7ww4z" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:42.559164  438295 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:42.564830  438295 pod_ready.go:98] node "embed-certs-024748" hosting pod "etcd-embed-certs-024748" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:42.564852  438295 pod_ready.go:82] duration metric: took 5.681326ms for pod "etcd-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	E0819 19:12:42.564860  438295 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-024748" hosting pod "etcd-embed-certs-024748" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:42.564867  438295 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:42.571982  438295 pod_ready.go:98] node "embed-certs-024748" hosting pod "kube-apiserver-embed-certs-024748" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:42.572027  438295 pod_ready.go:82] duration metric: took 7.150945ms for pod "kube-apiserver-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	E0819 19:12:42.572038  438295 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-024748" hosting pod "kube-apiserver-embed-certs-024748" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:42.572045  438295 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:42.648692  438295 pod_ready.go:98] node "embed-certs-024748" hosting pod "kube-controller-manager-embed-certs-024748" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:42.648721  438295 pod_ready.go:82] duration metric: took 76.665633ms for pod "kube-controller-manager-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	E0819 19:12:42.648730  438295 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-024748" hosting pod "kube-controller-manager-embed-certs-024748" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:42.648737  438295 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-bmmbh" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:43.045619  438295 pod_ready.go:98] node "embed-certs-024748" hosting pod "kube-proxy-bmmbh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:43.045648  438295 pod_ready.go:82] duration metric: took 396.90414ms for pod "kube-proxy-bmmbh" in "kube-system" namespace to be "Ready" ...
	E0819 19:12:43.045658  438295 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-024748" hosting pod "kube-proxy-bmmbh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:43.045665  438295 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:43.446302  438295 pod_ready.go:98] node "embed-certs-024748" hosting pod "kube-scheduler-embed-certs-024748" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:43.446331  438295 pod_ready.go:82] duration metric: took 400.658861ms for pod "kube-scheduler-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	E0819 19:12:43.446342  438295 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-024748" hosting pod "kube-scheduler-embed-certs-024748" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:43.446359  438295 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:43.845457  438295 pod_ready.go:98] node "embed-certs-024748" hosting pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:43.845488  438295 pod_ready.go:82] duration metric: took 399.120328ms for pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace to be "Ready" ...
	E0819 19:12:43.845499  438295 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-024748" hosting pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:43.845506  438295 pod_ready.go:39] duration metric: took 1.299593775s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:12:43.845526  438295 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 19:12:43.864357  438295 ops.go:34] apiserver oom_adj: -16
	I0819 19:12:43.864384  438295 kubeadm.go:597] duration metric: took 8.621518076s to restartPrimaryControlPlane
	I0819 19:12:43.864394  438295 kubeadm.go:394] duration metric: took 8.671725617s to StartCluster
	I0819 19:12:43.864414  438295 settings.go:142] acquiring lock: {Name:mk396fcf49a1d0e69583cf37ff3c819e37118163 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:43.864495  438295 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 19:12:43.866775  438295 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/kubeconfig: {Name:mk8e7b4e1bb7da665111d2acd83eb48882c66853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:43.867073  438295 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.96 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 19:12:43.867296  438295 config.go:182] Loaded profile config "embed-certs-024748": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:12:43.867195  438295 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 19:12:43.867354  438295 addons.go:69] Setting metrics-server=true in profile "embed-certs-024748"
	I0819 19:12:43.867362  438295 addons.go:69] Setting default-storageclass=true in profile "embed-certs-024748"
	I0819 19:12:43.867397  438295 addons.go:234] Setting addon metrics-server=true in "embed-certs-024748"
	I0819 19:12:43.867402  438295 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-024748"
	W0819 19:12:43.867409  438295 addons.go:243] addon metrics-server should already be in state true
	I0819 19:12:43.867437  438295 host.go:66] Checking if "embed-certs-024748" exists ...
	I0819 19:12:43.867354  438295 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-024748"
	I0819 19:12:43.867502  438295 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-024748"
	W0819 19:12:43.867514  438295 addons.go:243] addon storage-provisioner should already be in state true
	I0819 19:12:43.867538  438295 host.go:66] Checking if "embed-certs-024748" exists ...
	I0819 19:12:43.867761  438295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:43.867796  438295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:43.867839  438295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:43.867873  438295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:43.867889  438295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:43.867908  438295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:43.869989  438295 out.go:177] * Verifying Kubernetes components...
	I0819 19:12:43.871464  438295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:12:43.883655  438295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33557
	I0819 19:12:43.883871  438295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33763
	I0819 19:12:43.884279  438295 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:43.884323  438295 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:43.884790  438295 main.go:141] libmachine: Using API Version  1
	I0819 19:12:43.884809  438295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:43.884935  438295 main.go:141] libmachine: Using API Version  1
	I0819 19:12:43.884953  438295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:43.885204  438295 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:43.885275  438295 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:43.885380  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetState
	I0819 19:12:43.885886  438295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:43.885928  438295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:43.886840  438295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40467
	I0819 19:12:43.887309  438295 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:43.887792  438295 main.go:141] libmachine: Using API Version  1
	I0819 19:12:43.887802  438295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:43.888109  438295 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:43.888670  438295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:43.888697  438295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:43.888973  438295 addons.go:234] Setting addon default-storageclass=true in "embed-certs-024748"
	W0819 19:12:43.888988  438295 addons.go:243] addon default-storageclass should already be in state true
	I0819 19:12:43.889020  438295 host.go:66] Checking if "embed-certs-024748" exists ...
	I0819 19:12:43.889270  438295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:43.889304  438295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:43.905278  438295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40907
	I0819 19:12:43.905278  438295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41133
	I0819 19:12:43.905734  438295 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:43.905877  438295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33393
	I0819 19:12:43.905983  438295 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:43.906299  438295 main.go:141] libmachine: Using API Version  1
	I0819 19:12:43.906320  438295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:43.906366  438295 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:43.906443  438295 main.go:141] libmachine: Using API Version  1
	I0819 19:12:43.906457  438295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:43.906822  438295 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:43.906898  438295 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:43.906995  438295 main.go:141] libmachine: Using API Version  1
	I0819 19:12:43.907006  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetState
	I0819 19:12:43.907012  438295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:43.907371  438295 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:43.907473  438295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:43.907523  438295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:43.907534  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetState
	I0819 19:12:43.909443  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:43.909529  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:43.911431  438295 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 19:12:43.911437  438295 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:12:43.913061  438295 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 19:12:43.913090  438295 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 19:12:43.913115  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:43.913180  438295 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:12:43.913199  438295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 19:12:43.913216  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:43.916642  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:43.916813  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:43.917110  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:43.917135  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:43.917166  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:43.917193  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:43.917463  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:43.917668  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:43.917671  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:43.917846  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:43.917867  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:43.918014  438295 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa Username:docker}
	I0819 19:12:43.918032  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:43.918148  438295 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa Username:docker}
	I0819 19:12:43.926337  438295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46687
	I0819 19:12:43.926813  438295 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:43.927333  438295 main.go:141] libmachine: Using API Version  1
	I0819 19:12:43.927354  438295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:43.927762  438295 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:43.927965  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetState
	I0819 19:12:43.929591  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:43.929910  438295 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 19:12:43.929926  438295 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 19:12:43.929942  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:43.933032  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:43.933387  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:43.933406  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:43.933626  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:43.933850  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:43.933992  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:43.934118  438295 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa Username:docker}
	I0819 19:12:44.078901  438295 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:12:44.098542  438295 node_ready.go:35] waiting up to 6m0s for node "embed-certs-024748" to be "Ready" ...
	I0819 19:12:44.180050  438295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:12:44.196186  438295 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 19:12:44.196210  438295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 19:12:44.220001  438295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 19:12:44.231145  438295 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 19:12:44.231180  438295 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 19:12:44.267800  438295 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 19:12:44.267831  438295 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 19:12:44.323078  438295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 19:12:45.276298  438295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.096199779s)
	I0819 19:12:45.276336  438295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.056298773s)
	I0819 19:12:45.276383  438295 main.go:141] libmachine: Making call to close driver server
	I0819 19:12:45.276395  438295 main.go:141] libmachine: (embed-certs-024748) Calling .Close
	I0819 19:12:45.276385  438295 main.go:141] libmachine: Making call to close driver server
	I0819 19:12:45.276462  438295 main.go:141] libmachine: (embed-certs-024748) Calling .Close
	I0819 19:12:45.276714  438295 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:12:45.276757  438295 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:12:45.276777  438295 main.go:141] libmachine: Making call to close driver server
	I0819 19:12:45.276793  438295 main.go:141] libmachine: (embed-certs-024748) Calling .Close
	I0819 19:12:45.276860  438295 main.go:141] libmachine: (embed-certs-024748) DBG | Closing plugin on server side
	I0819 19:12:45.276874  438295 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:12:45.276940  438295 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:12:45.276956  438295 main.go:141] libmachine: Making call to close driver server
	I0819 19:12:45.276964  438295 main.go:141] libmachine: (embed-certs-024748) Calling .Close
	I0819 19:12:45.277134  438295 main.go:141] libmachine: (embed-certs-024748) DBG | Closing plugin on server side
	I0819 19:12:45.277195  438295 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:12:45.277239  438295 main.go:141] libmachine: (embed-certs-024748) DBG | Closing plugin on server side
	I0819 19:12:45.277258  438295 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:12:45.277277  438295 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:12:45.277304  438295 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:12:45.284982  438295 main.go:141] libmachine: Making call to close driver server
	I0819 19:12:45.285007  438295 main.go:141] libmachine: (embed-certs-024748) Calling .Close
	I0819 19:12:45.285304  438295 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:12:45.285324  438295 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:12:45.293973  438295 main.go:141] libmachine: Making call to close driver server
	I0819 19:12:45.293994  438295 main.go:141] libmachine: (embed-certs-024748) Calling .Close
	I0819 19:12:45.294247  438295 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:12:45.294265  438295 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:12:45.294274  438295 main.go:141] libmachine: Making call to close driver server
	I0819 19:12:45.294282  438295 main.go:141] libmachine: (embed-certs-024748) Calling .Close
	I0819 19:12:45.295704  438295 main.go:141] libmachine: (embed-certs-024748) DBG | Closing plugin on server side
	I0819 19:12:45.295787  438295 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:12:45.295813  438295 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:12:45.295828  438295 addons.go:475] Verifying addon metrics-server=true in "embed-certs-024748"
	I0819 19:12:45.297684  438295 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0819 19:12:41.765706  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:41.766129  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:41.766182  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:41.766093  439728 retry.go:31] will retry after 3.464758587s: waiting for machine to come up
	I0819 19:12:45.232640  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:45.233118  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:45.233151  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:45.233066  439728 retry.go:31] will retry after 3.551527195s: waiting for machine to come up
	I0819 19:12:43.694387  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:46.194627  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:45.298844  438295 addons.go:510] duration metric: took 1.431699078s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0819 19:12:46.103096  438295 node_ready.go:53] node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:48.603205  438295 node_ready.go:53] node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:50.084809  438001 start.go:364] duration metric: took 55.89796214s to acquireMachinesLock for "no-preload-278232"
	I0819 19:12:50.084884  438001 start.go:96] Skipping create...Using existing machine configuration
	I0819 19:12:50.084895  438001 fix.go:54] fixHost starting: 
	I0819 19:12:50.085416  438001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:50.085459  438001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:50.103796  438001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41569
	I0819 19:12:50.104278  438001 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:50.104900  438001 main.go:141] libmachine: Using API Version  1
	I0819 19:12:50.104934  438001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:50.105335  438001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:50.105544  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:12:50.105703  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetState
	I0819 19:12:50.107422  438001 fix.go:112] recreateIfNeeded on no-preload-278232: state=Stopped err=<nil>
	I0819 19:12:50.107444  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	W0819 19:12:50.107602  438001 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 19:12:50.109328  438001 out.go:177] * Restarting existing kvm2 VM for "no-preload-278232" ...
	I0819 19:12:48.787197  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.787586  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has current primary IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.787611  438716 main.go:141] libmachine: (old-k8s-version-104669) Found IP for machine: 192.168.50.32
	I0819 19:12:48.787625  438716 main.go:141] libmachine: (old-k8s-version-104669) Reserving static IP address...
	I0819 19:12:48.788104  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "old-k8s-version-104669", mac: "52:54:00:8c:ff:a3", ip: "192.168.50.32"} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:48.788140  438716 main.go:141] libmachine: (old-k8s-version-104669) Reserved static IP address: 192.168.50.32
	I0819 19:12:48.788164  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | skip adding static IP to network mk-old-k8s-version-104669 - found existing host DHCP lease matching {name: "old-k8s-version-104669", mac: "52:54:00:8c:ff:a3", ip: "192.168.50.32"}
	I0819 19:12:48.788186  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | Getting to WaitForSSH function...
	I0819 19:12:48.788202  438716 main.go:141] libmachine: (old-k8s-version-104669) Waiting for SSH to be available...
	I0819 19:12:48.790365  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.790765  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:48.790793  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.790994  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | Using SSH client type: external
	I0819 19:12:48.791034  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | Using SSH private key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa (-rw-------)
	I0819 19:12:48.791073  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 19:12:48.791087  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | About to run SSH command:
	I0819 19:12:48.791103  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | exit 0
	I0819 19:12:48.920087  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | SSH cmd err, output: <nil>: 
	I0819 19:12:48.920464  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetConfigRaw
	I0819 19:12:48.921105  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetIP
	I0819 19:12:48.923637  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.924022  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:48.924053  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.924242  438716 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/config.json ...
	I0819 19:12:48.924429  438716 machine.go:93] provisionDockerMachine start ...
	I0819 19:12:48.924447  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:12:48.924655  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:48.926885  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.927345  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:48.927376  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.927527  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:48.927723  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:48.927846  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:48.927968  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:48.928241  438716 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:48.928453  438716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0819 19:12:48.928475  438716 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 19:12:49.039908  438716 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 19:12:49.039944  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetMachineName
	I0819 19:12:49.040200  438716 buildroot.go:166] provisioning hostname "old-k8s-version-104669"
	I0819 19:12:49.040236  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetMachineName
	I0819 19:12:49.040454  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:49.043462  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.043860  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.043892  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.044061  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:49.044256  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.044472  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.044613  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:49.044837  438716 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:49.045014  438716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0819 19:12:49.045027  438716 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-104669 && echo "old-k8s-version-104669" | sudo tee /etc/hostname
	I0819 19:12:49.170660  438716 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-104669
	
	I0819 19:12:49.170695  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:49.173564  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.173855  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.173882  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.174059  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:49.174239  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.174432  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.174564  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:49.174732  438716 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:49.174923  438716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0819 19:12:49.174941  438716 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-104669' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-104669/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-104669' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 19:12:49.298689  438716 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:12:49.298731  438716 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19468-372744/.minikube CaCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19468-372744/.minikube}
	I0819 19:12:49.298764  438716 buildroot.go:174] setting up certificates
	I0819 19:12:49.298778  438716 provision.go:84] configureAuth start
	I0819 19:12:49.298793  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetMachineName
	I0819 19:12:49.299157  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetIP
	I0819 19:12:49.301897  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.302290  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.302326  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.302462  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:49.304592  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.304960  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.304987  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.305150  438716 provision.go:143] copyHostCerts
	I0819 19:12:49.305219  438716 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem, removing ...
	I0819 19:12:49.305243  438716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 19:12:49.305310  438716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem (1082 bytes)
	I0819 19:12:49.305437  438716 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem, removing ...
	I0819 19:12:49.305449  438716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 19:12:49.305477  438716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem (1123 bytes)
	I0819 19:12:49.305571  438716 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem, removing ...
	I0819 19:12:49.305583  438716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 19:12:49.305612  438716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem (1675 bytes)
	I0819 19:12:49.305699  438716 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-104669 san=[127.0.0.1 192.168.50.32 localhost minikube old-k8s-version-104669]
	I0819 19:12:49.394004  438716 provision.go:177] copyRemoteCerts
	I0819 19:12:49.394074  438716 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 19:12:49.394112  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:49.396645  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.396906  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.396951  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.397108  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:49.397321  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.397504  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:49.397709  438716 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa Username:docker}
	I0819 19:12:49.483061  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 19:12:49.508297  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 19:12:49.533821  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0819 19:12:49.560064  438716 provision.go:87] duration metric: took 261.270909ms to configureAuth
	I0819 19:12:49.560093  438716 buildroot.go:189] setting minikube options for container-runtime
	I0819 19:12:49.560310  438716 config.go:182] Loaded profile config "old-k8s-version-104669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0819 19:12:49.560409  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:49.563173  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.563604  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.563633  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.563882  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:49.564075  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.564274  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.564479  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:49.564707  438716 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:49.564925  438716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0819 19:12:49.564948  438716 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 19:12:49.837237  438716 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 19:12:49.837267  438716 machine.go:96] duration metric: took 912.825625ms to provisionDockerMachine
	I0819 19:12:49.837281  438716 start.go:293] postStartSetup for "old-k8s-version-104669" (driver="kvm2")
	I0819 19:12:49.837297  438716 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 19:12:49.837341  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:12:49.837716  438716 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 19:12:49.837757  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:49.840409  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.840759  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.840789  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.840988  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:49.841183  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.841345  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:49.841473  438716 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa Username:docker}
	I0819 19:12:49.931067  438716 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 19:12:49.935562  438716 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 19:12:49.935590  438716 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/addons for local assets ...
	I0819 19:12:49.935694  438716 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/files for local assets ...
	I0819 19:12:49.935815  438716 filesync.go:149] local asset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> 3800092.pem in /etc/ssl/certs
	I0819 19:12:49.935941  438716 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 19:12:49.945418  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:12:49.969454  438716 start.go:296] duration metric: took 132.15677ms for postStartSetup
	I0819 19:12:49.969494  438716 fix.go:56] duration metric: took 20.836438665s for fixHost
	I0819 19:12:49.969517  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:49.972127  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.972502  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.972542  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.972758  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:49.973000  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.973190  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.973355  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:49.973548  438716 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:49.973753  438716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0819 19:12:49.973766  438716 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 19:12:50.084645  438716 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724094770.056929881
	
	I0819 19:12:50.084672  438716 fix.go:216] guest clock: 1724094770.056929881
	I0819 19:12:50.084681  438716 fix.go:229] Guest: 2024-08-19 19:12:50.056929881 +0000 UTC Remote: 2024-08-19 19:12:49.969497734 +0000 UTC m=+259.472837552 (delta=87.432147ms)
	I0819 19:12:50.084711  438716 fix.go:200] guest clock delta is within tolerance: 87.432147ms
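For reference, the delta reported here is simply guest clock minus host clock: 19:12:50.056929881 - 19:12:49.969497734 ≈ 0.087432147 s, i.e. the 87.432147ms that the fix step reports as within tolerance, so no clock adjustment is needed.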
	I0819 19:12:50.084718  438716 start.go:83] releasing machines lock for "old-k8s-version-104669", held for 20.951701853s
	I0819 19:12:50.084752  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:12:50.085050  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetIP
	I0819 19:12:50.087976  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:50.088363  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:50.088391  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:50.088572  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:12:50.089141  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:12:50.089360  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:12:50.089460  438716 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 19:12:50.089526  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:50.089572  438716 ssh_runner.go:195] Run: cat /version.json
	I0819 19:12:50.089599  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:50.092427  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:50.092591  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:50.092772  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:50.092797  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:50.092933  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:50.092965  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:50.092965  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:50.093147  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:50.093248  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:50.093328  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:50.093409  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:50.093503  438716 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa Username:docker}
	I0819 19:12:50.093532  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:50.093650  438716 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa Username:docker}
	I0819 19:12:50.177322  438716 ssh_runner.go:195] Run: systemctl --version
	I0819 19:12:50.200999  438716 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 19:12:50.349276  438716 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 19:12:50.357011  438716 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 19:12:50.357090  438716 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 19:12:50.377691  438716 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 19:12:50.377721  438716 start.go:495] detecting cgroup driver to use...
	I0819 19:12:50.377790  438716 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 19:12:50.394502  438716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 19:12:50.408481  438716 docker.go:217] disabling cri-docker service (if available) ...
	I0819 19:12:50.408556  438716 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 19:12:50.421818  438716 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 19:12:50.434899  438716 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 19:12:50.559399  438716 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 19:12:50.708621  438716 docker.go:233] disabling docker service ...
	I0819 19:12:50.708695  438716 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 19:12:50.726699  438716 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 19:12:50.740605  438716 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 19:12:50.896815  438716 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 19:12:51.037560  438716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 19:12:51.052554  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 19:12:51.072292  438716 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0819 19:12:51.072360  438716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:51.083248  438716 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 19:12:51.083334  438716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:51.093721  438716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:51.105212  438716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
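Taken together, the three sed edits above pin the pause image, the cgroup manager and the conmon cgroup in the CRI-O drop-in. A quick sanity check on the guest (hypothetical, not captured in this log) would be expected to show:

	$ grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	pause_image = "registry.k8s.io/pause:3.2"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"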
	I0819 19:12:51.119349  438716 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 19:12:51.134647  438716 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 19:12:51.144553  438716 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 19:12:51.144598  438716 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 19:12:51.159151  438716 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
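The sysctl failure above is expected: /proc/sys/net/bridge/bridge-nf-call-iptables only exists once the br_netfilter kernel module is loaded, which is exactly what the subsequent modprobe does before IPv4 forwarding is enabled. Verifying by hand (not part of this run) would look like:

	lsmod | grep br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables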
	I0819 19:12:51.171260  438716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:12:51.328931  438716 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 19:12:51.500761  438716 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 19:12:51.500831  438716 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 19:12:51.505982  438716 start.go:563] Will wait 60s for crictl version
	I0819 19:12:51.506057  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:51.510447  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 19:12:51.552892  438716 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 19:12:51.552982  438716 ssh_runner.go:195] Run: crio --version
	I0819 19:12:51.581931  438716 ssh_runner.go:195] Run: crio --version
	I0819 19:12:51.614565  438716 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0819 19:12:50.110718  438001 main.go:141] libmachine: (no-preload-278232) Calling .Start
	I0819 19:12:50.110888  438001 main.go:141] libmachine: (no-preload-278232) Ensuring networks are active...
	I0819 19:12:50.111809  438001 main.go:141] libmachine: (no-preload-278232) Ensuring network default is active
	I0819 19:12:50.112149  438001 main.go:141] libmachine: (no-preload-278232) Ensuring network mk-no-preload-278232 is active
	I0819 19:12:50.112709  438001 main.go:141] libmachine: (no-preload-278232) Getting domain xml...
	I0819 19:12:50.113441  438001 main.go:141] libmachine: (no-preload-278232) Creating domain...
	I0819 19:12:51.494803  438001 main.go:141] libmachine: (no-preload-278232) Waiting to get IP...
	I0819 19:12:51.495733  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:51.496203  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:51.496302  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:51.496187  439925 retry.go:31] will retry after 190.334257ms: waiting for machine to come up
	I0819 19:12:48.694017  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:50.694533  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:51.102764  438295 node_ready.go:49] node "embed-certs-024748" has status "Ready":"True"
	I0819 19:12:51.102791  438295 node_ready.go:38] duration metric: took 7.004204889s for node "embed-certs-024748" to be "Ready" ...
	I0819 19:12:51.102814  438295 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:12:51.109122  438295 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-7ww4z" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.114649  438295 pod_ready.go:93] pod "coredns-6f6b679f8f-7ww4z" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:51.114679  438295 pod_ready.go:82] duration metric: took 5.529339ms for pod "coredns-6f6b679f8f-7ww4z" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.114692  438295 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.121699  438295 pod_ready.go:93] pod "etcd-embed-certs-024748" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:51.121729  438295 pod_ready.go:82] duration metric: took 7.027906ms for pod "etcd-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.121742  438295 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.129040  438295 pod_ready.go:93] pod "kube-apiserver-embed-certs-024748" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:51.129066  438295 pod_ready.go:82] duration metric: took 7.315166ms for pod "kube-apiserver-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.129078  438295 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.636173  438295 pod_ready.go:93] pod "kube-controller-manager-embed-certs-024748" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:51.636226  438295 pod_ready.go:82] duration metric: took 507.130455ms for pod "kube-controller-manager-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.636243  438295 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bmmbh" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.904734  438295 pod_ready.go:93] pod "kube-proxy-bmmbh" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:51.904776  438295 pod_ready.go:82] duration metric: took 268.522999ms for pod "kube-proxy-bmmbh" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.904806  438295 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:53.911857  438295 pod_ready.go:103] pod "kube-scheduler-embed-certs-024748" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:51.615865  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetIP
	I0819 19:12:51.618782  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:51.619238  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:51.619268  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:51.619508  438716 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0819 19:12:51.624020  438716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:12:51.640765  438716 kubeadm.go:883] updating cluster {Name:old-k8s-version-104669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-104669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 19:12:51.640905  438716 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 19:12:51.640982  438716 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:12:51.696872  438716 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 19:12:51.696931  438716 ssh_runner.go:195] Run: which lz4
	I0819 19:12:51.702194  438716 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 19:12:51.707228  438716 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 19:12:51.707265  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0819 19:12:53.435062  438716 crio.go:462] duration metric: took 1.732918912s to copy over tarball
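That works out to 473,237,281 bytes moved in roughly 1.73 s, i.e. on the order of 270 MB/s (~260 MiB/s) over the SSH copy, before the tarball is extracted below.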
	I0819 19:12:53.435149  438716 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 19:12:51.688680  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:51.689287  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:51.689326  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:51.689222  439925 retry.go:31] will retry after 351.943478ms: waiting for machine to come up
	I0819 19:12:52.042810  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:52.043142  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:52.043163  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:52.043070  439925 retry.go:31] will retry after 332.731922ms: waiting for machine to come up
	I0819 19:12:52.377750  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:52.378418  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:52.378442  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:52.378377  439925 retry.go:31] will retry after 601.079013ms: waiting for machine to come up
	I0819 19:12:52.980930  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:52.981446  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:52.981474  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:52.981396  439925 retry.go:31] will retry after 621.686612ms: waiting for machine to come up
	I0819 19:12:53.605240  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:53.605716  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:53.605751  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:53.605666  439925 retry.go:31] will retry after 627.115747ms: waiting for machine to come up
	I0819 19:12:54.234095  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:54.234590  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:54.234613  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:54.234541  439925 retry.go:31] will retry after 1.137953362s: waiting for machine to come up
	I0819 19:12:55.373941  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:55.374412  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:55.374440  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:55.374368  439925 retry.go:31] will retry after 1.437610965s: waiting for machine to come up
	I0819 19:12:52.696277  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:54.704463  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:57.195001  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:55.412162  438295 pod_ready.go:93] pod "kube-scheduler-embed-certs-024748" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:55.412198  438295 pod_ready.go:82] duration metric: took 3.507380249s for pod "kube-scheduler-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:55.412214  438295 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:57.419600  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:56.399941  438716 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.96472478s)
	I0819 19:12:56.399971  438716 crio.go:469] duration metric: took 2.964877539s to extract the tarball
	I0819 19:12:56.399986  438716 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 19:12:56.447075  438716 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:12:56.491773  438716 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 19:12:56.491800  438716 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 19:12:56.491876  438716 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:12:56.491876  438716 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:12:56.491956  438716 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:12:56.491961  438716 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:12:56.492041  438716 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:12:56.492059  438716 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0819 19:12:56.492280  438716 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0819 19:12:56.492494  438716 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0819 19:12:56.493750  438716 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:12:56.493762  438716 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0819 19:12:56.493756  438716 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:12:56.493762  438716 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:12:56.493765  438716 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:12:56.493831  438716 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:12:56.493806  438716 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0819 19:12:56.494099  438716 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0819 19:12:56.694872  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0819 19:12:56.711504  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0819 19:12:56.754045  438716 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0819 19:12:56.754096  438716 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0819 19:12:56.754136  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:56.770451  438716 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0819 19:12:56.770510  438716 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0819 19:12:56.770574  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:56.770573  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 19:12:56.804839  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 19:12:56.804872  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 19:12:56.825837  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:12:56.832063  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0819 19:12:56.834072  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:12:56.837029  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:12:56.837697  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:12:56.902843  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 19:12:56.902930  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 19:12:57.020902  438716 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0819 19:12:57.020962  438716 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:12:57.020988  438716 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0819 19:12:57.021017  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:57.021025  438716 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0819 19:12:57.021098  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:57.023363  438716 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0819 19:12:57.023411  438716 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:12:57.023457  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:57.023541  438716 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0819 19:12:57.023569  438716 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:12:57.023605  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:57.034648  438716 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0819 19:12:57.034698  438716 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:12:57.034719  438716 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0819 19:12:57.034748  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:57.039577  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 19:12:57.039648  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 19:12:57.039715  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:12:57.041644  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:12:57.041983  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:12:57.045383  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:12:57.149677  438716 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0819 19:12:57.164701  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:12:57.164821  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 19:12:57.202353  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:12:57.202434  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:12:57.202465  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:12:57.258824  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 19:12:57.258858  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:12:57.285756  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:12:57.326148  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:12:57.326237  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:12:57.378322  438716 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0819 19:12:57.378369  438716 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0819 19:12:57.390369  438716 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0819 19:12:57.419554  438716 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0819 19:12:57.419627  438716 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0819 19:12:57.438485  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:12:57.583634  438716 cache_images.go:92] duration metric: took 1.091812972s to LoadCachedImages
	W0819 19:12:57.583757  438716 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0819 19:12:57.583777  438716 kubeadm.go:934] updating node { 192.168.50.32 8443 v1.20.0 crio true true} ...
	I0819 19:12:57.583915  438716 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-104669 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-104669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 19:12:57.584007  438716 ssh_runner.go:195] Run: crio config
	I0819 19:12:57.636714  438716 cni.go:84] Creating CNI manager for ""
	I0819 19:12:57.636738  438716 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:12:57.636752  438716 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 19:12:57.636776  438716 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.32 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-104669 NodeName:old-k8s-version-104669 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0819 19:12:57.636951  438716 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.32
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-104669"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 19:12:57.637028  438716 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0819 19:12:57.648002  438716 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 19:12:57.648093  438716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 19:12:57.658889  438716 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0819 19:12:57.677316  438716 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 19:12:57.695825  438716 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0819 19:12:57.715396  438716 ssh_runner.go:195] Run: grep 192.168.50.32	control-plane.minikube.internal$ /etc/hosts
	I0819 19:12:57.719886  438716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:12:57.733179  438716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:12:57.854139  438716 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:12:57.871590  438716 certs.go:68] Setting up /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669 for IP: 192.168.50.32
	I0819 19:12:57.871619  438716 certs.go:194] generating shared ca certs ...
	I0819 19:12:57.871642  438716 certs.go:226] acquiring lock for ca certs: {Name:mk639e03f593e0bccac045f6e9f5ba3b96cc81e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:57.871850  438716 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key
	I0819 19:12:57.871916  438716 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key
	I0819 19:12:57.871930  438716 certs.go:256] generating profile certs ...
	I0819 19:12:57.872060  438716 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/client.key
	I0819 19:12:57.872131  438716 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/apiserver.key.7101f8a0
	I0819 19:12:57.872197  438716 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/proxy-client.key
	I0819 19:12:57.872336  438716 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem (1338 bytes)
	W0819 19:12:57.872365  438716 certs.go:480] ignoring /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009_empty.pem, impossibly tiny 0 bytes
	I0819 19:12:57.872371  438716 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 19:12:57.872390  438716 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem (1082 bytes)
	I0819 19:12:57.872419  438716 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem (1123 bytes)
	I0819 19:12:57.872441  438716 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem (1675 bytes)
	I0819 19:12:57.872488  438716 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:12:57.873259  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 19:12:57.907576  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 19:12:57.943535  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 19:12:57.977770  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 19:12:58.021213  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0819 19:12:58.051043  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 19:12:58.080442  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 19:12:58.110888  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 19:12:58.158635  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 19:12:58.184168  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem --> /usr/share/ca-certificates/380009.pem (1338 bytes)
	I0819 19:12:58.210064  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /usr/share/ca-certificates/3800092.pem (1708 bytes)
	I0819 19:12:58.235366  438716 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 19:12:58.254667  438716 ssh_runner.go:195] Run: openssl version
	I0819 19:12:58.260977  438716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3800092.pem && ln -fs /usr/share/ca-certificates/3800092.pem /etc/ssl/certs/3800092.pem"
	I0819 19:12:58.272995  438716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3800092.pem
	I0819 19:12:58.278056  438716 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:56 /usr/share/ca-certificates/3800092.pem
	I0819 19:12:58.278154  438716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3800092.pem
	I0819 19:12:58.284420  438716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3800092.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 19:12:58.296945  438716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 19:12:58.309288  438716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:58.314695  438716 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:58.314774  438716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:58.321016  438716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 19:12:58.332728  438716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/380009.pem && ln -fs /usr/share/ca-certificates/380009.pem /etc/ssl/certs/380009.pem"
	I0819 19:12:58.344766  438716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/380009.pem
	I0819 19:12:58.349610  438716 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:56 /usr/share/ca-certificates/380009.pem
	I0819 19:12:58.349681  438716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/380009.pem
	I0819 19:12:58.355942  438716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/380009.pem /etc/ssl/certs/51391683.0"
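Each of the three link names used above (/etc/ssl/certs/3ec20f2e.0, b5213941.0 and 51391683.0) is the OpenSSL subject-name hash of the corresponding certificate with a ".0" suffix, which is why the log runs openssl x509 -hash -noout on each PEM immediately before creating the symlink. For example (output inferred from the link name, not captured in this log):

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	b5213941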
	I0819 19:12:58.368869  438716 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 19:12:58.373681  438716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 19:12:58.380415  438716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 19:12:58.386741  438716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 19:12:58.393362  438716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 19:12:58.399665  438716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 19:12:58.406108  438716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0819 19:12:58.412486  438716 kubeadm.go:392] StartCluster: {Name:old-k8s-version-104669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-104669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:12:58.412606  438716 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 19:12:58.412655  438716 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:12:58.462379  438716 cri.go:89] found id: ""
	I0819 19:12:58.462463  438716 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 19:12:58.474029  438716 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 19:12:58.474054  438716 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 19:12:58.474112  438716 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 19:12:58.485755  438716 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:12:58.486762  438716 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-104669" does not appear in /home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 19:12:58.487464  438716 kubeconfig.go:62] /home/jenkins/minikube-integration/19468-372744/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-104669" cluster setting kubeconfig missing "old-k8s-version-104669" context setting]
	I0819 19:12:58.489361  438716 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/kubeconfig: {Name:mk8e7b4e1bb7da665111d2acd83eb48882c66853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:58.508865  438716 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 19:12:58.520577  438716 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.32
	I0819 19:12:58.520622  438716 kubeadm.go:1160] stopping kube-system containers ...
	I0819 19:12:58.520637  438716 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 19:12:58.520728  438716 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:12:58.561900  438716 cri.go:89] found id: ""
	I0819 19:12:58.561984  438716 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 19:12:58.580483  438716 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:12:58.591734  438716 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:12:58.591754  438716 kubeadm.go:157] found existing configuration files:
	
	I0819 19:12:58.591804  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 19:12:58.601694  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:12:58.601771  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:12:58.612132  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 19:12:58.621911  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:12:58.621984  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:12:58.631525  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 19:12:58.640802  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:12:58.640872  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:12:58.650216  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 19:12:58.660647  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:12:58.660720  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 19:12:58.669992  438716 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 19:12:58.679709  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:58.809302  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:59.757994  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:00.006386  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:00.136752  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:00.222424  438716 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:13:00.222542  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:56.813279  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:56.813777  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:56.813807  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:56.813725  439925 retry.go:31] will retry after 1.504132921s: waiting for machine to come up
	I0819 19:12:58.319408  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:58.319880  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:58.319910  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:58.319832  439925 retry.go:31] will retry after 1.921699926s: waiting for machine to come up
	I0819 19:13:00.243504  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:00.243995  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:13:00.244021  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:13:00.243952  439925 retry.go:31] will retry after 2.040704792s: waiting for machine to come up
	I0819 19:12:59.195084  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:01.693648  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:59.419644  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:01.918769  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:00.723213  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:01.222908  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:01.723081  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:02.223465  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:02.722589  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:03.222706  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:03.722930  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:04.222826  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:04.722638  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:05.222666  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:02.287044  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:02.287490  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:13:02.287526  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:13:02.287416  439925 retry.go:31] will retry after 2.562055052s: waiting for machine to come up
	I0819 19:13:04.852682  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:04.853097  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:13:04.853125  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:13:04.853062  439925 retry.go:31] will retry after 3.627213972s: waiting for machine to come up
	I0819 19:13:04.194149  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:06.194831  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:04.418550  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:06.919083  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:05.723627  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:06.222663  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:06.723230  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:07.222666  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:07.722653  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:08.222861  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:08.723248  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:09.222831  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:09.722738  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:10.223069  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:08.484125  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.484586  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has current primary IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.484612  438001 main.go:141] libmachine: (no-preload-278232) Found IP for machine: 192.168.39.106
	I0819 19:13:08.484642  438001 main.go:141] libmachine: (no-preload-278232) Reserving static IP address...
	I0819 19:13:08.485049  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "no-preload-278232", mac: "52:54:00:14:f3:b1", ip: "192.168.39.106"} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:08.485091  438001 main.go:141] libmachine: (no-preload-278232) Reserved static IP address: 192.168.39.106
	I0819 19:13:08.485112  438001 main.go:141] libmachine: (no-preload-278232) DBG | skip adding static IP to network mk-no-preload-278232 - found existing host DHCP lease matching {name: "no-preload-278232", mac: "52:54:00:14:f3:b1", ip: "192.168.39.106"}
	I0819 19:13:08.485129  438001 main.go:141] libmachine: (no-preload-278232) DBG | Getting to WaitForSSH function...
	I0819 19:13:08.485145  438001 main.go:141] libmachine: (no-preload-278232) Waiting for SSH to be available...
	I0819 19:13:08.486998  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.487266  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:08.487290  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.487402  438001 main.go:141] libmachine: (no-preload-278232) DBG | Using SSH client type: external
	I0819 19:13:08.487429  438001 main.go:141] libmachine: (no-preload-278232) DBG | Using SSH private key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa (-rw-------)
	I0819 19:13:08.487463  438001 main.go:141] libmachine: (no-preload-278232) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.106 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 19:13:08.487476  438001 main.go:141] libmachine: (no-preload-278232) DBG | About to run SSH command:
	I0819 19:13:08.487487  438001 main.go:141] libmachine: (no-preload-278232) DBG | exit 0
	I0819 19:13:08.611459  438001 main.go:141] libmachine: (no-preload-278232) DBG | SSH cmd err, output: <nil>: 
	I0819 19:13:08.611934  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetConfigRaw
	I0819 19:13:08.612610  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetIP
	I0819 19:13:08.615212  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.615564  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:08.615594  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.615919  438001 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232/config.json ...
	I0819 19:13:08.616140  438001 machine.go:93] provisionDockerMachine start ...
	I0819 19:13:08.616162  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:08.616387  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:08.618650  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.618956  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:08.618988  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.619098  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:08.619291  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:08.619433  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:08.619569  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:08.619727  438001 main.go:141] libmachine: Using SSH client type: native
	I0819 19:13:08.619893  438001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0819 19:13:08.619903  438001 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 19:13:08.724912  438001 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 19:13:08.724955  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetMachineName
	I0819 19:13:08.725264  438001 buildroot.go:166] provisioning hostname "no-preload-278232"
	I0819 19:13:08.725291  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetMachineName
	I0819 19:13:08.725486  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:08.728810  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.729237  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:08.729274  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.729434  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:08.729667  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:08.729887  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:08.730067  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:08.730244  438001 main.go:141] libmachine: Using SSH client type: native
	I0819 19:13:08.730490  438001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0819 19:13:08.730511  438001 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-278232 && echo "no-preload-278232" | sudo tee /etc/hostname
	I0819 19:13:08.854474  438001 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-278232
	
	I0819 19:13:08.854499  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:08.857179  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.857511  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:08.857540  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.857713  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:08.857912  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:08.858075  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:08.858189  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:08.858356  438001 main.go:141] libmachine: Using SSH client type: native
	I0819 19:13:08.858556  438001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0819 19:13:08.858579  438001 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-278232' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-278232/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-278232' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 19:13:08.973053  438001 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:13:08.973090  438001 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19468-372744/.minikube CaCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19468-372744/.minikube}
	I0819 19:13:08.973115  438001 buildroot.go:174] setting up certificates
	I0819 19:13:08.973125  438001 provision.go:84] configureAuth start
	I0819 19:13:08.973135  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetMachineName
	I0819 19:13:08.973417  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetIP
	I0819 19:13:08.976100  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.976459  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:08.976487  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.976690  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:08.978902  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.979342  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:08.979370  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.979530  438001 provision.go:143] copyHostCerts
	I0819 19:13:08.979605  438001 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem, removing ...
	I0819 19:13:08.979628  438001 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 19:13:08.979717  438001 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem (1082 bytes)
	I0819 19:13:08.979830  438001 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem, removing ...
	I0819 19:13:08.979842  438001 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 19:13:08.979874  438001 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem (1123 bytes)
	I0819 19:13:08.979963  438001 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem, removing ...
	I0819 19:13:08.979974  438001 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 19:13:08.980002  438001 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem (1675 bytes)
	I0819 19:13:08.980075  438001 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem org=jenkins.no-preload-278232 san=[127.0.0.1 192.168.39.106 localhost minikube no-preload-278232]
	I0819 19:13:09.092643  438001 provision.go:177] copyRemoteCerts
	I0819 19:13:09.092707  438001 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 19:13:09.092739  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:09.095542  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.095929  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:09.095960  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.096099  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:09.096318  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:09.096481  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:09.096635  438001 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa Username:docker}
	I0819 19:13:09.179713  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 19:13:09.206363  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0819 19:13:09.231180  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 19:13:09.256764  438001 provision.go:87] duration metric: took 283.626537ms to configureAuth
	I0819 19:13:09.256810  438001 buildroot.go:189] setting minikube options for container-runtime
	I0819 19:13:09.256993  438001 config.go:182] Loaded profile config "no-preload-278232": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:13:09.257079  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:09.259661  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.260061  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:09.260094  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.260253  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:09.260461  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:09.260640  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:09.260796  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:09.260973  438001 main.go:141] libmachine: Using SSH client type: native
	I0819 19:13:09.261150  438001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0819 19:13:09.261166  438001 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 19:13:09.534325  438001 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 19:13:09.534357  438001 machine.go:96] duration metric: took 918.201944ms to provisionDockerMachine
	I0819 19:13:09.534371  438001 start.go:293] postStartSetup for "no-preload-278232" (driver="kvm2")
	I0819 19:13:09.534387  438001 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 19:13:09.534412  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:09.534794  438001 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 19:13:09.534826  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:09.537623  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.537974  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:09.538002  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.538138  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:09.538349  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:09.538534  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:09.538669  438001 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa Username:docker}
	I0819 19:13:09.627085  438001 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 19:13:09.631714  438001 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 19:13:09.631740  438001 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/addons for local assets ...
	I0819 19:13:09.631817  438001 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/files for local assets ...
	I0819 19:13:09.631911  438001 filesync.go:149] local asset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> 3800092.pem in /etc/ssl/certs
	I0819 19:13:09.632035  438001 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 19:13:09.642942  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:13:09.669242  438001 start.go:296] duration metric: took 134.853886ms for postStartSetup
	I0819 19:13:09.669294  438001 fix.go:56] duration metric: took 19.584399031s for fixHost
	I0819 19:13:09.669325  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:09.672072  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.672461  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:09.672494  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.672635  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:09.672937  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:09.673116  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:09.673331  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:09.673517  438001 main.go:141] libmachine: Using SSH client type: native
	I0819 19:13:09.673699  438001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0819 19:13:09.673717  438001 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 19:13:09.780601  438001 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724094789.749951838
	
	I0819 19:13:09.780628  438001 fix.go:216] guest clock: 1724094789.749951838
	I0819 19:13:09.780640  438001 fix.go:229] Guest: 2024-08-19 19:13:09.749951838 +0000 UTC Remote: 2024-08-19 19:13:09.669301343 +0000 UTC m=+358.073543000 (delta=80.650495ms)
	I0819 19:13:09.780668  438001 fix.go:200] guest clock delta is within tolerance: 80.650495ms
	I0819 19:13:09.780676  438001 start.go:83] releasing machines lock for "no-preload-278232", held for 19.69582363s
	I0819 19:13:09.780703  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:09.781042  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetIP
	I0819 19:13:09.783578  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.783967  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:09.783996  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.784149  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:09.784649  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:09.784855  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:09.784946  438001 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 19:13:09.785037  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:09.785073  438001 ssh_runner.go:195] Run: cat /version.json
	I0819 19:13:09.785107  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:09.787346  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.787706  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.787763  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:09.787788  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.787977  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:09.788162  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:09.788226  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:09.788251  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.788327  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:09.788447  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:09.788500  438001 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa Username:docker}
	I0819 19:13:09.788622  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:09.788805  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:09.788994  438001 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa Username:docker}
	I0819 19:13:09.864596  438001 ssh_runner.go:195] Run: systemctl --version
	I0819 19:13:09.890038  438001 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 19:13:10.039016  438001 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 19:13:10.045269  438001 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 19:13:10.045352  438001 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 19:13:10.061345  438001 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 19:13:10.061380  438001 start.go:495] detecting cgroup driver to use...
	I0819 19:13:10.061467  438001 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 19:13:10.079229  438001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 19:13:10.094396  438001 docker.go:217] disabling cri-docker service (if available) ...
	I0819 19:13:10.094471  438001 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 19:13:10.109307  438001 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 19:13:10.123389  438001 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 19:13:10.241132  438001 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 19:13:10.395346  438001 docker.go:233] disabling docker service ...
	I0819 19:13:10.395444  438001 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 19:13:10.409604  438001 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 19:13:10.424149  438001 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 19:13:10.544180  438001 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 19:13:10.671038  438001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 19:13:10.685563  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 19:13:10.704754  438001 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 19:13:10.704819  438001 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:13:10.716002  438001 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 19:13:10.716077  438001 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:13:10.728085  438001 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:13:10.739292  438001 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:13:10.750083  438001 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 19:13:10.760832  438001 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:13:10.771231  438001 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:13:10.788807  438001 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:13:10.799472  438001 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 19:13:10.809354  438001 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 19:13:10.809432  438001 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 19:13:10.824339  438001 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 19:13:10.833761  438001 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:13:10.953587  438001 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 19:13:11.091264  438001 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 19:13:11.091336  438001 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 19:13:11.096092  438001 start.go:563] Will wait 60s for crictl version
	I0819 19:13:11.096161  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:13:11.100040  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 19:13:11.142512  438001 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 19:13:11.142612  438001 ssh_runner.go:195] Run: crio --version
	I0819 19:13:11.176967  438001 ssh_runner.go:195] Run: crio --version
	I0819 19:13:11.208687  438001 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 19:13:11.209819  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetIP
	I0819 19:13:11.212533  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:11.212876  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:11.212900  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:11.213098  438001 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 19:13:11.217234  438001 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:13:11.229995  438001 kubeadm.go:883] updating cluster {Name:no-preload-278232 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:no-preload-278232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.106 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 19:13:11.230124  438001 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 19:13:11.230168  438001 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:13:11.265699  438001 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 19:13:11.265730  438001 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 19:13:11.265816  438001 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 19:13:11.265836  438001 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 19:13:11.265843  438001 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0819 19:13:11.265816  438001 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:13:11.265875  438001 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 19:13:11.265941  438001 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0819 19:13:11.265955  438001 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 19:13:11.266027  438001 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 19:13:11.267344  438001 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0819 19:13:11.267364  438001 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 19:13:11.267344  438001 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 19:13:11.267408  438001 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0819 19:13:11.267349  438001 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 19:13:11.267445  438001 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 19:13:11.267408  438001 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 19:13:11.267407  438001 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:13:11.411117  438001 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0819 19:13:11.435022  438001 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0819 19:13:11.437707  438001 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0819 19:13:11.439226  438001 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 19:13:11.446384  438001 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0819 19:13:11.448011  438001 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0819 19:13:11.463921  438001 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0819 19:13:11.476902  438001 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0819 19:13:11.476956  438001 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 19:13:11.477011  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:13:11.561762  438001 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0819 19:13:11.561827  438001 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 19:13:11.561889  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:13:08.694513  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:11.193505  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:09.419409  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:11.919413  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:13.931174  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:10.722882  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:11.223650  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:11.722917  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:12.223146  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:12.723410  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:13.222692  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:13.722636  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:14.223152  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:14.722661  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:15.223297  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:11.657022  438001 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0819 19:13:11.657071  438001 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0819 19:13:11.657092  438001 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0819 19:13:11.657123  438001 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 19:13:11.657127  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:13:11.657164  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:13:11.657176  438001 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0819 19:13:11.657195  438001 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0819 19:13:11.657217  438001 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 19:13:11.657216  438001 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 19:13:11.657254  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:13:11.657260  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:13:11.729671  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0819 19:13:11.729903  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0819 19:13:11.730476  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 19:13:11.730489  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0819 19:13:11.730510  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0819 19:13:11.730544  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0819 19:13:11.853411  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0819 19:13:11.853647  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0819 19:13:11.872296  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0819 19:13:11.872370  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0819 19:13:11.876801  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 19:13:11.877002  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0819 19:13:11.982642  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0819 19:13:12.007940  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0819 19:13:12.031132  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0819 19:13:12.031150  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0819 19:13:12.031163  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 19:13:12.031275  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0819 19:13:12.130991  438001 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0819 19:13:12.131099  438001 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0819 19:13:12.130994  438001 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0819 19:13:12.131231  438001 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0819 19:13:12.162852  438001 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0819 19:13:12.162911  438001 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0819 19:13:12.162916  438001 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0819 19:13:12.162967  438001 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0819 19:13:12.162984  438001 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0819 19:13:12.162984  438001 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0819 19:13:12.163035  438001 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0819 19:13:12.163044  438001 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0819 19:13:12.163053  438001 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0819 19:13:12.163055  438001 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0819 19:13:12.163086  438001 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0819 19:13:12.163095  438001 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0819 19:13:12.177377  438001 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0819 19:13:12.177438  438001 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0819 19:13:12.177438  438001 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0819 19:13:12.229301  438001 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:13:14.745129  438001 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.582015913s)
	I0819 19:13:14.745162  438001 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0819 19:13:14.745196  438001 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0: (2.582131532s)
	I0819 19:13:14.745215  438001 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.515891614s)
	I0819 19:13:14.745232  438001 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0819 19:13:14.745200  438001 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0819 19:13:14.745247  438001 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0819 19:13:14.745285  438001 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:13:14.745298  438001 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0819 19:13:14.745325  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:13:13.693752  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:15.693871  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:16.419552  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:18.920189  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:15.723053  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:16.223486  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:16.722740  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:17.223337  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:17.723160  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:18.222651  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:18.723509  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:19.223686  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:19.723376  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:20.222953  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:16.728557  438001 ssh_runner.go:235] Completed: which crictl: (1.983204878s)
	I0819 19:13:16.728614  438001 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.983294709s)
	I0819 19:13:16.728635  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:13:16.728642  438001 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0819 19:13:16.728673  438001 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0819 19:13:16.728714  438001 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0819 19:13:16.771574  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:13:20.532388  438001 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.760772797s)
	I0819 19:13:20.532421  438001 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.80368813s)
	I0819 19:13:20.532437  438001 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0819 19:13:20.532469  438001 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0819 19:13:20.532480  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:13:20.532500  438001 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0819 19:13:18.193852  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:20.692752  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:21.419154  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:23.419271  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:20.723620  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:21.223286  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:21.723663  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:22.223594  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:22.723415  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:23.223643  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:23.723395  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:24.223476  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:24.723236  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:25.223620  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
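For reference, the repeated pgrep lines above are a fixed ~500ms wait loop checking whether the kube-apiserver process has appeared yet. A minimal standalone Go sketch of that pattern (illustrative names and timeout only, not minikube's actual implementation):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until the apiserver process exists or ctx expires.
func waitForAPIServerProcess(ctx context.Context) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return fmt.Errorf("timed out waiting for kube-apiserver process: %w", ctx.Err())
		case <-ticker.C:
			// Same check the log shows: pgrep exits 0 once a matching process exists.
			if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				return nil
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	if err := waitForAPIServerProcess(ctx); err != nil {
		fmt.Println(err)
	}
}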
	I0819 19:13:22.500967  438001 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.968455152s)
	I0819 19:13:22.501030  438001 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0819 19:13:22.501036  438001 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.968509024s)
	I0819 19:13:22.501068  438001 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0819 19:13:22.501108  438001 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0819 19:13:22.501138  438001 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0819 19:13:22.501175  438001 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0819 19:13:22.506796  438001 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0819 19:13:23.962797  438001 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.461519717s)
	I0819 19:13:23.962838  438001 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0819 19:13:23.962876  438001 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0819 19:13:23.962959  438001 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0819 19:13:25.927805  438001 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.964816993s)
	I0819 19:13:25.927836  438001 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0819 19:13:25.927868  438001 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0819 19:13:25.927922  438001 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0819 19:13:26.572310  438001 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0819 19:13:26.572368  438001 cache_images.go:123] Successfully loaded all cached images
	I0819 19:13:26.572376  438001 cache_images.go:92] duration metric: took 15.306632126s to LoadCachedImages
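The cache-load phase that just completed amounts to removing any stale tag with crictl and streaming each cached tarball into the CRI-O image store with podman. A rough, self-contained Go sketch of that shell-out pattern (paths and image names are examples taken from the log, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
)

// loadCachedImage mirrors the "crictl rmi" + "podman load -i" pair seen above.
func loadCachedImage(tarball, image string) error {
	// Remove any partial/stale copy of the image first; "not found" errors are ignored.
	_ = exec.Command("sudo", "crictl", "rmi", image).Run()
	// Load the cached image tarball into the runtime's image store.
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
	}
	return nil
}

func main() {
	err := loadCachedImage("/var/lib/minikube/images/kube-apiserver_v1.31.0",
		"registry.k8s.io/kube-apiserver:v1.31.0")
	fmt.Println(err)
}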
	I0819 19:13:26.572397  438001 kubeadm.go:934] updating node { 192.168.39.106 8443 v1.31.0 crio true true} ...
	I0819 19:13:26.572549  438001 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-278232 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.106
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-278232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 19:13:26.572635  438001 ssh_runner.go:195] Run: crio config
	I0819 19:13:26.623839  438001 cni.go:84] Creating CNI manager for ""
	I0819 19:13:26.623862  438001 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:13:26.623872  438001 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 19:13:26.623896  438001 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.106 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-278232 NodeName:no-preload-278232 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.106"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.106 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 19:13:26.624138  438001 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.106
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-278232"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.106
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.106"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 19:13:26.624226  438001 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 19:13:22.693093  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:24.694313  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:26.695312  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:25.918793  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:27.919721  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
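The recurring pod_ready lines above are checks of the pod's "Ready" condition against the apiserver. A rough sketch of that check using client-go directly (kubeconfig path and pod name below are placeholders, not what the test harness uses):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's PodReady condition is True.
func podIsReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ready, err := podIsReady(context.Background(), cs, "kube-system", "metrics-server-6867b74b74-kxcwh")
	fmt.Println(ready, err)
}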
	I0819 19:13:25.722593  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:26.223582  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:26.722927  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:27.223364  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:27.723223  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:28.223458  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:28.723262  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:29.222823  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:29.722837  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:30.223196  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:26.634770  438001 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 19:13:26.634844  438001 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 19:13:26.644193  438001 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0819 19:13:26.661226  438001 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 19:13:26.677413  438001 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0819 19:13:26.696260  438001 ssh_runner.go:195] Run: grep 192.168.39.106	control-plane.minikube.internal$ /etc/hosts
	I0819 19:13:26.700029  438001 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.106	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:13:26.711667  438001 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:13:26.849658  438001 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:13:26.867185  438001 certs.go:68] Setting up /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232 for IP: 192.168.39.106
	I0819 19:13:26.867216  438001 certs.go:194] generating shared ca certs ...
	I0819 19:13:26.867240  438001 certs.go:226] acquiring lock for ca certs: {Name:mk639e03f593e0bccac045f6e9f5ba3b96cc81e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:13:26.867431  438001 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key
	I0819 19:13:26.867489  438001 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key
	I0819 19:13:26.867502  438001 certs.go:256] generating profile certs ...
	I0819 19:13:26.867600  438001 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232/client.key
	I0819 19:13:26.867705  438001 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232/apiserver.key.4086521c
	I0819 19:13:26.867759  438001 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232/proxy-client.key
	I0819 19:13:26.867936  438001 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem (1338 bytes)
	W0819 19:13:26.867980  438001 certs.go:480] ignoring /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009_empty.pem, impossibly tiny 0 bytes
	I0819 19:13:26.867995  438001 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 19:13:26.868037  438001 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem (1082 bytes)
	I0819 19:13:26.868075  438001 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem (1123 bytes)
	I0819 19:13:26.868107  438001 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem (1675 bytes)
	I0819 19:13:26.868171  438001 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:13:26.869217  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 19:13:26.903250  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 19:13:26.928593  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 19:13:26.957098  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 19:13:26.982422  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 19:13:27.009252  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 19:13:27.038043  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 19:13:27.075400  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 19:13:27.101568  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /usr/share/ca-certificates/3800092.pem (1708 bytes)
	I0819 19:13:27.127162  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 19:13:27.152327  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem --> /usr/share/ca-certificates/380009.pem (1338 bytes)
	I0819 19:13:27.176207  438001 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 19:13:27.194919  438001 ssh_runner.go:195] Run: openssl version
	I0819 19:13:27.201002  438001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3800092.pem && ln -fs /usr/share/ca-certificates/3800092.pem /etc/ssl/certs/3800092.pem"
	I0819 19:13:27.212050  438001 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3800092.pem
	I0819 19:13:27.216607  438001 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:56 /usr/share/ca-certificates/3800092.pem
	I0819 19:13:27.216663  438001 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3800092.pem
	I0819 19:13:27.222437  438001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3800092.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 19:13:27.234112  438001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 19:13:27.245472  438001 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:13:27.250203  438001 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:13:27.250257  438001 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:13:27.256045  438001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 19:13:27.266746  438001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/380009.pem && ln -fs /usr/share/ca-certificates/380009.pem /etc/ssl/certs/380009.pem"
	I0819 19:13:27.277316  438001 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/380009.pem
	I0819 19:13:27.281660  438001 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:56 /usr/share/ca-certificates/380009.pem
	I0819 19:13:27.281721  438001 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/380009.pem
	I0819 19:13:27.287223  438001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/380009.pem /etc/ssl/certs/51391683.0"
	I0819 19:13:27.299791  438001 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 19:13:27.304470  438001 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 19:13:27.310642  438001 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 19:13:27.316259  438001 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 19:13:27.322248  438001 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 19:13:27.327902  438001 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 19:13:27.333447  438001 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
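The "-checkend 86400" runs above ask openssl whether each certificate expires within the next 24 hours (a non-zero exit triggers regeneration). A minimal Go equivalent of that check, with the file path as a placeholder:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}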
	I0819 19:13:27.339044  438001 kubeadm.go:392] StartCluster: {Name:no-preload-278232 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-278232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.106 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:13:27.339165  438001 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 19:13:27.339241  438001 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:13:27.378362  438001 cri.go:89] found id: ""
	I0819 19:13:27.378436  438001 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 19:13:27.388560  438001 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 19:13:27.388580  438001 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 19:13:27.388623  438001 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 19:13:27.397834  438001 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:13:27.399336  438001 kubeconfig.go:125] found "no-preload-278232" server: "https://192.168.39.106:8443"
	I0819 19:13:27.402651  438001 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 19:13:27.412108  438001 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.106
	I0819 19:13:27.412155  438001 kubeadm.go:1160] stopping kube-system containers ...
	I0819 19:13:27.412170  438001 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 19:13:27.412230  438001 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:13:27.450332  438001 cri.go:89] found id: ""
	I0819 19:13:27.450431  438001 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 19:13:27.466943  438001 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:13:27.476741  438001 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:13:27.476765  438001 kubeadm.go:157] found existing configuration files:
	
	I0819 19:13:27.476810  438001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 19:13:27.485630  438001 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:13:27.485695  438001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:13:27.495232  438001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 19:13:27.504379  438001 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:13:27.504449  438001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:13:27.513723  438001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 19:13:27.522864  438001 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:13:27.522946  438001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:13:27.532402  438001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 19:13:27.541502  438001 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:13:27.541592  438001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 19:13:27.550934  438001 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 19:13:27.560650  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:27.684890  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:28.534223  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:28.757538  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:28.831313  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:28.897644  438001 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:13:28.897735  438001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:29.398486  438001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:29.898494  438001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:29.924881  438001 api_server.go:72] duration metric: took 1.027247684s to wait for apiserver process to appear ...
	I0819 19:13:29.924918  438001 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:13:29.924944  438001 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I0819 19:13:29.925535  438001 api_server.go:269] stopped: https://192.168.39.106:8443/healthz: Get "https://192.168.39.106:8443/healthz": dial tcp 192.168.39.106:8443: connect: connection refused
	I0819 19:13:30.425624  438001 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I0819 19:13:29.193722  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:31.194540  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:32.406445  438001 api_server.go:279] https://192.168.39.106:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 19:13:32.406476  438001 api_server.go:103] status: https://192.168.39.106:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 19:13:32.406491  438001 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I0819 19:13:32.470160  438001 api_server.go:279] https://192.168.39.106:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 19:13:32.470195  438001 api_server.go:103] status: https://192.168.39.106:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 19:13:32.470211  438001 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I0819 19:13:32.486292  438001 api_server.go:279] https://192.168.39.106:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 19:13:32.486322  438001 api_server.go:103] status: https://192.168.39.106:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 19:13:32.925943  438001 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I0819 19:13:32.933024  438001 api_server.go:279] https://192.168.39.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:13:32.933068  438001 api_server.go:103] status: https://192.168.39.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:13:33.425638  438001 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I0819 19:13:33.431919  438001 api_server.go:279] https://192.168.39.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:13:33.432051  438001 api_server.go:103] status: https://192.168.39.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:13:33.925369  438001 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I0819 19:13:33.930489  438001 api_server.go:279] https://192.168.39.106:8443/healthz returned 200:
	ok
	I0819 19:13:33.937758  438001 api_server.go:141] control plane version: v1.31.0
	I0819 19:13:33.937789  438001 api_server.go:131] duration metric: took 4.012862801s to wait for apiserver health ...
	I0819 19:13:33.937800  438001 cni.go:84] Creating CNI manager for ""
	I0819 19:13:33.937807  438001 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:13:33.939711  438001 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
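The healthz wait that just finished follows the pattern visible in the responses above: anonymous requests are first rejected with 403, then the endpoint returns 500 while post-start hooks (rbac/bootstrap-roles, system priority classes) complete, and finally 200 with body "ok". A hedged Go sketch of such a poll loop; the endpoint and timings are illustrative, and skipping TLS verification is a simplification of this sketch, not how minikube authenticates:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200 "ok".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for brevity: skip verification of the minikube-CA-signed cert.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
			// 403 (anonymous user) and 500 (bootstrap hooks pending) both mean "retry".
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.39.106:8443/healthz", 4*time.Minute))
}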
	I0819 19:13:30.419241  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:32.419437  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:30.723537  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:31.223437  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:31.723289  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:32.222714  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:32.723037  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:33.223138  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:33.723303  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:34.223334  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:34.722692  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:35.223021  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:33.941055  438001 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 19:13:33.953427  438001 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 19:13:33.982889  438001 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:13:33.998701  438001 system_pods.go:59] 8 kube-system pods found
	I0819 19:13:33.998750  438001 system_pods.go:61] "coredns-6f6b679f8f-22lbt" [c8a5cabd-41d4-41cb-91c1-2db1f3471db3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 19:13:33.998762  438001 system_pods.go:61] "etcd-no-preload-278232" [36d555a1-33e4-4c6c-b24e-2fee4fd84f2b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 19:13:33.998775  438001 system_pods.go:61] "kube-apiserver-no-preload-278232" [af7173e5-c4ac-4ece-b8b9-bb81cb6b9bfd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 19:13:33.998784  438001 system_pods.go:61] "kube-controller-manager-no-preload-278232" [2463d97a-5221-40ce-8fd7-08151165d6f7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 19:13:33.998794  438001 system_pods.go:61] "kube-proxy-rcf49" [85d5814a-1ba9-46be-ab11-17bf40c0f029] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0819 19:13:33.998807  438001 system_pods.go:61] "kube-scheduler-no-preload-278232" [3b327704-f70c-4d6f-a774-15427a305472] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 19:13:33.998819  438001 system_pods.go:61] "metrics-server-6867b74b74-vxwrs" [e8b74128-b393-4f0f-90fe-e05f20d54acd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:13:33.998827  438001 system_pods.go:61] "storage-provisioner" [24766475-1a5b-4f1a-9350-3e891b5272cc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0819 19:13:33.998841  438001 system_pods.go:74] duration metric: took 15.918876ms to wait for pod list to return data ...
	I0819 19:13:33.998853  438001 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:13:34.003102  438001 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:13:34.003131  438001 node_conditions.go:123] node cpu capacity is 2
	I0819 19:13:34.003145  438001 node_conditions.go:105] duration metric: took 4.283682ms to run NodePressure ...
	I0819 19:13:34.003163  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:34.300052  438001 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 19:13:34.304483  438001 kubeadm.go:739] kubelet initialised
	I0819 19:13:34.304505  438001 kubeadm.go:740] duration metric: took 4.421894ms waiting for restarted kubelet to initialise ...
	I0819 19:13:34.304513  438001 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:13:34.310575  438001 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-22lbt" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:34.316040  438001 pod_ready.go:98] node "no-preload-278232" hosting pod "coredns-6f6b679f8f-22lbt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.316068  438001 pod_ready.go:82] duration metric: took 5.462078ms for pod "coredns-6f6b679f8f-22lbt" in "kube-system" namespace to be "Ready" ...
	E0819 19:13:34.316080  438001 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-278232" hosting pod "coredns-6f6b679f8f-22lbt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.316088  438001 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:34.320731  438001 pod_ready.go:98] node "no-preload-278232" hosting pod "etcd-no-preload-278232" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.320751  438001 pod_ready.go:82] duration metric: took 4.649545ms for pod "etcd-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	E0819 19:13:34.320758  438001 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-278232" hosting pod "etcd-no-preload-278232" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.320763  438001 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:34.325499  438001 pod_ready.go:98] node "no-preload-278232" hosting pod "kube-apiserver-no-preload-278232" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.325519  438001 pod_ready.go:82] duration metric: took 4.750861ms for pod "kube-apiserver-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	E0819 19:13:34.325526  438001 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-278232" hosting pod "kube-apiserver-no-preload-278232" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.325531  438001 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:34.388221  438001 pod_ready.go:98] node "no-preload-278232" hosting pod "kube-controller-manager-no-preload-278232" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.388248  438001 pod_ready.go:82] duration metric: took 62.708596ms for pod "kube-controller-manager-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	E0819 19:13:34.388259  438001 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-278232" hosting pod "kube-controller-manager-no-preload-278232" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.388265  438001 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-rcf49" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:34.787164  438001 pod_ready.go:98] node "no-preload-278232" hosting pod "kube-proxy-rcf49" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.787193  438001 pod_ready.go:82] duration metric: took 398.919585ms for pod "kube-proxy-rcf49" in "kube-system" namespace to be "Ready" ...
	E0819 19:13:34.787203  438001 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-278232" hosting pod "kube-proxy-rcf49" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.787210  438001 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:35.186336  438001 pod_ready.go:98] node "no-preload-278232" hosting pod "kube-scheduler-no-preload-278232" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:35.186365  438001 pod_ready.go:82] duration metric: took 399.147858ms for pod "kube-scheduler-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	E0819 19:13:35.186377  438001 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-278232" hosting pod "kube-scheduler-no-preload-278232" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:35.186386  438001 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:35.586266  438001 pod_ready.go:98] node "no-preload-278232" hosting pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:35.586292  438001 pod_ready.go:82] duration metric: took 399.895038ms for pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace to be "Ready" ...
	E0819 19:13:35.586301  438001 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-278232" hosting pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:35.586307  438001 pod_ready.go:39] duration metric: took 1.281785432s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:13:35.586326  438001 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 19:13:35.598523  438001 ops.go:34] apiserver oom_adj: -16
	I0819 19:13:35.598545  438001 kubeadm.go:597] duration metric: took 8.20995933s to restartPrimaryControlPlane
	I0819 19:13:35.598554  438001 kubeadm.go:394] duration metric: took 8.259514907s to StartCluster
	I0819 19:13:35.598576  438001 settings.go:142] acquiring lock: {Name:mk396fcf49a1d0e69583cf37ff3c819e37118163 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:13:35.598662  438001 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 19:13:35.600424  438001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/kubeconfig: {Name:mk8e7b4e1bb7da665111d2acd83eb48882c66853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:13:35.600672  438001 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.106 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 19:13:35.600768  438001 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 19:13:35.600850  438001 addons.go:69] Setting storage-provisioner=true in profile "no-preload-278232"
	I0819 19:13:35.600879  438001 addons.go:69] Setting metrics-server=true in profile "no-preload-278232"
	I0819 19:13:35.600924  438001 addons.go:234] Setting addon metrics-server=true in "no-preload-278232"
	W0819 19:13:35.600938  438001 addons.go:243] addon metrics-server should already be in state true
	I0819 19:13:35.600884  438001 addons.go:234] Setting addon storage-provisioner=true in "no-preload-278232"
	W0819 19:13:35.600969  438001 addons.go:243] addon storage-provisioner should already be in state true
	I0819 19:13:35.600966  438001 config.go:182] Loaded profile config "no-preload-278232": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:13:35.600976  438001 host.go:66] Checking if "no-preload-278232" exists ...
	I0819 19:13:35.600988  438001 host.go:66] Checking if "no-preload-278232" exists ...
	I0819 19:13:35.601395  438001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:35.601428  438001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:35.601436  438001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:35.601453  438001 addons.go:69] Setting default-storageclass=true in profile "no-preload-278232"
	I0819 19:13:35.601501  438001 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-278232"
	I0819 19:13:35.601463  438001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:35.601898  438001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:35.601948  438001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:35.602507  438001 out.go:177] * Verifying Kubernetes components...
	I0819 19:13:35.604092  438001 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:13:35.617515  438001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34839
	I0819 19:13:35.617538  438001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36157
	I0819 19:13:35.617521  438001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35771
	I0819 19:13:35.618045  438001 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:35.618101  438001 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:35.618163  438001 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:35.618570  438001 main.go:141] libmachine: Using API Version  1
	I0819 19:13:35.618598  438001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:35.618712  438001 main.go:141] libmachine: Using API Version  1
	I0819 19:13:35.618734  438001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:35.618715  438001 main.go:141] libmachine: Using API Version  1
	I0819 19:13:35.618754  438001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:35.618989  438001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:35.619109  438001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:35.619111  438001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:35.619177  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetState
	I0819 19:13:35.619649  438001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:35.619693  438001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:35.619695  438001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:35.619768  438001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:35.641244  438001 addons.go:234] Setting addon default-storageclass=true in "no-preload-278232"
	W0819 19:13:35.641268  438001 addons.go:243] addon default-storageclass should already be in state true
	I0819 19:13:35.641298  438001 host.go:66] Checking if "no-preload-278232" exists ...
	I0819 19:13:35.641558  438001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:35.641610  438001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:35.659392  438001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39373
	I0819 19:13:35.659999  438001 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:35.660432  438001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38477
	I0819 19:13:35.660432  438001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35477
	I0819 19:13:35.660604  438001 main.go:141] libmachine: Using API Version  1
	I0819 19:13:35.660631  438001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:35.661089  438001 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:35.661149  438001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:35.661169  438001 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:35.661641  438001 main.go:141] libmachine: Using API Version  1
	I0819 19:13:35.661661  438001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:35.661757  438001 main.go:141] libmachine: Using API Version  1
	I0819 19:13:35.661772  438001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:35.661792  438001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:35.661826  438001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:35.662039  438001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:35.662142  438001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:35.662222  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetState
	I0819 19:13:35.662375  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetState
	I0819 19:13:35.664221  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:35.664397  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:35.666459  438001 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 19:13:35.666471  438001 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:13:35.667849  438001 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 19:13:35.667864  438001 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 19:13:35.667882  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:35.667944  438001 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:13:35.667959  438001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 19:13:35.667977  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:35.673516  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:35.673544  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:35.673520  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:35.673578  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:35.673593  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:35.673602  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:35.673521  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:35.673615  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:35.673793  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:35.673937  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:35.673986  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:35.674150  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:35.674324  438001 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa Username:docker}
	I0819 19:13:35.674350  438001 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa Username:docker}
	I0819 19:13:35.683691  438001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39783
	I0819 19:13:35.684219  438001 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:35.684806  438001 main.go:141] libmachine: Using API Version  1
	I0819 19:13:35.684831  438001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:35.685251  438001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:35.685515  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetState
	I0819 19:13:35.687268  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:35.687485  438001 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 19:13:35.687503  438001 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 19:13:35.687524  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:35.690504  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:35.691297  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:35.691333  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:35.691356  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:35.691477  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:35.691659  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:35.691814  438001 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa Username:docker}
	I0819 19:13:35.833054  438001 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:13:35.855442  438001 node_ready.go:35] waiting up to 6m0s for node "no-preload-278232" to be "Ready" ...
	I0819 19:13:35.923521  438001 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 19:13:35.923551  438001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 19:13:35.940005  438001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 19:13:35.965657  438001 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 19:13:35.965686  438001 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 19:13:36.002636  438001 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 19:13:36.002665  438001 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 19:13:36.024764  438001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:13:36.058824  438001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 19:13:36.420421  438001 main.go:141] libmachine: Making call to close driver server
	I0819 19:13:36.420452  438001 main.go:141] libmachine: (no-preload-278232) Calling .Close
	I0819 19:13:36.420785  438001 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:13:36.420804  438001 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:13:36.420844  438001 main.go:141] libmachine: (no-preload-278232) DBG | Closing plugin on server side
	I0819 19:13:36.420904  438001 main.go:141] libmachine: Making call to close driver server
	I0819 19:13:36.420918  438001 main.go:141] libmachine: (no-preload-278232) Calling .Close
	I0819 19:13:36.421185  438001 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:13:36.421210  438001 main.go:141] libmachine: (no-preload-278232) DBG | Closing plugin on server side
	I0819 19:13:36.421224  438001 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:13:36.429463  438001 main.go:141] libmachine: Making call to close driver server
	I0819 19:13:36.429481  438001 main.go:141] libmachine: (no-preload-278232) Calling .Close
	I0819 19:13:36.429811  438001 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:13:36.429830  438001 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:13:37.141893  438001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.117083882s)
	I0819 19:13:37.141987  438001 main.go:141] libmachine: Making call to close driver server
	I0819 19:13:37.141999  438001 main.go:141] libmachine: (no-preload-278232) Calling .Close
	I0819 19:13:37.142472  438001 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:13:37.142495  438001 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:13:37.142506  438001 main.go:141] libmachine: Making call to close driver server
	I0819 19:13:37.142515  438001 main.go:141] libmachine: (no-preload-278232) Calling .Close
	I0819 19:13:37.142788  438001 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:13:37.142808  438001 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:13:37.142814  438001 main.go:141] libmachine: (no-preload-278232) DBG | Closing plugin on server side
	I0819 19:13:37.161659  438001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.10278963s)
	I0819 19:13:37.161723  438001 main.go:141] libmachine: Making call to close driver server
	I0819 19:13:37.161739  438001 main.go:141] libmachine: (no-preload-278232) Calling .Close
	I0819 19:13:37.162067  438001 main.go:141] libmachine: (no-preload-278232) DBG | Closing plugin on server side
	I0819 19:13:37.162099  438001 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:13:37.162125  438001 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:13:37.162142  438001 main.go:141] libmachine: Making call to close driver server
	I0819 19:13:37.162154  438001 main.go:141] libmachine: (no-preload-278232) Calling .Close
	I0819 19:13:37.162404  438001 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:13:37.162420  438001 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:13:37.162432  438001 addons.go:475] Verifying addon metrics-server=true in "no-preload-278232"
	I0819 19:13:37.164423  438001 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0819 19:13:33.694203  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:35.694403  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:34.918988  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:36.919564  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:35.722784  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:36.223168  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:36.723041  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:37.222801  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:37.722855  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:38.223296  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:38.722936  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:39.223326  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:39.722883  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:40.223284  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:37.165767  438001 addons.go:510] duration metric: took 1.565026237s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0819 19:13:37.859454  438001 node_ready.go:53] node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:39.859662  438001 node_ready.go:53] node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:38.193207  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:40.694127  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:39.418572  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:41.918302  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:43.918558  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:40.722612  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:41.222700  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:41.723144  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:42.223369  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:42.723209  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:43.222849  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:43.723518  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:44.223585  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:44.722772  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:45.223078  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:41.859965  438001 node_ready.go:53] node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:43.359120  438001 node_ready.go:49] node "no-preload-278232" has status "Ready":"True"
	I0819 19:13:43.359151  438001 node_ready.go:38] duration metric: took 7.503671074s for node "no-preload-278232" to be "Ready" ...
	I0819 19:13:43.359169  438001 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:13:43.365307  438001 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-22lbt" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:43.369626  438001 pod_ready.go:93] pod "coredns-6f6b679f8f-22lbt" in "kube-system" namespace has status "Ready":"True"
	I0819 19:13:43.369646  438001 pod_ready.go:82] duration metric: took 4.316734ms for pod "coredns-6f6b679f8f-22lbt" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:43.369654  438001 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:45.377672  438001 pod_ready.go:103] pod "etcd-no-preload-278232" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:43.193775  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:45.693494  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:45.919705  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:48.418981  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:45.723287  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:46.223666  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:46.722754  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:47.223414  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:47.723567  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:48.222938  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:48.723011  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:49.223076  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:49.723443  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:50.223627  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:47.875409  438001 pod_ready.go:103] pod "etcd-no-preload-278232" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:48.377127  438001 pod_ready.go:93] pod "etcd-no-preload-278232" in "kube-system" namespace has status "Ready":"True"
	I0819 19:13:48.377155  438001 pod_ready.go:82] duration metric: took 5.007493319s for pod "etcd-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.377169  438001 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.381841  438001 pod_ready.go:93] pod "kube-apiserver-no-preload-278232" in "kube-system" namespace has status "Ready":"True"
	I0819 19:13:48.381864  438001 pod_ready.go:82] duration metric: took 4.686309ms for pod "kube-apiserver-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.381877  438001 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.386382  438001 pod_ready.go:93] pod "kube-controller-manager-no-preload-278232" in "kube-system" namespace has status "Ready":"True"
	I0819 19:13:48.386397  438001 pod_ready.go:82] duration metric: took 4.514361ms for pod "kube-controller-manager-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.386405  438001 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rcf49" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.390940  438001 pod_ready.go:93] pod "kube-proxy-rcf49" in "kube-system" namespace has status "Ready":"True"
	I0819 19:13:48.390955  438001 pod_ready.go:82] duration metric: took 4.544499ms for pod "kube-proxy-rcf49" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.390963  438001 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.395159  438001 pod_ready.go:93] pod "kube-scheduler-no-preload-278232" in "kube-system" namespace has status "Ready":"True"
	I0819 19:13:48.395180  438001 pod_ready.go:82] duration metric: took 4.211012ms for pod "kube-scheduler-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.395197  438001 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:50.402109  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:47.693601  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:50.193183  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:50.918811  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:52.919981  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:50.723259  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:51.222697  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:51.723284  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:52.222757  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:52.723414  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:53.223202  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:53.722721  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:54.223578  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:54.723400  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:55.222730  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:52.901901  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:54.903583  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:52.693231  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:54.693934  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:56.695700  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:55.418965  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:57.918885  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:55.723644  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:56.223212  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:56.722729  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:57.223226  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:57.723045  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:58.222901  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:58.722710  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:59.223149  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:59.723186  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:00.222763  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:00.222844  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:00.271266  438716 cri.go:89] found id: ""
	I0819 19:14:00.271296  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.271305  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:00.271312  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:00.271373  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:00.311870  438716 cri.go:89] found id: ""
	I0819 19:14:00.311900  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.311936  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:00.311946  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:00.312011  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:00.350476  438716 cri.go:89] found id: ""
	I0819 19:14:00.350505  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.350514  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:00.350520  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:00.350586  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:00.387404  438716 cri.go:89] found id: ""
	I0819 19:14:00.387438  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.387447  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:00.387457  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:00.387516  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:00.423493  438716 cri.go:89] found id: ""
	I0819 19:14:00.423521  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.423529  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:00.423535  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:00.423596  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:00.458593  438716 cri.go:89] found id: ""
	I0819 19:14:00.458630  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.458642  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:00.458651  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:00.458722  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:00.495645  438716 cri.go:89] found id: ""
	I0819 19:14:00.495695  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.495709  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:00.495717  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:00.495782  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:00.531464  438716 cri.go:89] found id: ""
	I0819 19:14:00.531498  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.531508  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:00.531529  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:00.531543  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:13:57.401329  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:59.402701  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:59.192781  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:01.194411  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:00.419287  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:02.918450  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:00.584029  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:00.584078  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:00.597870  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:00.597908  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:00.746061  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:00.746085  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:00.746098  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:00.818001  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:00.818042  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:03.358509  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:03.371262  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:03.371345  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:03.408201  438716 cri.go:89] found id: ""
	I0819 19:14:03.408231  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.408241  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:03.408248  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:03.408306  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:03.445354  438716 cri.go:89] found id: ""
	I0819 19:14:03.445386  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.445396  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:03.445408  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:03.445470  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:03.481144  438716 cri.go:89] found id: ""
	I0819 19:14:03.481178  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.481188  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:03.481195  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:03.481260  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:03.529069  438716 cri.go:89] found id: ""
	I0819 19:14:03.529109  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.529141  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:03.529148  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:03.529216  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:03.590325  438716 cri.go:89] found id: ""
	I0819 19:14:03.590364  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.590377  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:03.590386  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:03.590456  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:03.634924  438716 cri.go:89] found id: ""
	I0819 19:14:03.634969  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.634981  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:03.634990  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:03.635062  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:03.684133  438716 cri.go:89] found id: ""
	I0819 19:14:03.684164  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.684176  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:03.684184  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:03.684253  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:03.722285  438716 cri.go:89] found id: ""
	I0819 19:14:03.722312  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.722321  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:03.722330  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:03.722372  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:03.735937  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:03.735965  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:03.814906  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:03.814931  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:03.814948  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:03.896323  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:03.896363  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:03.943002  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:03.943037  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:01.901154  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:03.902972  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:05.903388  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:03.694686  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:06.193228  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:04.919332  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:07.419221  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:06.496886  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:06.510719  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:06.510790  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:06.544692  438716 cri.go:89] found id: ""
	I0819 19:14:06.544724  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.544737  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:06.544747  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:06.544818  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:06.578935  438716 cri.go:89] found id: ""
	I0819 19:14:06.578962  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.578971  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:06.578979  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:06.579033  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:06.614488  438716 cri.go:89] found id: ""
	I0819 19:14:06.614516  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.614525  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:06.614532  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:06.614583  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:06.648579  438716 cri.go:89] found id: ""
	I0819 19:14:06.648612  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.648623  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:06.648630  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:06.648685  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:06.685168  438716 cri.go:89] found id: ""
	I0819 19:14:06.685198  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.685208  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:06.685217  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:06.685280  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:06.720391  438716 cri.go:89] found id: ""
	I0819 19:14:06.720424  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.720433  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:06.720440  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:06.720491  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:06.758183  438716 cri.go:89] found id: ""
	I0819 19:14:06.758217  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.758228  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:06.758237  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:06.758307  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:06.800182  438716 cri.go:89] found id: ""
	I0819 19:14:06.800215  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.800224  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:06.800234  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:06.800247  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:06.852735  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:06.852777  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:06.867214  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:06.867249  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:06.938942  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:06.938967  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:06.938980  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:07.023950  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:07.023992  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:09.568889  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:09.588481  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:09.588545  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:09.630790  438716 cri.go:89] found id: ""
	I0819 19:14:09.630825  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.630839  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:09.630848  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:09.630926  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:09.673258  438716 cri.go:89] found id: ""
	I0819 19:14:09.673291  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.673302  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:09.673311  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:09.673374  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:09.709500  438716 cri.go:89] found id: ""
	I0819 19:14:09.709530  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.709541  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:09.709549  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:09.709617  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:09.743110  438716 cri.go:89] found id: ""
	I0819 19:14:09.743139  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.743150  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:09.743164  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:09.743238  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:09.776717  438716 cri.go:89] found id: ""
	I0819 19:14:09.776746  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.776754  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:09.776761  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:09.776820  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:09.811381  438716 cri.go:89] found id: ""
	I0819 19:14:09.811409  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.811417  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:09.811423  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:09.811474  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:09.843699  438716 cri.go:89] found id: ""
	I0819 19:14:09.843730  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.843741  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:09.843750  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:09.843822  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:09.882972  438716 cri.go:89] found id: ""
	I0819 19:14:09.883005  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.883018  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:09.883033  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:09.883050  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:09.973077  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:09.973114  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:10.014505  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:10.014556  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:10.069779  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:10.069819  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:10.084337  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:10.084367  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:10.164870  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:08.402464  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:10.900684  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:08.193980  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:10.194818  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:09.918852  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:12.419687  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:12.665929  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:12.679881  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:12.679960  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:12.718305  438716 cri.go:89] found id: ""
	I0819 19:14:12.718332  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.718341  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:12.718348  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:12.718398  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:12.759084  438716 cri.go:89] found id: ""
	I0819 19:14:12.759112  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.759127  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:12.759135  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:12.759205  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:12.793193  438716 cri.go:89] found id: ""
	I0819 19:14:12.793228  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.793238  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:12.793245  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:12.793299  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:12.828283  438716 cri.go:89] found id: ""
	I0819 19:14:12.828310  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.828322  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:12.828329  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:12.828379  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:12.861971  438716 cri.go:89] found id: ""
	I0819 19:14:12.862004  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.862016  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:12.862025  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:12.862092  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:12.898173  438716 cri.go:89] found id: ""
	I0819 19:14:12.898203  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.898214  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:12.898223  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:12.898287  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:12.940203  438716 cri.go:89] found id: ""
	I0819 19:14:12.940234  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.940246  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:12.940254  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:12.940309  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:12.978092  438716 cri.go:89] found id: ""
	I0819 19:14:12.978123  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.978134  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:12.978147  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:12.978172  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:12.992082  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:12.992117  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:13.073609  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:13.073636  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:13.073649  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:13.153060  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:13.153105  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:13.196535  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:13.196581  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:12.903116  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:15.401183  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:12.693872  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:14.694252  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:17.193116  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:14.919563  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:17.418946  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:15.750298  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:15.763913  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:15.763996  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:15.804515  438716 cri.go:89] found id: ""
	I0819 19:14:15.804542  438716 logs.go:276] 0 containers: []
	W0819 19:14:15.804551  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:15.804558  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:15.804624  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:15.847077  438716 cri.go:89] found id: ""
	I0819 19:14:15.847112  438716 logs.go:276] 0 containers: []
	W0819 19:14:15.847125  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:15.847133  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:15.847200  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:15.882316  438716 cri.go:89] found id: ""
	I0819 19:14:15.882348  438716 logs.go:276] 0 containers: []
	W0819 19:14:15.882358  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:15.882365  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:15.882417  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:15.919084  438716 cri.go:89] found id: ""
	I0819 19:14:15.919114  438716 logs.go:276] 0 containers: []
	W0819 19:14:15.919125  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:15.919132  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:15.919202  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:15.953139  438716 cri.go:89] found id: ""
	I0819 19:14:15.953175  438716 logs.go:276] 0 containers: []
	W0819 19:14:15.953188  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:15.953209  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:15.953276  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:15.993231  438716 cri.go:89] found id: ""
	I0819 19:14:15.993259  438716 logs.go:276] 0 containers: []
	W0819 19:14:15.993268  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:15.993286  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:15.993337  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:16.030382  438716 cri.go:89] found id: ""
	I0819 19:14:16.030412  438716 logs.go:276] 0 containers: []
	W0819 19:14:16.030422  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:16.030428  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:16.030482  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:16.065834  438716 cri.go:89] found id: ""
	I0819 19:14:16.065861  438716 logs.go:276] 0 containers: []
	W0819 19:14:16.065872  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:16.065885  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:16.065901  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:16.117943  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:16.117983  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:16.132010  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:16.132041  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:16.202398  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:16.202416  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:16.202429  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:16.286609  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:16.286653  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:18.830502  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:18.844022  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:18.844107  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:18.880539  438716 cri.go:89] found id: ""
	I0819 19:14:18.880576  438716 logs.go:276] 0 containers: []
	W0819 19:14:18.880588  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:18.880595  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:18.880657  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:18.918426  438716 cri.go:89] found id: ""
	I0819 19:14:18.918454  438716 logs.go:276] 0 containers: []
	W0819 19:14:18.918463  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:18.918470  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:18.918531  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:18.954534  438716 cri.go:89] found id: ""
	I0819 19:14:18.954566  438716 logs.go:276] 0 containers: []
	W0819 19:14:18.954578  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:18.954587  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:18.954651  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:18.993820  438716 cri.go:89] found id: ""
	I0819 19:14:18.993852  438716 logs.go:276] 0 containers: []
	W0819 19:14:18.993864  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:18.993885  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:18.993967  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:19.026947  438716 cri.go:89] found id: ""
	I0819 19:14:19.026982  438716 logs.go:276] 0 containers: []
	W0819 19:14:19.026995  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:19.027005  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:19.027072  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:19.062097  438716 cri.go:89] found id: ""
	I0819 19:14:19.062130  438716 logs.go:276] 0 containers: []
	W0819 19:14:19.062142  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:19.062150  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:19.062207  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:19.099522  438716 cri.go:89] found id: ""
	I0819 19:14:19.099549  438716 logs.go:276] 0 containers: []
	W0819 19:14:19.099559  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:19.099567  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:19.099630  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:19.134766  438716 cri.go:89] found id: ""
	I0819 19:14:19.134803  438716 logs.go:276] 0 containers: []
	W0819 19:14:19.134815  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:19.134850  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:19.134867  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:19.176428  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:19.176458  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:19.231448  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:19.231484  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:19.245631  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:19.245687  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:19.318679  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:19.318703  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:19.318717  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:17.401916  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:19.402628  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:19.195224  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:21.693528  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:19.918727  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:21.918863  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:23.919050  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:21.898430  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:21.913840  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:21.913911  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:21.955682  438716 cri.go:89] found id: ""
	I0819 19:14:21.955720  438716 logs.go:276] 0 containers: []
	W0819 19:14:21.955732  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:21.955743  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:21.955820  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:21.994798  438716 cri.go:89] found id: ""
	I0819 19:14:21.994836  438716 logs.go:276] 0 containers: []
	W0819 19:14:21.994845  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:21.994852  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:21.994904  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:22.029155  438716 cri.go:89] found id: ""
	I0819 19:14:22.029191  438716 logs.go:276] 0 containers: []
	W0819 19:14:22.029202  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:22.029210  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:22.029281  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:22.072489  438716 cri.go:89] found id: ""
	I0819 19:14:22.072534  438716 logs.go:276] 0 containers: []
	W0819 19:14:22.072546  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:22.072559  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:22.072621  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:22.109160  438716 cri.go:89] found id: ""
	I0819 19:14:22.109192  438716 logs.go:276] 0 containers: []
	W0819 19:14:22.109203  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:22.109211  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:22.109281  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:22.146161  438716 cri.go:89] found id: ""
	I0819 19:14:22.146194  438716 logs.go:276] 0 containers: []
	W0819 19:14:22.146206  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:22.146215  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:22.146276  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:22.183005  438716 cri.go:89] found id: ""
	I0819 19:14:22.183033  438716 logs.go:276] 0 containers: []
	W0819 19:14:22.183046  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:22.183054  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:22.183108  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:22.220745  438716 cri.go:89] found id: ""
	I0819 19:14:22.220772  438716 logs.go:276] 0 containers: []
	W0819 19:14:22.220784  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:22.220798  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:22.220817  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:22.297377  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:22.297403  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:22.297416  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:22.373503  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:22.373542  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:22.414922  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:22.414956  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:22.477902  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:22.477944  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:24.993405  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:25.007305  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:25.007379  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:25.041157  438716 cri.go:89] found id: ""
	I0819 19:14:25.041191  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.041203  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:25.041211  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:25.041278  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:25.078572  438716 cri.go:89] found id: ""
	I0819 19:14:25.078605  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.078617  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:25.078625  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:25.078695  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:25.114571  438716 cri.go:89] found id: ""
	I0819 19:14:25.114603  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.114615  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:25.114624  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:25.114690  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:25.154341  438716 cri.go:89] found id: ""
	I0819 19:14:25.154366  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.154375  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:25.154381  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:25.154434  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:25.192592  438716 cri.go:89] found id: ""
	I0819 19:14:25.192620  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.192631  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:25.192640  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:25.192705  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:25.227813  438716 cri.go:89] found id: ""
	I0819 19:14:25.227847  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.227860  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:25.227869  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:25.227933  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:25.264321  438716 cri.go:89] found id: ""
	I0819 19:14:25.264349  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.264357  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:25.264364  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:25.264427  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:25.298562  438716 cri.go:89] found id: ""
	I0819 19:14:25.298596  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.298608  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:25.298621  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:25.298638  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:25.352659  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:25.352695  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:25.366638  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:25.366665  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:25.432964  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:25.432992  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:25.433010  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:25.511487  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:25.511549  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:21.902660  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:24.401454  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:26.402255  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:24.193406  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:26.194758  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:25.919090  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:28.420031  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:28.057003  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:28.070849  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:28.070914  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:28.107817  438716 cri.go:89] found id: ""
	I0819 19:14:28.107852  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.107865  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:28.107875  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:28.107948  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:28.141816  438716 cri.go:89] found id: ""
	I0819 19:14:28.141862  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.141874  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:28.141887  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:28.141958  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:28.179854  438716 cri.go:89] found id: ""
	I0819 19:14:28.179885  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.179893  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:28.179905  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:28.179972  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:28.217335  438716 cri.go:89] found id: ""
	I0819 19:14:28.217364  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.217372  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:28.217380  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:28.217438  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:28.254161  438716 cri.go:89] found id: ""
	I0819 19:14:28.254193  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.254204  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:28.254213  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:28.254276  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:28.288658  438716 cri.go:89] found id: ""
	I0819 19:14:28.288682  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.288691  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:28.288698  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:28.288749  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:28.321957  438716 cri.go:89] found id: ""
	I0819 19:14:28.321987  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.321996  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:28.322004  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:28.322057  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:28.355032  438716 cri.go:89] found id: ""
	I0819 19:14:28.355068  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.355080  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:28.355094  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:28.355111  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:28.406220  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:28.406253  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:28.420877  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:28.420907  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:28.502576  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:28.502598  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:28.502614  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:28.582717  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:28.582769  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:28.904716  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:31.401098  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:28.195001  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:30.693605  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:30.917957  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:32.918239  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:31.121960  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:31.135502  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:31.135568  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:31.170423  438716 cri.go:89] found id: ""
	I0819 19:14:31.170451  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.170461  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:31.170467  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:31.170532  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:31.207328  438716 cri.go:89] found id: ""
	I0819 19:14:31.207356  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.207364  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:31.207370  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:31.207430  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:31.245655  438716 cri.go:89] found id: ""
	I0819 19:14:31.245687  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.245698  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:31.245707  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:31.245773  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:31.282174  438716 cri.go:89] found id: ""
	I0819 19:14:31.282208  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.282221  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:31.282230  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:31.282303  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:31.316779  438716 cri.go:89] found id: ""
	I0819 19:14:31.316810  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.316818  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:31.316826  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:31.316879  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:31.356849  438716 cri.go:89] found id: ""
	I0819 19:14:31.356884  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.356894  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:31.356900  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:31.356963  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:31.395102  438716 cri.go:89] found id: ""
	I0819 19:14:31.395135  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.395143  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:31.395150  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:31.395205  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:31.433018  438716 cri.go:89] found id: ""
	I0819 19:14:31.433045  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.433076  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:31.433091  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:31.433108  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:31.446294  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:31.446319  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:31.518158  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:31.518180  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:31.518196  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:31.600568  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:31.600611  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:31.642356  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:31.642386  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:34.195665  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:34.210300  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:34.210370  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:34.248715  438716 cri.go:89] found id: ""
	I0819 19:14:34.248753  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.248767  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:34.248775  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:34.248849  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:34.285305  438716 cri.go:89] found id: ""
	I0819 19:14:34.285334  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.285347  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:34.285355  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:34.285438  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:34.326114  438716 cri.go:89] found id: ""
	I0819 19:14:34.326148  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.326160  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:34.326168  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:34.326235  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:34.360587  438716 cri.go:89] found id: ""
	I0819 19:14:34.360616  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.360628  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:34.360638  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:34.360715  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:34.397452  438716 cri.go:89] found id: ""
	I0819 19:14:34.397483  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.397491  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:34.397498  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:34.397556  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:34.433651  438716 cri.go:89] found id: ""
	I0819 19:14:34.433683  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.433694  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:34.433702  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:34.433771  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:34.468758  438716 cri.go:89] found id: ""
	I0819 19:14:34.468787  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.468796  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:34.468802  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:34.468856  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:34.505787  438716 cri.go:89] found id: ""
	I0819 19:14:34.505816  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.505828  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:34.505842  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:34.505859  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:34.519430  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:34.519463  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:34.592785  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:34.592810  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:34.592827  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:34.671215  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:34.671254  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:34.711248  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:34.711277  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:33.403429  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:35.901124  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:33.194319  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:35.694280  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:34.918372  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:37.418982  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:37.265131  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:37.279035  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:37.279127  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:37.325556  438716 cri.go:89] found id: ""
	I0819 19:14:37.325589  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.325601  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:37.325610  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:37.325676  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:37.360514  438716 cri.go:89] found id: ""
	I0819 19:14:37.360541  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.360553  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:37.360561  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:37.360616  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:37.394428  438716 cri.go:89] found id: ""
	I0819 19:14:37.394456  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.394465  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:37.394472  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:37.394531  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:37.430221  438716 cri.go:89] found id: ""
	I0819 19:14:37.430249  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.430257  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:37.430264  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:37.430324  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:37.466598  438716 cri.go:89] found id: ""
	I0819 19:14:37.466630  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.466641  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:37.466649  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:37.466719  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:37.510455  438716 cri.go:89] found id: ""
	I0819 19:14:37.510484  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.510492  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:37.510499  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:37.510563  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:37.546122  438716 cri.go:89] found id: ""
	I0819 19:14:37.546157  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.546169  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:37.546178  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:37.546247  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:37.579425  438716 cri.go:89] found id: ""
	I0819 19:14:37.579452  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.579463  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:37.579475  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:37.579491  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:37.592673  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:37.592704  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:37.674026  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:37.674048  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:37.674065  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:37.752206  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:37.752244  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:37.791281  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:37.791321  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:40.345520  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:40.358771  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:40.358835  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:40.394515  438716 cri.go:89] found id: ""
	I0819 19:14:40.394549  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.394565  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:40.394575  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:40.394637  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:40.430971  438716 cri.go:89] found id: ""
	I0819 19:14:40.431007  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.431018  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:40.431027  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:40.431094  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:40.471417  438716 cri.go:89] found id: ""
	I0819 19:14:40.471443  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.471452  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:40.471458  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:40.471511  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:40.508641  438716 cri.go:89] found id: ""
	I0819 19:14:40.508670  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.508678  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:40.508684  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:40.508749  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:37.903083  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:40.402562  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:37.695031  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:40.193724  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:39.921480  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:42.420201  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:40.542418  438716 cri.go:89] found id: ""
	I0819 19:14:40.542456  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.542465  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:40.542472  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:40.542533  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:40.577367  438716 cri.go:89] found id: ""
	I0819 19:14:40.577399  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.577408  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:40.577414  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:40.577476  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:40.611111  438716 cri.go:89] found id: ""
	I0819 19:14:40.611138  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.611147  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:40.611155  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:40.611222  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:40.650769  438716 cri.go:89] found id: ""
	I0819 19:14:40.650797  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.650805  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:40.650814  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:40.650827  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:40.688085  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:40.688111  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:40.740187  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:40.740225  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:40.754774  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:40.754803  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:40.828689  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:40.828712  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:40.828728  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:43.419171  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:43.432127  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:43.432201  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:43.468751  438716 cri.go:89] found id: ""
	I0819 19:14:43.468778  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.468787  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:43.468803  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:43.468870  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:43.503290  438716 cri.go:89] found id: ""
	I0819 19:14:43.503319  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.503328  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:43.503334  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:43.503390  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:43.536382  438716 cri.go:89] found id: ""
	I0819 19:14:43.536416  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.536435  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:43.536443  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:43.536494  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:43.571570  438716 cri.go:89] found id: ""
	I0819 19:14:43.571602  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.571611  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:43.571617  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:43.571682  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:43.610421  438716 cri.go:89] found id: ""
	I0819 19:14:43.610455  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.610465  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:43.610473  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:43.610524  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:43.647173  438716 cri.go:89] found id: ""
	I0819 19:14:43.647200  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.647209  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:43.647215  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:43.647266  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:43.684493  438716 cri.go:89] found id: ""
	I0819 19:14:43.684525  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.684535  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:43.684541  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:43.684609  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:43.718781  438716 cri.go:89] found id: ""
	I0819 19:14:43.718811  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.718822  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:43.718834  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:43.718858  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:43.732546  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:43.732578  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:43.819640  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:43.819665  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:43.819700  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:43.900246  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:43.900286  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:43.941751  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:43.941783  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:42.901387  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:44.901876  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:42.693950  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:45.193132  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:44.918631  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:47.417977  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:46.498232  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:46.511167  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:46.511237  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:46.545493  438716 cri.go:89] found id: ""
	I0819 19:14:46.545528  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.545541  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:46.545549  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:46.545607  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:46.580599  438716 cri.go:89] found id: ""
	I0819 19:14:46.580626  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.580634  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:46.580640  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:46.580760  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:46.614515  438716 cri.go:89] found id: ""
	I0819 19:14:46.614551  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.614561  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:46.614570  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:46.614637  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:46.647767  438716 cri.go:89] found id: ""
	I0819 19:14:46.647803  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.647816  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:46.647825  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:46.647893  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:46.681660  438716 cri.go:89] found id: ""
	I0819 19:14:46.681695  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.681707  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:46.681717  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:46.681788  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:46.718828  438716 cri.go:89] found id: ""
	I0819 19:14:46.718858  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.718868  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:46.718875  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:46.718929  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:46.760524  438716 cri.go:89] found id: ""
	I0819 19:14:46.760553  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.760561  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:46.760569  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:46.760634  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:46.799014  438716 cri.go:89] found id: ""
	I0819 19:14:46.799042  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.799054  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:46.799067  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:46.799135  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:46.850769  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:46.850812  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:46.865647  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:46.865698  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:46.942197  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:46.942228  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:46.942244  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:47.019295  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:47.019337  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:49.562713  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:49.575406  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:49.575484  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:49.610067  438716 cri.go:89] found id: ""
	I0819 19:14:49.610105  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.610115  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:49.610121  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:49.610182  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:49.646164  438716 cri.go:89] found id: ""
	I0819 19:14:49.646205  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.646230  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:49.646238  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:49.646317  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:49.680268  438716 cri.go:89] found id: ""
	I0819 19:14:49.680303  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.680314  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:49.680322  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:49.680387  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:49.714952  438716 cri.go:89] found id: ""
	I0819 19:14:49.714981  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.714992  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:49.715001  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:49.715067  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:49.749483  438716 cri.go:89] found id: ""
	I0819 19:14:49.749516  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.749528  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:49.749537  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:49.749616  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:49.794506  438716 cri.go:89] found id: ""
	I0819 19:14:49.794538  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.794550  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:49.794558  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:49.794628  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:49.847284  438716 cri.go:89] found id: ""
	I0819 19:14:49.847313  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.847324  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:49.847334  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:49.847398  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:49.903800  438716 cri.go:89] found id: ""
	I0819 19:14:49.903829  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.903839  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:49.903850  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:49.903867  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:49.972836  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:49.972866  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:49.972885  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:50.049939  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:50.049976  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:50.086514  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:50.086550  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:50.140681  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:50.140718  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:46.903667  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:49.402220  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:51.402281  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:47.693723  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:49.694755  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:52.193220  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:49.919931  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:52.419880  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:52.656573  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:52.670043  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:52.670124  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:52.704514  438716 cri.go:89] found id: ""
	I0819 19:14:52.704541  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.704551  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:52.704558  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:52.704621  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:52.738329  438716 cri.go:89] found id: ""
	I0819 19:14:52.738357  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.738365  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:52.738371  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:52.738423  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:52.774886  438716 cri.go:89] found id: ""
	I0819 19:14:52.774917  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.774926  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:52.774933  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:52.774986  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:52.810262  438716 cri.go:89] found id: ""
	I0819 19:14:52.810288  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.810296  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:52.810303  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:52.810363  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:52.848429  438716 cri.go:89] found id: ""
	I0819 19:14:52.848455  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.848463  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:52.848474  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:52.848539  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:52.886135  438716 cri.go:89] found id: ""
	I0819 19:14:52.886163  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.886179  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:52.886185  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:52.886241  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:52.923288  438716 cri.go:89] found id: ""
	I0819 19:14:52.923314  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.923325  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:52.923333  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:52.923397  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:52.957273  438716 cri.go:89] found id: ""
	I0819 19:14:52.957303  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.957315  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:52.957328  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:52.957345  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:52.970687  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:52.970714  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:53.045081  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:53.045108  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:53.045125  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:53.122233  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:53.122279  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:53.161525  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:53.161554  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:53.901584  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:55.902739  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:54.194220  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:56.197070  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:54.917358  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:56.918562  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:58.919041  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:55.714177  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:55.733726  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:55.733809  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:55.781435  438716 cri.go:89] found id: ""
	I0819 19:14:55.781472  438716 logs.go:276] 0 containers: []
	W0819 19:14:55.781485  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:55.781493  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:55.781560  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:55.846316  438716 cri.go:89] found id: ""
	I0819 19:14:55.846351  438716 logs.go:276] 0 containers: []
	W0819 19:14:55.846362  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:55.846370  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:55.846439  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:55.881587  438716 cri.go:89] found id: ""
	I0819 19:14:55.881623  438716 logs.go:276] 0 containers: []
	W0819 19:14:55.881635  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:55.881644  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:55.881719  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:55.919332  438716 cri.go:89] found id: ""
	I0819 19:14:55.919374  438716 logs.go:276] 0 containers: []
	W0819 19:14:55.919382  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:55.919389  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:55.919441  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:55.954704  438716 cri.go:89] found id: ""
	I0819 19:14:55.954739  438716 logs.go:276] 0 containers: []
	W0819 19:14:55.954752  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:55.954761  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:55.954836  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:55.989289  438716 cri.go:89] found id: ""
	I0819 19:14:55.989321  438716 logs.go:276] 0 containers: []
	W0819 19:14:55.989332  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:55.989340  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:55.989406  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:56.025771  438716 cri.go:89] found id: ""
	I0819 19:14:56.025800  438716 logs.go:276] 0 containers: []
	W0819 19:14:56.025809  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:56.025816  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:56.025883  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:56.065631  438716 cri.go:89] found id: ""
	I0819 19:14:56.065673  438716 logs.go:276] 0 containers: []
	W0819 19:14:56.065686  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:56.065699  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:56.065722  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:56.119482  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:56.119523  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:56.133885  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:56.133915  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:56.207012  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:56.207033  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:56.207045  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:56.288158  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:56.288195  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:58.829677  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:58.844085  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:58.844158  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:58.880900  438716 cri.go:89] found id: ""
	I0819 19:14:58.880934  438716 logs.go:276] 0 containers: []
	W0819 19:14:58.880945  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:58.880951  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:58.881016  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:58.918833  438716 cri.go:89] found id: ""
	I0819 19:14:58.918862  438716 logs.go:276] 0 containers: []
	W0819 19:14:58.918872  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:58.918881  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:58.918939  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:58.956577  438716 cri.go:89] found id: ""
	I0819 19:14:58.956612  438716 logs.go:276] 0 containers: []
	W0819 19:14:58.956623  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:58.956634  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:58.956705  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:58.993884  438716 cri.go:89] found id: ""
	I0819 19:14:58.993914  438716 logs.go:276] 0 containers: []
	W0819 19:14:58.993923  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:58.993930  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:58.993988  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:59.031366  438716 cri.go:89] found id: ""
	I0819 19:14:59.031389  438716 logs.go:276] 0 containers: []
	W0819 19:14:59.031398  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:59.031405  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:59.031464  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:59.072014  438716 cri.go:89] found id: ""
	I0819 19:14:59.072047  438716 logs.go:276] 0 containers: []
	W0819 19:14:59.072058  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:59.072065  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:59.072129  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:59.108713  438716 cri.go:89] found id: ""
	I0819 19:14:59.108744  438716 logs.go:276] 0 containers: []
	W0819 19:14:59.108756  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:59.108765  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:59.108866  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:59.147599  438716 cri.go:89] found id: ""
	I0819 19:14:59.147634  438716 logs.go:276] 0 containers: []
	W0819 19:14:59.147647  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:59.147659  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:59.147695  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:59.224745  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:59.224781  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:59.264586  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:59.264616  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:59.317065  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:59.317104  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:59.331230  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:59.331264  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:59.398370  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:58.401471  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:00.402623  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:58.694096  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:01.193262  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:01.418063  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:03.418302  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:01.899123  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:01.912743  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:01.912824  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:01.949717  438716 cri.go:89] found id: ""
	I0819 19:15:01.949748  438716 logs.go:276] 0 containers: []
	W0819 19:15:01.949756  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:01.949763  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:01.949819  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:01.992776  438716 cri.go:89] found id: ""
	I0819 19:15:01.992802  438716 logs.go:276] 0 containers: []
	W0819 19:15:01.992812  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:01.992819  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:01.992884  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:02.030551  438716 cri.go:89] found id: ""
	I0819 19:15:02.030579  438716 logs.go:276] 0 containers: []
	W0819 19:15:02.030592  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:02.030600  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:02.030672  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:02.069927  438716 cri.go:89] found id: ""
	I0819 19:15:02.069955  438716 logs.go:276] 0 containers: []
	W0819 19:15:02.069964  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:02.069971  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:02.070031  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:02.106584  438716 cri.go:89] found id: ""
	I0819 19:15:02.106609  438716 logs.go:276] 0 containers: []
	W0819 19:15:02.106619  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:02.106629  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:02.106695  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:02.145007  438716 cri.go:89] found id: ""
	I0819 19:15:02.145035  438716 logs.go:276] 0 containers: []
	W0819 19:15:02.145044  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:02.145051  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:02.145113  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:02.180693  438716 cri.go:89] found id: ""
	I0819 19:15:02.180730  438716 logs.go:276] 0 containers: []
	W0819 19:15:02.180741  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:02.180748  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:02.180800  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:02.215563  438716 cri.go:89] found id: ""
	I0819 19:15:02.215597  438716 logs.go:276] 0 containers: []
	W0819 19:15:02.215609  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:02.215623  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:02.215641  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:02.285658  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:02.285692  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:02.285711  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:02.363620  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:02.363660  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:02.414240  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:02.414274  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:02.467336  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:02.467380  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:04.981935  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:04.995537  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:04.995611  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:05.032700  438716 cri.go:89] found id: ""
	I0819 19:15:05.032735  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.032748  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:05.032756  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:05.032827  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:05.069132  438716 cri.go:89] found id: ""
	I0819 19:15:05.069162  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.069173  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:05.069181  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:05.069247  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:05.105320  438716 cri.go:89] found id: ""
	I0819 19:15:05.105346  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.105355  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:05.105361  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:05.105421  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:05.142311  438716 cri.go:89] found id: ""
	I0819 19:15:05.142343  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.142354  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:05.142362  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:05.142412  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:05.177398  438716 cri.go:89] found id: ""
	I0819 19:15:05.177426  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.177437  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:05.177450  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:05.177506  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:05.212749  438716 cri.go:89] found id: ""
	I0819 19:15:05.212780  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.212789  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:05.212796  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:05.212854  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:05.246325  438716 cri.go:89] found id: ""
	I0819 19:15:05.246356  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.246364  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:05.246371  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:05.246420  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:05.287429  438716 cri.go:89] found id: ""
	I0819 19:15:05.287456  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.287466  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:05.287476  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:05.287489  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:05.338742  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:05.338787  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:05.352948  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:05.352978  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:05.421478  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:05.421502  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:05.421529  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:05.497772  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:05.497809  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:02.902202  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:05.403518  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:03.193491  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:05.194340  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:05.419361  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:07.918522  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:08.040403  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:08.053761  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:08.053827  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:08.087047  438716 cri.go:89] found id: ""
	I0819 19:15:08.087073  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.087082  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:08.087089  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:08.087140  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:08.122012  438716 cri.go:89] found id: ""
	I0819 19:15:08.122048  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.122059  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:08.122068  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:08.122134  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:08.155319  438716 cri.go:89] found id: ""
	I0819 19:15:08.155349  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.155360  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:08.155368  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:08.155447  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:08.196003  438716 cri.go:89] found id: ""
	I0819 19:15:08.196027  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.196035  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:08.196041  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:08.196091  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:08.230798  438716 cri.go:89] found id: ""
	I0819 19:15:08.230826  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.230836  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:08.230845  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:08.230910  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:08.267522  438716 cri.go:89] found id: ""
	I0819 19:15:08.267554  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.267562  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:08.267569  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:08.267621  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:08.304775  438716 cri.go:89] found id: ""
	I0819 19:15:08.304801  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.304809  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:08.304815  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:08.304866  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:08.344694  438716 cri.go:89] found id: ""
	I0819 19:15:08.344720  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.344734  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:08.344744  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:08.344757  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:08.383581  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:08.383619  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:08.433868  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:08.433905  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:08.447627  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:08.447657  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:08.518846  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:08.518869  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:08.518887  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:07.901746  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:09.902647  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:07.693351  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:10.193893  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:12.194400  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:09.919436  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:12.418215  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:11.104449  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:11.118149  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:11.118228  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:11.157917  438716 cri.go:89] found id: ""
	I0819 19:15:11.157951  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.157963  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:11.157971  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:11.158040  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:11.196685  438716 cri.go:89] found id: ""
	I0819 19:15:11.196711  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.196721  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:11.196729  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:11.196788  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:11.231089  438716 cri.go:89] found id: ""
	I0819 19:15:11.231124  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.231135  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:11.231144  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:11.231223  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:11.267001  438716 cri.go:89] found id: ""
	I0819 19:15:11.267032  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.267041  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:11.267048  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:11.267113  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:11.302178  438716 cri.go:89] found id: ""
	I0819 19:15:11.302210  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.302223  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:11.302232  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:11.302292  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:11.336335  438716 cri.go:89] found id: ""
	I0819 19:15:11.336368  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.336442  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:11.336458  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:11.336525  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:11.370891  438716 cri.go:89] found id: ""
	I0819 19:15:11.370926  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.370937  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:11.370945  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:11.371007  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:11.407439  438716 cri.go:89] found id: ""
	I0819 19:15:11.407466  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.407473  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:11.407482  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:11.407497  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:11.458692  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:11.458735  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:11.473104  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:11.473133  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:11.542004  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:11.542031  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:11.542050  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:11.619972  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:11.620014  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:14.159220  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:14.173135  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:14.173204  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:14.210347  438716 cri.go:89] found id: ""
	I0819 19:15:14.210377  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.210389  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:14.210398  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:14.210468  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:14.247143  438716 cri.go:89] found id: ""
	I0819 19:15:14.247169  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.247180  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:14.247187  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:14.247260  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:14.284949  438716 cri.go:89] found id: ""
	I0819 19:15:14.284981  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.284995  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:14.285003  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:14.285071  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:14.326801  438716 cri.go:89] found id: ""
	I0819 19:15:14.326826  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.326834  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:14.326842  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:14.326903  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:14.362730  438716 cri.go:89] found id: ""
	I0819 19:15:14.362764  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.362775  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:14.362783  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:14.362852  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:14.403406  438716 cri.go:89] found id: ""
	I0819 19:15:14.403437  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.403448  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:14.403456  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:14.403514  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:14.440641  438716 cri.go:89] found id: ""
	I0819 19:15:14.440670  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.440678  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:14.440685  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:14.440737  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:14.479477  438716 cri.go:89] found id: ""
	I0819 19:15:14.479511  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.479521  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:14.479530  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:14.479544  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:14.530573  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:14.530620  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:14.545329  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:14.545368  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:14.619632  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:14.619652  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:14.619680  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:14.694923  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:14.694956  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:12.401350  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:14.402845  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:14.693534  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:16.693737  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:14.420872  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:16.918227  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:18.919244  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:17.237830  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:17.250579  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:17.250645  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:17.284706  438716 cri.go:89] found id: ""
	I0819 19:15:17.284738  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.284750  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:17.284759  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:17.284832  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:17.320313  438716 cri.go:89] found id: ""
	I0819 19:15:17.320342  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.320350  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:17.320356  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:17.320419  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:17.355974  438716 cri.go:89] found id: ""
	I0819 19:15:17.356008  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.356018  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:17.356027  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:17.356093  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:17.390759  438716 cri.go:89] found id: ""
	I0819 19:15:17.390786  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.390795  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:17.390803  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:17.390861  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:17.431951  438716 cri.go:89] found id: ""
	I0819 19:15:17.431982  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.431993  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:17.432001  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:17.432068  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:17.467183  438716 cri.go:89] found id: ""
	I0819 19:15:17.467215  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.467227  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:17.467236  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:17.467306  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:17.502678  438716 cri.go:89] found id: ""
	I0819 19:15:17.502709  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.502721  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:17.502730  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:17.502801  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:17.537597  438716 cri.go:89] found id: ""
	I0819 19:15:17.537629  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.537643  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:17.537656  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:17.537672  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:17.620076  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:17.620117  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:17.659979  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:17.660009  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:17.710963  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:17.711006  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:17.725556  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:17.725590  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:17.796176  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:20.297246  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:20.311395  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:20.311476  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:20.352279  438716 cri.go:89] found id: ""
	I0819 19:15:20.352317  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.352328  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:20.352338  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:20.352401  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:20.390335  438716 cri.go:89] found id: ""
	I0819 19:15:20.390368  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.390377  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:20.390384  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:20.390450  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:20.430264  438716 cri.go:89] found id: ""
	I0819 19:15:20.430300  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.430312  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:20.430320  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:20.430386  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:20.469670  438716 cri.go:89] found id: ""
	I0819 19:15:20.469703  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.469715  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:20.469723  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:20.469790  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:20.503233  438716 cri.go:89] found id: ""
	I0819 19:15:20.503263  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.503274  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:20.503283  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:20.503371  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:16.902246  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:19.402407  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:18.693921  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:21.193124  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:21.418463  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:23.418730  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:20.538180  438716 cri.go:89] found id: ""
	I0819 19:15:20.538211  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.538223  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:20.538231  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:20.538302  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:20.573301  438716 cri.go:89] found id: ""
	I0819 19:15:20.573329  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.573337  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:20.573352  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:20.573411  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:20.606962  438716 cri.go:89] found id: ""
	I0819 19:15:20.606995  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.607007  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:20.607019  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:20.607035  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:20.658392  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:20.658428  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:20.672063  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:20.672092  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:20.747987  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:20.748010  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:20.748035  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:20.829367  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:20.829415  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:23.378885  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:23.393711  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:23.393778  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:23.430629  438716 cri.go:89] found id: ""
	I0819 19:15:23.430655  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.430665  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:23.430675  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:23.430727  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:23.467509  438716 cri.go:89] found id: ""
	I0819 19:15:23.467541  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.467552  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:23.467560  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:23.467634  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:23.505313  438716 cri.go:89] found id: ""
	I0819 19:15:23.505351  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.505359  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:23.505366  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:23.505416  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:23.543393  438716 cri.go:89] found id: ""
	I0819 19:15:23.543428  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.543441  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:23.543450  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:23.543514  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:23.578265  438716 cri.go:89] found id: ""
	I0819 19:15:23.578293  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.578301  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:23.578308  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:23.578376  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:23.613951  438716 cri.go:89] found id: ""
	I0819 19:15:23.613981  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.613989  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:23.613996  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:23.614061  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:23.647387  438716 cri.go:89] found id: ""
	I0819 19:15:23.647418  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.647426  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:23.647433  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:23.647501  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:23.682482  438716 cri.go:89] found id: ""
	I0819 19:15:23.682510  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.682519  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:23.682530  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:23.682547  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:23.696601  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:23.696629  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:23.766762  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:23.766788  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:23.766804  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:23.850947  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:23.850988  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:23.891113  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:23.891146  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:21.902926  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:24.401874  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:23.193192  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:25.193347  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:25.919555  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:28.419920  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:26.444086  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:26.457774  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:26.457844  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:26.494525  438716 cri.go:89] found id: ""
	I0819 19:15:26.494552  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.494560  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:26.494567  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:26.494618  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:26.535317  438716 cri.go:89] found id: ""
	I0819 19:15:26.535348  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.535359  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:26.535368  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:26.535437  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:26.570853  438716 cri.go:89] found id: ""
	I0819 19:15:26.570886  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.570896  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:26.570920  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:26.570987  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:26.610739  438716 cri.go:89] found id: ""
	I0819 19:15:26.610773  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.610785  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:26.610794  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:26.610885  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:26.651274  438716 cri.go:89] found id: ""
	I0819 19:15:26.651303  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.651311  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:26.651318  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:26.651367  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:26.689963  438716 cri.go:89] found id: ""
	I0819 19:15:26.689993  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.690005  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:26.690013  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:26.690083  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:26.729433  438716 cri.go:89] found id: ""
	I0819 19:15:26.729465  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.729475  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:26.729483  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:26.729548  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:26.768386  438716 cri.go:89] found id: ""
	I0819 19:15:26.768418  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.768427  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:26.768436  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:26.768449  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:26.821526  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:26.821564  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:26.835714  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:26.835763  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:26.907981  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:26.908007  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:26.908023  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:26.991969  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:26.992008  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:29.529743  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:29.544812  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:29.544883  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:29.581455  438716 cri.go:89] found id: ""
	I0819 19:15:29.581486  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.581496  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:29.581503  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:29.581559  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:29.634542  438716 cri.go:89] found id: ""
	I0819 19:15:29.634576  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.634587  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:29.634596  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:29.634663  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:29.670388  438716 cri.go:89] found id: ""
	I0819 19:15:29.670422  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.670439  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:29.670449  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:29.670511  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:29.712267  438716 cri.go:89] found id: ""
	I0819 19:15:29.712293  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.712304  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:29.712313  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:29.712376  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:29.752392  438716 cri.go:89] found id: ""
	I0819 19:15:29.752423  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.752432  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:29.752438  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:29.752500  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:29.791734  438716 cri.go:89] found id: ""
	I0819 19:15:29.791763  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.791772  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:29.791778  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:29.791830  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:29.832882  438716 cri.go:89] found id: ""
	I0819 19:15:29.832910  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.832921  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:29.832929  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:29.832986  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:29.872035  438716 cri.go:89] found id: ""
	I0819 19:15:29.872068  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.872076  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:29.872086  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:29.872098  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:29.926551  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:29.926588  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:29.940500  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:29.940537  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:30.010327  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:30.010348  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:30.010368  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:30.090864  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:30.090910  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:26.902881  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:29.401449  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:27.692753  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:29.693161  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:32.193256  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:30.421066  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:32.918642  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:32.636291  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:32.649264  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:32.649334  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:32.683746  438716 cri.go:89] found id: ""
	I0819 19:15:32.683774  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.683785  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:32.683794  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:32.683867  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:32.723805  438716 cri.go:89] found id: ""
	I0819 19:15:32.723838  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.723850  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:32.723858  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:32.723917  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:32.758119  438716 cri.go:89] found id: ""
	I0819 19:15:32.758148  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.758157  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:32.758164  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:32.758215  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:32.792726  438716 cri.go:89] found id: ""
	I0819 19:15:32.792754  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.792768  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:32.792775  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:32.792823  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:32.829180  438716 cri.go:89] found id: ""
	I0819 19:15:32.829208  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.829217  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:32.829224  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:32.829274  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:32.869045  438716 cri.go:89] found id: ""
	I0819 19:15:32.869081  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.869093  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:32.869102  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:32.869172  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:32.904780  438716 cri.go:89] found id: ""
	I0819 19:15:32.904803  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.904811  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:32.904818  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:32.904870  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:32.940846  438716 cri.go:89] found id: ""
	I0819 19:15:32.940876  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.940886  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:32.940900  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:32.940924  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:33.008569  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:33.008592  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:33.008606  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:33.092605  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:33.092657  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:33.133016  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:33.133045  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:33.188335  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:33.188376  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:31.901719  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:34.401060  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:36.401983  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:34.193690  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:36.694042  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:34.918948  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:37.418186  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:35.704043  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:35.717647  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:35.717708  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:35.752337  438716 cri.go:89] found id: ""
	I0819 19:15:35.752364  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.752372  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:35.752378  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:35.752431  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:35.787233  438716 cri.go:89] found id: ""
	I0819 19:15:35.787261  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.787269  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:35.787275  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:35.787334  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:35.819641  438716 cri.go:89] found id: ""
	I0819 19:15:35.819667  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.819697  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:35.819705  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:35.819775  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:35.856133  438716 cri.go:89] found id: ""
	I0819 19:15:35.856160  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.856169  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:35.856176  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:35.856240  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:35.889390  438716 cri.go:89] found id: ""
	I0819 19:15:35.889422  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.889432  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:35.889438  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:35.889501  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:35.927477  438716 cri.go:89] found id: ""
	I0819 19:15:35.927519  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.927531  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:35.927539  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:35.927600  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:35.961787  438716 cri.go:89] found id: ""
	I0819 19:15:35.961825  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.961837  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:35.961845  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:35.961912  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:35.998350  438716 cri.go:89] found id: ""
	I0819 19:15:35.998384  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.998396  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:35.998407  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:35.998419  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:36.054352  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:36.054394  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:36.078278  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:36.078311  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:36.166388  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:36.166416  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:36.166433  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:36.247222  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:36.247269  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:38.786510  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:38.800306  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:38.800364  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:38.834555  438716 cri.go:89] found id: ""
	I0819 19:15:38.834583  438716 logs.go:276] 0 containers: []
	W0819 19:15:38.834591  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:38.834598  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:38.834648  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:38.869078  438716 cri.go:89] found id: ""
	I0819 19:15:38.869105  438716 logs.go:276] 0 containers: []
	W0819 19:15:38.869114  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:38.869120  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:38.869174  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:38.903702  438716 cri.go:89] found id: ""
	I0819 19:15:38.903728  438716 logs.go:276] 0 containers: []
	W0819 19:15:38.903736  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:38.903743  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:38.903795  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:38.938326  438716 cri.go:89] found id: ""
	I0819 19:15:38.938352  438716 logs.go:276] 0 containers: []
	W0819 19:15:38.938360  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:38.938367  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:38.938422  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:38.976032  438716 cri.go:89] found id: ""
	I0819 19:15:38.976063  438716 logs.go:276] 0 containers: []
	W0819 19:15:38.976075  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:38.976084  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:38.976149  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:39.009957  438716 cri.go:89] found id: ""
	I0819 19:15:39.009991  438716 logs.go:276] 0 containers: []
	W0819 19:15:39.010002  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:39.010011  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:39.010077  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:39.046381  438716 cri.go:89] found id: ""
	I0819 19:15:39.046408  438716 logs.go:276] 0 containers: []
	W0819 19:15:39.046416  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:39.046422  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:39.046474  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:39.083022  438716 cri.go:89] found id: ""
	I0819 19:15:39.083050  438716 logs.go:276] 0 containers: []
	W0819 19:15:39.083058  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:39.083067  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:39.083079  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:39.160731  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:39.160768  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:39.204846  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:39.204879  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:39.259248  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:39.259287  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:39.273764  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:39.273796  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:39.344477  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:38.402275  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:40.901494  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:39.194367  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:41.692933  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:39.419291  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:41.919708  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:43.919984  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:41.845258  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:41.861691  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:41.861754  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:41.908235  438716 cri.go:89] found id: ""
	I0819 19:15:41.908269  438716 logs.go:276] 0 containers: []
	W0819 19:15:41.908281  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:41.908289  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:41.908357  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:41.965631  438716 cri.go:89] found id: ""
	I0819 19:15:41.965657  438716 logs.go:276] 0 containers: []
	W0819 19:15:41.965667  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:41.965673  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:41.965732  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:42.004540  438716 cri.go:89] found id: ""
	I0819 19:15:42.004569  438716 logs.go:276] 0 containers: []
	W0819 19:15:42.004578  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:42.004585  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:42.004650  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:42.042189  438716 cri.go:89] found id: ""
	I0819 19:15:42.042215  438716 logs.go:276] 0 containers: []
	W0819 19:15:42.042224  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:42.042231  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:42.042299  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:42.079313  438716 cri.go:89] found id: ""
	I0819 19:15:42.079349  438716 logs.go:276] 0 containers: []
	W0819 19:15:42.079361  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:42.079370  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:42.079450  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:42.116130  438716 cri.go:89] found id: ""
	I0819 19:15:42.116164  438716 logs.go:276] 0 containers: []
	W0819 19:15:42.116176  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:42.116184  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:42.116253  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:42.154886  438716 cri.go:89] found id: ""
	I0819 19:15:42.154919  438716 logs.go:276] 0 containers: []
	W0819 19:15:42.154928  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:42.154935  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:42.154987  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:42.191204  438716 cri.go:89] found id: ""
	I0819 19:15:42.191237  438716 logs.go:276] 0 containers: []
	W0819 19:15:42.191248  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:42.191258  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:42.191275  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:42.244395  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:42.244434  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:42.258029  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:42.258066  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:42.323461  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:42.323481  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:42.323498  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:42.401932  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:42.401969  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:44.943615  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:44.958243  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:44.958315  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:44.995181  438716 cri.go:89] found id: ""
	I0819 19:15:44.995217  438716 logs.go:276] 0 containers: []
	W0819 19:15:44.995236  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:44.995244  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:44.995309  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:45.030705  438716 cri.go:89] found id: ""
	I0819 19:15:45.030743  438716 logs.go:276] 0 containers: []
	W0819 19:15:45.030752  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:45.030759  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:45.030814  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:45.068186  438716 cri.go:89] found id: ""
	I0819 19:15:45.068215  438716 logs.go:276] 0 containers: []
	W0819 19:15:45.068224  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:45.068231  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:45.068314  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:45.105415  438716 cri.go:89] found id: ""
	I0819 19:15:45.105443  438716 logs.go:276] 0 containers: []
	W0819 19:15:45.105452  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:45.105458  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:45.105517  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:45.143628  438716 cri.go:89] found id: ""
	I0819 19:15:45.143662  438716 logs.go:276] 0 containers: []
	W0819 19:15:45.143694  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:45.143704  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:45.143771  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:45.184896  438716 cri.go:89] found id: ""
	I0819 19:15:45.184922  438716 logs.go:276] 0 containers: []
	W0819 19:15:45.184930  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:45.184937  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:45.185000  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:45.222599  438716 cri.go:89] found id: ""
	I0819 19:15:45.222631  438716 logs.go:276] 0 containers: []
	W0819 19:15:45.222639  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:45.222645  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:45.222700  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:45.260310  438716 cri.go:89] found id: ""
	I0819 19:15:45.260341  438716 logs.go:276] 0 containers: []
	W0819 19:15:45.260352  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:45.260361  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:45.260379  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:45.273687  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:45.273718  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:45.351367  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:45.351390  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:45.351407  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:45.428751  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:45.428787  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:45.468830  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:45.468869  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:42.902576  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:45.402812  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:43.693205  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:46.192804  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:46.419903  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:48.918620  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:48.023654  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:48.037206  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:48.037294  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:48.071647  438716 cri.go:89] found id: ""
	I0819 19:15:48.071686  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.071695  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:48.071704  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:48.071765  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:48.106542  438716 cri.go:89] found id: ""
	I0819 19:15:48.106575  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.106586  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:48.106596  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:48.106662  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:48.151917  438716 cri.go:89] found id: ""
	I0819 19:15:48.151949  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.151959  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:48.151966  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:48.152022  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:48.190095  438716 cri.go:89] found id: ""
	I0819 19:15:48.190125  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.190137  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:48.190146  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:48.190211  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:48.227193  438716 cri.go:89] found id: ""
	I0819 19:15:48.227228  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.227240  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:48.227248  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:48.227317  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:48.261353  438716 cri.go:89] found id: ""
	I0819 19:15:48.261386  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.261396  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:48.261403  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:48.261455  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:48.295749  438716 cri.go:89] found id: ""
	I0819 19:15:48.295782  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.295794  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:48.295803  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:48.295874  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:48.338350  438716 cri.go:89] found id: ""
	I0819 19:15:48.338383  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.338394  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:48.338404  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:48.338420  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:48.420705  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:48.420749  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:48.464114  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:48.464153  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:48.519461  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:48.519505  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:48.534324  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:48.534357  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:48.603580  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:47.900813  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:49.902363  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:48.194425  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:50.693598  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:51.419909  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:53.918494  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:51.104343  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:51.117552  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:51.117629  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:51.150630  438716 cri.go:89] found id: ""
	I0819 19:15:51.150665  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.150677  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:51.150691  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:51.150765  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:51.184316  438716 cri.go:89] found id: ""
	I0819 19:15:51.184346  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.184356  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:51.184362  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:51.184410  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:51.221252  438716 cri.go:89] found id: ""
	I0819 19:15:51.221277  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.221286  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:51.221292  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:51.221349  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:51.255727  438716 cri.go:89] found id: ""
	I0819 19:15:51.255755  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.255763  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:51.255769  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:51.255823  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:51.290615  438716 cri.go:89] found id: ""
	I0819 19:15:51.290651  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.290660  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:51.290667  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:51.290721  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:51.326895  438716 cri.go:89] found id: ""
	I0819 19:15:51.326922  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.326930  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:51.326937  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:51.326987  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:51.365516  438716 cri.go:89] found id: ""
	I0819 19:15:51.365547  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.365558  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:51.365566  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:51.365632  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:51.399002  438716 cri.go:89] found id: ""
	I0819 19:15:51.399030  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.399038  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:51.399048  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:51.399059  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:51.453481  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:51.453524  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:51.467246  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:51.467277  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:51.548547  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:51.548578  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:51.548595  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:51.635627  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:51.635670  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:54.175003  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:54.190462  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:54.190537  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:54.232140  438716 cri.go:89] found id: ""
	I0819 19:15:54.232168  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.232178  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:54.232186  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:54.232254  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:54.267700  438716 cri.go:89] found id: ""
	I0819 19:15:54.267732  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.267742  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:54.267748  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:54.267807  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:54.306272  438716 cri.go:89] found id: ""
	I0819 19:15:54.306300  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.306308  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:54.306315  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:54.306368  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:54.341503  438716 cri.go:89] found id: ""
	I0819 19:15:54.341536  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.341549  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:54.341556  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:54.341609  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:54.375535  438716 cri.go:89] found id: ""
	I0819 19:15:54.375570  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.375582  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:54.375591  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:54.375661  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:54.409611  438716 cri.go:89] found id: ""
	I0819 19:15:54.409641  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.409653  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:54.409662  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:54.409731  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:54.444318  438716 cri.go:89] found id: ""
	I0819 19:15:54.444346  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.444358  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:54.444366  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:54.444425  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:54.480746  438716 cri.go:89] found id: ""
	I0819 19:15:54.480777  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.480789  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:54.480802  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:54.480817  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:54.534209  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:54.534245  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:54.549557  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:54.549598  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:54.625086  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:54.625111  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:54.625136  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:54.705549  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:54.705589  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:52.401150  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:54.402049  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:56.402545  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:52.693826  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:54.694875  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:57.193741  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:56.418166  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:58.418955  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:57.257440  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:57.276724  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:57.276812  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:57.319032  438716 cri.go:89] found id: ""
	I0819 19:15:57.319062  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.319073  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:57.319081  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:57.319163  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:57.357093  438716 cri.go:89] found id: ""
	I0819 19:15:57.357129  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.357140  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:57.357152  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:57.357222  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:57.393978  438716 cri.go:89] found id: ""
	I0819 19:15:57.394013  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.394025  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:57.394033  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:57.394102  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:57.428731  438716 cri.go:89] found id: ""
	I0819 19:15:57.428760  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.428768  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:57.428775  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:57.428824  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:57.467772  438716 cri.go:89] found id: ""
	I0819 19:15:57.467810  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.467822  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:57.467832  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:57.467904  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:57.502398  438716 cri.go:89] found id: ""
	I0819 19:15:57.502434  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.502444  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:57.502450  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:57.502503  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:57.536729  438716 cri.go:89] found id: ""
	I0819 19:15:57.536760  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.536771  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:57.536779  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:57.536845  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:57.574738  438716 cri.go:89] found id: ""
	I0819 19:15:57.574762  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.574770  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:57.574780  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:57.574793  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:57.630063  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:57.630113  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:57.643083  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:57.643111  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:57.725081  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:57.725104  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:57.725118  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:57.805065  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:57.805105  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:00.344557  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:00.357940  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:00.358005  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:00.399319  438716 cri.go:89] found id: ""
	I0819 19:16:00.399355  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.399368  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:00.399377  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:00.399446  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:00.444223  438716 cri.go:89] found id: ""
	I0819 19:16:00.444254  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.444264  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:00.444271  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:00.444323  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:00.479903  438716 cri.go:89] found id: ""
	I0819 19:16:00.479932  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.479942  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:00.479948  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:00.480003  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:00.515923  438716 cri.go:89] found id: ""
	I0819 19:16:00.515954  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.515966  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:00.515974  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:00.516043  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:58.901349  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:00.902114  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:59.194660  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:01.693174  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:00.419210  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:02.918814  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:00.551319  438716 cri.go:89] found id: ""
	I0819 19:16:00.551348  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.551360  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:00.551370  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:00.551434  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:00.587847  438716 cri.go:89] found id: ""
	I0819 19:16:00.587882  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.587892  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:00.587901  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:00.587976  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:00.624769  438716 cri.go:89] found id: ""
	I0819 19:16:00.624800  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.624812  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:00.624820  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:00.624894  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:00.659300  438716 cri.go:89] found id: ""
	I0819 19:16:00.659330  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.659342  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:00.659355  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:00.659371  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:00.739073  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:00.739113  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:00.779087  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:00.779116  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:00.831864  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:00.831914  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:00.845832  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:00.845863  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:00.920622  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:03.420751  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:03.434599  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:03.434664  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:03.469288  438716 cri.go:89] found id: ""
	I0819 19:16:03.469326  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.469349  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:03.469372  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:03.469445  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:03.507885  438716 cri.go:89] found id: ""
	I0819 19:16:03.507911  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.507927  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:03.507934  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:03.507987  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:03.543805  438716 cri.go:89] found id: ""
	I0819 19:16:03.543837  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.543847  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:03.543854  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:03.543928  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:03.584060  438716 cri.go:89] found id: ""
	I0819 19:16:03.584093  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.584105  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:03.584114  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:03.584202  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:03.619724  438716 cri.go:89] found id: ""
	I0819 19:16:03.619758  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.619769  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:03.619776  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:03.619854  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:03.657180  438716 cri.go:89] found id: ""
	I0819 19:16:03.657213  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.657225  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:03.657234  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:03.657303  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:03.695099  438716 cri.go:89] found id: ""
	I0819 19:16:03.695125  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.695134  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:03.695139  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:03.695193  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:03.730263  438716 cri.go:89] found id: ""
	I0819 19:16:03.730291  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.730302  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:03.730314  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:03.730331  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:03.780776  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:03.780816  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:03.795381  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:03.795419  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:03.869995  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:03.870016  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:03.870029  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:03.949654  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:03.949691  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:03.402500  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:05.902412  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:03.694220  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:06.193280  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:04.919284  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:07.418061  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:06.493589  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:06.506758  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:06.506834  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:06.545325  438716 cri.go:89] found id: ""
	I0819 19:16:06.545357  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.545370  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:06.545378  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:06.545443  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:06.581708  438716 cri.go:89] found id: ""
	I0819 19:16:06.581741  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.581753  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:06.581761  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:06.581828  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:06.626543  438716 cri.go:89] found id: ""
	I0819 19:16:06.626588  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.626600  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:06.626609  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:06.626676  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:06.662466  438716 cri.go:89] found id: ""
	I0819 19:16:06.662499  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.662509  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:06.662518  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:06.662585  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:06.701584  438716 cri.go:89] found id: ""
	I0819 19:16:06.701619  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.701628  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:06.701635  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:06.701688  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:06.736245  438716 cri.go:89] found id: ""
	I0819 19:16:06.736280  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.736292  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:06.736300  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:06.736392  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:06.774411  438716 cri.go:89] found id: ""
	I0819 19:16:06.774439  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.774447  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:06.774454  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:06.774510  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:06.809560  438716 cri.go:89] found id: ""
	I0819 19:16:06.809597  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.809609  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:06.809624  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:06.809648  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:06.884841  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:06.884862  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:06.884878  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:06.971467  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:06.971507  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:07.010737  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:07.010767  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:07.063807  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:07.063846  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:09.578451  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:09.591643  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:09.591737  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:09.625607  438716 cri.go:89] found id: ""
	I0819 19:16:09.625639  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.625650  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:09.625659  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:09.625727  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:09.669145  438716 cri.go:89] found id: ""
	I0819 19:16:09.669177  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.669185  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:09.669191  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:09.669254  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:09.707035  438716 cri.go:89] found id: ""
	I0819 19:16:09.707064  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.707073  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:09.707080  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:09.707142  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:09.742089  438716 cri.go:89] found id: ""
	I0819 19:16:09.742116  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.742125  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:09.742132  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:09.742193  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:09.782736  438716 cri.go:89] found id: ""
	I0819 19:16:09.782774  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.782785  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:09.782794  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:09.782860  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:09.818003  438716 cri.go:89] found id: ""
	I0819 19:16:09.818031  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.818040  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:09.818047  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:09.818110  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:09.852716  438716 cri.go:89] found id: ""
	I0819 19:16:09.852748  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.852757  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:09.852764  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:09.852828  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:09.887176  438716 cri.go:89] found id: ""
	I0819 19:16:09.887206  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.887218  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:09.887230  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:09.887247  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:09.901547  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:09.901573  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:09.969153  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:09.969190  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:09.969205  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:10.053777  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:10.053820  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:10.100888  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:10.100916  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:08.401650  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:10.402279  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:08.194305  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:10.693097  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:09.418856  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:11.918836  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:12.655112  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:12.667824  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:12.667897  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:12.702337  438716 cri.go:89] found id: ""
	I0819 19:16:12.702364  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.702373  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:12.702379  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:12.702432  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:12.736628  438716 cri.go:89] found id: ""
	I0819 19:16:12.736655  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.736663  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:12.736669  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:12.736720  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:12.773598  438716 cri.go:89] found id: ""
	I0819 19:16:12.773628  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.773636  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:12.773643  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:12.773695  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:12.806584  438716 cri.go:89] found id: ""
	I0819 19:16:12.806620  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.806632  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:12.806640  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:12.806723  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:12.840535  438716 cri.go:89] found id: ""
	I0819 19:16:12.840561  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.840569  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:12.840575  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:12.840639  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:12.877680  438716 cri.go:89] found id: ""
	I0819 19:16:12.877712  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.877721  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:12.877728  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:12.877779  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:12.912226  438716 cri.go:89] found id: ""
	I0819 19:16:12.912253  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.912264  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:12.912272  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:12.912342  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:12.953463  438716 cri.go:89] found id: ""
	I0819 19:16:12.953493  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.953504  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:12.953524  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:12.953542  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:13.007648  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:13.007691  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:13.022452  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:13.022494  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:13.092411  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:13.092439  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:13.092455  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:13.168711  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:13.168750  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:12.903478  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:15.402551  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:12.693162  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:14.698051  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:17.193988  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:14.417821  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:16.418541  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:18.918478  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:15.711501  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:15.724841  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:15.724921  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:15.760120  438716 cri.go:89] found id: ""
	I0819 19:16:15.760149  438716 logs.go:276] 0 containers: []
	W0819 19:16:15.760158  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:15.760166  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:15.760234  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:15.794959  438716 cri.go:89] found id: ""
	I0819 19:16:15.794988  438716 logs.go:276] 0 containers: []
	W0819 19:16:15.794996  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:15.795002  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:15.795054  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:15.842776  438716 cri.go:89] found id: ""
	I0819 19:16:15.842804  438716 logs.go:276] 0 containers: []
	W0819 19:16:15.842814  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:15.842820  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:15.842874  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:15.882134  438716 cri.go:89] found id: ""
	I0819 19:16:15.882167  438716 logs.go:276] 0 containers: []
	W0819 19:16:15.882178  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:15.882187  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:15.882251  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:15.919296  438716 cri.go:89] found id: ""
	I0819 19:16:15.919325  438716 logs.go:276] 0 containers: []
	W0819 19:16:15.919336  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:15.919345  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:15.919409  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:15.956401  438716 cri.go:89] found id: ""
	I0819 19:16:15.956429  438716 logs.go:276] 0 containers: []
	W0819 19:16:15.956437  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:15.956444  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:15.956507  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:15.994271  438716 cri.go:89] found id: ""
	I0819 19:16:15.994304  438716 logs.go:276] 0 containers: []
	W0819 19:16:15.994314  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:15.994320  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:15.994378  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:16.033685  438716 cri.go:89] found id: ""
	I0819 19:16:16.033714  438716 logs.go:276] 0 containers: []
	W0819 19:16:16.033724  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:16.033736  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:16.033754  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:16.083929  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:16.083964  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:16.107309  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:16.107342  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:16.193657  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:16.193681  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:16.193697  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:16.276974  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:16.277016  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:18.818532  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:18.831586  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:18.831655  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:18.866663  438716 cri.go:89] found id: ""
	I0819 19:16:18.866689  438716 logs.go:276] 0 containers: []
	W0819 19:16:18.866700  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:18.866709  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:18.866769  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:18.900711  438716 cri.go:89] found id: ""
	I0819 19:16:18.900746  438716 logs.go:276] 0 containers: []
	W0819 19:16:18.900757  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:18.900765  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:18.900849  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:18.935156  438716 cri.go:89] found id: ""
	I0819 19:16:18.935179  438716 logs.go:276] 0 containers: []
	W0819 19:16:18.935186  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:18.935193  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:18.935246  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:18.973853  438716 cri.go:89] found id: ""
	I0819 19:16:18.973889  438716 logs.go:276] 0 containers: []
	W0819 19:16:18.973902  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:18.973911  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:18.973978  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:19.014212  438716 cri.go:89] found id: ""
	I0819 19:16:19.014241  438716 logs.go:276] 0 containers: []
	W0819 19:16:19.014250  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:19.014255  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:19.014317  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:19.056089  438716 cri.go:89] found id: ""
	I0819 19:16:19.056125  438716 logs.go:276] 0 containers: []
	W0819 19:16:19.056137  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:19.056146  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:19.056211  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:19.091372  438716 cri.go:89] found id: ""
	I0819 19:16:19.091399  438716 logs.go:276] 0 containers: []
	W0819 19:16:19.091411  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:19.091420  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:19.091478  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:19.129737  438716 cri.go:89] found id: ""
	I0819 19:16:19.129767  438716 logs.go:276] 0 containers: []
	W0819 19:16:19.129777  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:19.129787  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:19.129800  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:19.207325  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:19.207360  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:19.247780  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:19.247816  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:19.302496  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:19.302543  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:19.317706  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:19.317739  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:19.395029  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:17.901762  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:19.901818  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:19.195079  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:21.693863  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:21.418534  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:23.420217  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:21.895538  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:21.910595  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:21.910658  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:21.948363  438716 cri.go:89] found id: ""
	I0819 19:16:21.948398  438716 logs.go:276] 0 containers: []
	W0819 19:16:21.948410  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:21.948419  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:21.948492  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:21.983391  438716 cri.go:89] found id: ""
	I0819 19:16:21.983428  438716 logs.go:276] 0 containers: []
	W0819 19:16:21.983440  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:21.983449  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:21.983520  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:22.022383  438716 cri.go:89] found id: ""
	I0819 19:16:22.022415  438716 logs.go:276] 0 containers: []
	W0819 19:16:22.022427  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:22.022436  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:22.022493  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:22.060676  438716 cri.go:89] found id: ""
	I0819 19:16:22.060707  438716 logs.go:276] 0 containers: []
	W0819 19:16:22.060716  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:22.060725  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:22.060778  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:22.095188  438716 cri.go:89] found id: ""
	I0819 19:16:22.095218  438716 logs.go:276] 0 containers: []
	W0819 19:16:22.095227  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:22.095234  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:22.095300  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:22.131164  438716 cri.go:89] found id: ""
	I0819 19:16:22.131192  438716 logs.go:276] 0 containers: []
	W0819 19:16:22.131200  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:22.131209  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:22.131275  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:22.166539  438716 cri.go:89] found id: ""
	I0819 19:16:22.166566  438716 logs.go:276] 0 containers: []
	W0819 19:16:22.166573  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:22.166580  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:22.166643  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:22.205604  438716 cri.go:89] found id: ""
	I0819 19:16:22.205631  438716 logs.go:276] 0 containers: []
	W0819 19:16:22.205640  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:22.205649  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:22.205662  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:22.265650  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:22.265689  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:22.280401  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:22.280443  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:22.356818  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:22.356851  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:22.356872  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:22.437678  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:22.437719  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:24.979655  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:24.993462  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:24.993526  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:25.029955  438716 cri.go:89] found id: ""
	I0819 19:16:25.029983  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.029992  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:25.029999  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:25.030049  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:25.068478  438716 cri.go:89] found id: ""
	I0819 19:16:25.068507  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.068518  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:25.068527  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:25.068594  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:25.105209  438716 cri.go:89] found id: ""
	I0819 19:16:25.105238  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.105247  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:25.105256  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:25.105327  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:25.143166  438716 cri.go:89] found id: ""
	I0819 19:16:25.143203  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.143218  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:25.143225  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:25.143279  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:25.177993  438716 cri.go:89] found id: ""
	I0819 19:16:25.178023  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.178035  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:25.178044  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:25.178129  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:25.216473  438716 cri.go:89] found id: ""
	I0819 19:16:25.216501  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.216523  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:25.216540  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:25.216603  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:25.251454  438716 cri.go:89] found id: ""
	I0819 19:16:25.251486  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.251495  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:25.251501  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:25.251555  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:25.287145  438716 cri.go:89] found id: ""
	I0819 19:16:25.287179  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.287188  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:25.287198  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:25.287210  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:25.371571  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:25.371619  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:25.418247  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:25.418277  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:25.472209  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:25.472248  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:25.486286  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:25.486315  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 19:16:21.902887  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:23.904358  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:26.403026  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:24.193797  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:26.194535  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:25.919371  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:28.418267  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	W0819 19:16:25.554470  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:28.055382  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:28.068750  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:28.068827  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:28.101856  438716 cri.go:89] found id: ""
	I0819 19:16:28.101891  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.101903  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:28.101912  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:28.101977  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:28.136402  438716 cri.go:89] found id: ""
	I0819 19:16:28.136437  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.136449  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:28.136460  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:28.136528  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:28.171766  438716 cri.go:89] found id: ""
	I0819 19:16:28.171795  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.171803  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:28.171809  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:28.171864  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:28.206228  438716 cri.go:89] found id: ""
	I0819 19:16:28.206256  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.206264  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:28.206272  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:28.206337  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:28.248877  438716 cri.go:89] found id: ""
	I0819 19:16:28.248912  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.248923  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:28.248931  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:28.249002  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:28.290160  438716 cri.go:89] found id: ""
	I0819 19:16:28.290201  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.290212  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:28.290221  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:28.290287  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:28.340413  438716 cri.go:89] found id: ""
	I0819 19:16:28.340445  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.340454  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:28.340461  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:28.340513  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:28.385486  438716 cri.go:89] found id: ""
	I0819 19:16:28.385513  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.385521  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:28.385532  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:28.385544  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:28.441987  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:28.442029  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:28.456509  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:28.456538  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:28.527941  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:28.527976  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:28.527993  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:28.612696  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:28.612738  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:28.901312  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:30.901640  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:28.693578  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:30.693686  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:30.418811  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:32.919696  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:31.154773  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:31.168718  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:31.168789  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:31.205365  438716 cri.go:89] found id: ""
	I0819 19:16:31.205399  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.205411  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:31.205419  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:31.205496  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:31.238829  438716 cri.go:89] found id: ""
	I0819 19:16:31.238871  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.238879  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:31.238886  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:31.238936  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:31.273229  438716 cri.go:89] found id: ""
	I0819 19:16:31.273259  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.273304  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:31.273313  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:31.273377  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:31.309559  438716 cri.go:89] found id: ""
	I0819 19:16:31.309601  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.309613  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:31.309622  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:31.309689  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:31.344939  438716 cri.go:89] found id: ""
	I0819 19:16:31.344971  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.344981  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:31.344987  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:31.345043  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:31.382423  438716 cri.go:89] found id: ""
	I0819 19:16:31.382455  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.382468  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:31.382474  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:31.382525  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:31.420148  438716 cri.go:89] found id: ""
	I0819 19:16:31.420174  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.420184  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:31.420192  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:31.420262  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:31.455691  438716 cri.go:89] found id: ""
	I0819 19:16:31.455720  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.455730  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:31.455740  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:31.455753  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:31.509501  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:31.509549  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:31.523650  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:31.523693  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:31.591535  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:31.591557  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:31.591574  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:31.674038  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:31.674077  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:34.216506  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:34.232782  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:34.232875  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:34.286103  438716 cri.go:89] found id: ""
	I0819 19:16:34.286136  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.286147  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:34.286156  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:34.286221  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:34.324193  438716 cri.go:89] found id: ""
	I0819 19:16:34.324220  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.324229  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:34.324235  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:34.324292  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:34.382777  438716 cri.go:89] found id: ""
	I0819 19:16:34.382804  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.382814  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:34.382822  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:34.382887  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:34.420714  438716 cri.go:89] found id: ""
	I0819 19:16:34.420743  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.420753  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:34.420771  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:34.420840  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:34.455338  438716 cri.go:89] found id: ""
	I0819 19:16:34.455369  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.455381  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:34.455391  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:34.455467  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:34.489528  438716 cri.go:89] found id: ""
	I0819 19:16:34.489566  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.489575  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:34.489581  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:34.489634  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:34.523830  438716 cri.go:89] found id: ""
	I0819 19:16:34.523857  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.523866  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:34.523873  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:34.523940  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:34.559023  438716 cri.go:89] found id: ""
	I0819 19:16:34.559052  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.559063  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:34.559077  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:34.559092  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:34.639116  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:34.639159  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:34.675990  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:34.676017  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:34.730900  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:34.730935  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:34.744938  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:34.744964  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:34.816267  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:32.902138  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:35.401865  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:32.696537  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:35.192648  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:35.687633  438245 pod_ready.go:82] duration metric: took 4m0.000667446s for pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace to be "Ready" ...
	E0819 19:16:35.687688  438245 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0819 19:16:35.687715  438245 pod_ready.go:39] duration metric: took 4m13.552784118s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:16:35.687770  438245 kubeadm.go:597] duration metric: took 4m20.936149722s to restartPrimaryControlPlane
	W0819 19:16:35.687875  438245 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 19:16:35.687929  438245 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 19:16:35.419327  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:37.420007  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:37.317314  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:37.331915  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:37.331982  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:37.370233  438716 cri.go:89] found id: ""
	I0819 19:16:37.370261  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.370269  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:37.370276  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:37.370343  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:37.409042  438716 cri.go:89] found id: ""
	I0819 19:16:37.409071  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.409082  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:37.409090  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:37.409161  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:37.445903  438716 cri.go:89] found id: ""
	I0819 19:16:37.445932  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.445941  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:37.445948  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:37.445999  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:37.484275  438716 cri.go:89] found id: ""
	I0819 19:16:37.484318  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.484328  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:37.484334  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:37.484393  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:37.528131  438716 cri.go:89] found id: ""
	I0819 19:16:37.528161  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.528174  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:37.528180  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:37.528243  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:37.563374  438716 cri.go:89] found id: ""
	I0819 19:16:37.563406  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.563414  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:37.563421  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:37.563473  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:37.597234  438716 cri.go:89] found id: ""
	I0819 19:16:37.597260  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.597267  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:37.597274  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:37.597329  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:37.634809  438716 cri.go:89] found id: ""
	I0819 19:16:37.634845  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.634854  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:37.634864  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:37.634879  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:37.704354  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:37.704380  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:37.704396  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:37.788606  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:37.788646  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:37.830486  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:37.830513  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:37.890642  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:37.890681  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:40.405473  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:40.420019  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:40.420094  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:40.458558  438716 cri.go:89] found id: ""
	I0819 19:16:40.458586  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.458598  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:40.458606  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:40.458671  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:40.500353  438716 cri.go:89] found id: ""
	I0819 19:16:40.500379  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.500388  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:40.500394  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:40.500445  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:37.901881  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:39.902097  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:39.918877  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:41.919112  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:43.920092  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:40.534281  438716 cri.go:89] found id: ""
	I0819 19:16:40.534307  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.534316  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:40.534322  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:40.534379  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:40.569537  438716 cri.go:89] found id: ""
	I0819 19:16:40.569568  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.569578  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:40.569587  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:40.569654  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:40.603066  438716 cri.go:89] found id: ""
	I0819 19:16:40.603097  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.603110  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:40.603118  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:40.603171  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:40.637598  438716 cri.go:89] found id: ""
	I0819 19:16:40.637628  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.637637  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:40.637643  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:40.637704  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:40.673583  438716 cri.go:89] found id: ""
	I0819 19:16:40.673616  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.673629  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:40.673637  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:40.673692  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:40.708324  438716 cri.go:89] found id: ""
	I0819 19:16:40.708354  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.708363  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:40.708373  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:40.708387  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:40.789743  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:40.789782  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:40.830849  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:40.830884  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:40.882662  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:40.882700  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:40.896843  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:40.896869  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:40.969491  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:43.470579  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:43.483791  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:43.483876  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:43.523764  438716 cri.go:89] found id: ""
	I0819 19:16:43.523797  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.523809  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:43.523817  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:43.523882  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:43.557925  438716 cri.go:89] found id: ""
	I0819 19:16:43.557953  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.557960  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:43.557966  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:43.558017  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:43.591324  438716 cri.go:89] found id: ""
	I0819 19:16:43.591355  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.591364  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:43.591370  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:43.591421  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:43.625798  438716 cri.go:89] found id: ""
	I0819 19:16:43.625826  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.625834  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:43.625840  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:43.625898  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:43.659787  438716 cri.go:89] found id: ""
	I0819 19:16:43.659815  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.659823  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:43.659830  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:43.659882  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:43.692982  438716 cri.go:89] found id: ""
	I0819 19:16:43.693008  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.693017  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:43.693024  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:43.693075  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:43.726059  438716 cri.go:89] found id: ""
	I0819 19:16:43.726092  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.726104  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:43.726113  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:43.726187  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:43.760906  438716 cri.go:89] found id: ""
	I0819 19:16:43.760947  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.760958  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:43.760971  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:43.760994  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:43.812249  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:43.812285  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:43.826538  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:43.826566  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:43.894904  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:43.894926  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:43.894941  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:43.975746  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:43.975796  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:41.902398  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:43.902728  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:46.401834  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:46.419345  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:48.918688  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:46.515329  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:46.529088  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:46.529170  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:46.564525  438716 cri.go:89] found id: ""
	I0819 19:16:46.564557  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.564570  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:46.564578  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:46.564647  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:46.598457  438716 cri.go:89] found id: ""
	I0819 19:16:46.598485  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.598494  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:46.598499  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:46.598549  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:46.631767  438716 cri.go:89] found id: ""
	I0819 19:16:46.631798  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.631807  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:46.631814  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:46.631867  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:46.664978  438716 cri.go:89] found id: ""
	I0819 19:16:46.665013  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.665026  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:46.665034  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:46.665094  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:46.701024  438716 cri.go:89] found id: ""
	I0819 19:16:46.701052  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.701061  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:46.701067  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:46.701132  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:46.735834  438716 cri.go:89] found id: ""
	I0819 19:16:46.735874  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.735886  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:46.735894  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:46.735978  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:46.773392  438716 cri.go:89] found id: ""
	I0819 19:16:46.773426  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.773437  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:46.773445  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:46.773498  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:46.819800  438716 cri.go:89] found id: ""
	I0819 19:16:46.819829  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.819841  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:46.819869  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:46.819889  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:46.860633  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:46.860669  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:46.911895  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:46.911936  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:46.927388  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:46.927422  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:46.998601  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:46.998628  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:46.998645  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:49.585303  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:49.598962  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:49.599032  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:49.631891  438716 cri.go:89] found id: ""
	I0819 19:16:49.631920  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.631931  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:49.631940  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:49.631998  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:49.671731  438716 cri.go:89] found id: ""
	I0819 19:16:49.671761  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.671777  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:49.671786  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:49.671846  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:49.707517  438716 cri.go:89] found id: ""
	I0819 19:16:49.707556  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.707568  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:49.707578  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:49.707651  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:49.744255  438716 cri.go:89] found id: ""
	I0819 19:16:49.744289  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.744299  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:49.744305  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:49.744357  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:49.779224  438716 cri.go:89] found id: ""
	I0819 19:16:49.779252  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.779259  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:49.779266  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:49.779322  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:49.815641  438716 cri.go:89] found id: ""
	I0819 19:16:49.815689  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.815701  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:49.815711  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:49.815769  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:49.851861  438716 cri.go:89] found id: ""
	I0819 19:16:49.851894  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.851906  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:49.851915  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:49.851984  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:49.888140  438716 cri.go:89] found id: ""
	I0819 19:16:49.888173  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.888186  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:49.888199  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:49.888215  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:49.940389  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:49.940430  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:49.954519  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:49.954553  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:50.028462  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:50.028486  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:50.028502  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:50.108319  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:50.108362  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:48.901902  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:50.902702  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:50.919079  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:52.919271  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:52.647146  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:52.660468  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:52.660558  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:52.697665  438716 cri.go:89] found id: ""
	I0819 19:16:52.697703  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.697719  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:52.697727  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:52.697786  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:52.739169  438716 cri.go:89] found id: ""
	I0819 19:16:52.739203  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.739214  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:52.739222  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:52.739289  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:52.776580  438716 cri.go:89] found id: ""
	I0819 19:16:52.776610  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.776619  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:52.776630  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:52.776683  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:52.813443  438716 cri.go:89] found id: ""
	I0819 19:16:52.813475  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.813488  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:52.813497  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:52.813557  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:52.848035  438716 cri.go:89] found id: ""
	I0819 19:16:52.848064  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.848075  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:52.848082  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:52.848150  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:52.881814  438716 cri.go:89] found id: ""
	I0819 19:16:52.881841  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.881858  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:52.881867  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:52.881930  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:52.922179  438716 cri.go:89] found id: ""
	I0819 19:16:52.922202  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.922210  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:52.922216  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:52.922277  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:52.958110  438716 cri.go:89] found id: ""
	I0819 19:16:52.958136  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.958144  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:52.958153  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:52.958167  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:53.008553  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:53.008592  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:53.022826  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:53.022860  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:53.094940  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:53.094967  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:53.094982  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:53.173877  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:53.173920  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:53.403382  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:55.905504  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:55.419297  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:55.419331  438295 pod_ready.go:82] duration metric: took 4m0.007107243s for pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace to be "Ready" ...
	E0819 19:16:55.419345  438295 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0819 19:16:55.419355  438295 pod_ready.go:39] duration metric: took 4m4.316528467s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:16:55.419408  438295 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:16:55.419449  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:55.419499  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:55.466648  438295 cri.go:89] found id: "d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a"
	I0819 19:16:55.466679  438295 cri.go:89] found id: ""
	I0819 19:16:55.466690  438295 logs.go:276] 1 containers: [d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a]
	I0819 19:16:55.466758  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:16:55.471085  438295 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:55.471164  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:55.509883  438295 cri.go:89] found id: "a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672"
	I0819 19:16:55.509910  438295 cri.go:89] found id: ""
	I0819 19:16:55.509921  438295 logs.go:276] 1 containers: [a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672]
	I0819 19:16:55.509984  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:16:55.516866  438295 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:55.516954  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:55.560957  438295 cri.go:89] found id: "a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f"
	I0819 19:16:55.560988  438295 cri.go:89] found id: ""
	I0819 19:16:55.560999  438295 logs.go:276] 1 containers: [a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f]
	I0819 19:16:55.561065  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:16:55.565592  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:55.565662  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:55.610872  438295 cri.go:89] found id: "c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22"
	I0819 19:16:55.610905  438295 cri.go:89] found id: ""
	I0819 19:16:55.610914  438295 logs.go:276] 1 containers: [c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22]
	I0819 19:16:55.610976  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:16:55.615411  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:55.615486  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:55.652759  438295 cri.go:89] found id: "3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338"
	I0819 19:16:55.652792  438295 cri.go:89] found id: ""
	I0819 19:16:55.652807  438295 logs.go:276] 1 containers: [3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338]
	I0819 19:16:55.652873  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:16:55.657124  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:55.657190  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:55.699063  438295 cri.go:89] found id: "6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071"
	I0819 19:16:55.699085  438295 cri.go:89] found id: ""
	I0819 19:16:55.699093  438295 logs.go:276] 1 containers: [6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071]
	I0819 19:16:55.699145  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:16:55.703224  438295 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:55.703292  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:55.753166  438295 cri.go:89] found id: ""
	I0819 19:16:55.753198  438295 logs.go:276] 0 containers: []
	W0819 19:16:55.753210  438295 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:55.753218  438295 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 19:16:55.753286  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 19:16:55.803518  438295 cri.go:89] found id: "902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8"
	I0819 19:16:55.803551  438295 cri.go:89] found id: "44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6"
	I0819 19:16:55.803558  438295 cri.go:89] found id: ""
	I0819 19:16:55.803568  438295 logs.go:276] 2 containers: [902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8 44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6]
	I0819 19:16:55.803637  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:16:55.808063  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:16:55.812708  438295 logs.go:123] Gathering logs for container status ...
	I0819 19:16:55.812737  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:55.861697  438295 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:55.861736  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 19:16:55.911203  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.671901     936 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:16:55.911420  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672098     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	W0819 19:16:55.911603  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.672624     936 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:16:55.911834  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672667     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	I0819 19:16:55.949585  438295 logs.go:123] Gathering logs for kube-scheduler [c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22] ...
	I0819 19:16:55.949663  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22"
	I0819 19:16:55.995063  438295 logs.go:123] Gathering logs for kube-controller-manager [6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071] ...
	I0819 19:16:55.995100  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071"
	I0819 19:16:56.062320  438295 logs.go:123] Gathering logs for storage-provisioner [902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8] ...
	I0819 19:16:56.062376  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8"
	I0819 19:16:56.100112  438295 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:56.100152  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:56.589439  438295 logs.go:123] Gathering logs for kube-proxy [3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338] ...
	I0819 19:16:56.589486  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338"
	I0819 19:16:56.632096  438295 logs.go:123] Gathering logs for storage-provisioner [44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6] ...
	I0819 19:16:56.632132  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6"
	I0819 19:16:56.670952  438295 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:56.670984  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:56.685246  438295 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:56.685279  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 19:16:56.826418  438295 logs.go:123] Gathering logs for kube-apiserver [d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a] ...
	I0819 19:16:56.826456  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a"
	I0819 19:16:56.876901  438295 logs.go:123] Gathering logs for etcd [a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672] ...
	I0819 19:16:56.876944  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672"
	I0819 19:16:56.920390  438295 logs.go:123] Gathering logs for coredns [a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f] ...
	I0819 19:16:56.920423  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f"
	I0819 19:16:56.961691  438295 out.go:358] Setting ErrFile to fd 2...
	I0819 19:16:56.961718  438295 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 19:16:56.961793  438295 out.go:270] X Problems detected in kubelet:
	W0819 19:16:56.961805  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.671901     936 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:16:56.961824  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672098     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	W0819 19:16:56.961839  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.672624     936 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:16:56.961853  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672667     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	I0819 19:16:56.961884  438295 out.go:358] Setting ErrFile to fd 2...
	I0819 19:16:56.961893  438295 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:16:55.716096  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:55.734732  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:55.734817  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:55.780484  438716 cri.go:89] found id: ""
	I0819 19:16:55.780514  438716 logs.go:276] 0 containers: []
	W0819 19:16:55.780525  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:55.780534  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:55.780607  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:55.821755  438716 cri.go:89] found id: ""
	I0819 19:16:55.821778  438716 logs.go:276] 0 containers: []
	W0819 19:16:55.821786  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:55.821792  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:55.821855  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:55.861032  438716 cri.go:89] found id: ""
	I0819 19:16:55.861066  438716 logs.go:276] 0 containers: []
	W0819 19:16:55.861077  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:55.861086  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:55.861159  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:55.909978  438716 cri.go:89] found id: ""
	I0819 19:16:55.910004  438716 logs.go:276] 0 containers: []
	W0819 19:16:55.910015  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:55.910024  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:55.910087  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:55.956603  438716 cri.go:89] found id: ""
	I0819 19:16:55.956634  438716 logs.go:276] 0 containers: []
	W0819 19:16:55.956645  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:55.956653  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:55.956722  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:55.999176  438716 cri.go:89] found id: ""
	I0819 19:16:55.999203  438716 logs.go:276] 0 containers: []
	W0819 19:16:55.999216  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:55.999225  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:55.999286  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:56.035141  438716 cri.go:89] found id: ""
	I0819 19:16:56.035172  438716 logs.go:276] 0 containers: []
	W0819 19:16:56.035183  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:56.035192  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:56.035255  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:56.076152  438716 cri.go:89] found id: ""
	I0819 19:16:56.076185  438716 logs.go:276] 0 containers: []
	W0819 19:16:56.076197  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:56.076209  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:56.076226  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:56.136624  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:56.136671  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:56.151867  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:56.151902  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:56.231650  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:56.231696  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:56.231713  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:56.307203  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:56.307247  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:58.848295  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:58.861984  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:58.862172  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:58.900089  438716 cri.go:89] found id: ""
	I0819 19:16:58.900114  438716 logs.go:276] 0 containers: []
	W0819 19:16:58.900124  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:58.900132  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:58.900203  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:58.932528  438716 cri.go:89] found id: ""
	I0819 19:16:58.932551  438716 logs.go:276] 0 containers: []
	W0819 19:16:58.932559  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:58.932565  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:58.932618  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:58.967255  438716 cri.go:89] found id: ""
	I0819 19:16:58.967283  438716 logs.go:276] 0 containers: []
	W0819 19:16:58.967291  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:58.967298  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:58.967349  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:59.000887  438716 cri.go:89] found id: ""
	I0819 19:16:59.000923  438716 logs.go:276] 0 containers: []
	W0819 19:16:59.000934  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:59.000942  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:59.001009  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:59.041386  438716 cri.go:89] found id: ""
	I0819 19:16:59.041417  438716 logs.go:276] 0 containers: []
	W0819 19:16:59.041428  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:59.041436  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:59.041499  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:59.080036  438716 cri.go:89] found id: ""
	I0819 19:16:59.080078  438716 logs.go:276] 0 containers: []
	W0819 19:16:59.080090  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:59.080099  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:59.080168  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:59.113946  438716 cri.go:89] found id: ""
	I0819 19:16:59.113982  438716 logs.go:276] 0 containers: []
	W0819 19:16:59.113995  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:59.114004  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:59.114066  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:59.155413  438716 cri.go:89] found id: ""
	I0819 19:16:59.155437  438716 logs.go:276] 0 containers: []
	W0819 19:16:59.155446  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:59.155456  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:59.155477  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:59.223795  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:59.223815  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:59.223828  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:59.304516  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:59.304554  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:59.344975  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:59.345005  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:59.397751  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:59.397789  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:58.402453  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:00.901494  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:02.043611  438245 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.355651212s)
	I0819 19:17:02.043735  438245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:17:02.066981  438245 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 19:17:02.083179  438245 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:17:02.100807  438245 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:17:02.100829  438245 kubeadm.go:157] found existing configuration files:
	
	I0819 19:17:02.100877  438245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0819 19:17:02.116462  438245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:17:02.116534  438245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:17:02.127313  438245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0819 19:17:02.147096  438245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:17:02.147170  438245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:17:02.159262  438245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0819 19:17:02.168825  438245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:17:02.168918  438245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:17:02.179354  438245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0819 19:17:02.188982  438245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:17:02.189051  438245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 19:17:02.199291  438245 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 19:17:01.914433  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:17:01.927468  438716 kubeadm.go:597] duration metric: took 4m3.453401239s to restartPrimaryControlPlane
	W0819 19:17:01.927564  438716 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 19:17:01.927600  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 19:17:02.647971  438716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:17:02.665946  438716 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 19:17:02.676665  438716 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:17:02.686818  438716 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:17:02.686840  438716 kubeadm.go:157] found existing configuration files:
	
	I0819 19:17:02.686885  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 19:17:02.697160  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:17:02.697228  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:17:02.707774  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 19:17:02.717251  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:17:02.717310  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:17:02.727481  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 19:17:02.738085  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:17:02.738141  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:17:02.749286  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 19:17:02.759965  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:17:02.760025  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 19:17:02.770753  438716 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 19:17:02.835857  438716 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 19:17:02.835940  438716 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 19:17:02.983775  438716 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 19:17:02.983974  438716 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 19:17:02.984149  438716 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 19:17:03.173404  438716 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 19:17:03.175412  438716 out.go:235]   - Generating certificates and keys ...
	I0819 19:17:03.175520  438716 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 19:17:03.175659  438716 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 19:17:03.175805  438716 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 19:17:03.175913  438716 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 19:17:03.176021  438716 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 19:17:03.176125  438716 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 19:17:03.176626  438716 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 19:17:03.177624  438716 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 19:17:03.178399  438716 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 19:17:03.179325  438716 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 19:17:03.179599  438716 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 19:17:03.179702  438716 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 19:17:03.416467  438716 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 19:17:03.505378  438716 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 19:17:03.588959  438716 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 19:17:03.680602  438716 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 19:17:03.697717  438716 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 19:17:03.700436  438716 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 19:17:03.700579  438716 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 19:17:03.858804  438716 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 19:17:03.861395  438716 out.go:235]   - Booting up control plane ...
	I0819 19:17:03.861520  438716 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 19:17:03.877387  438716 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 19:17:03.878611  438716 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 19:17:03.882842  438716 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 19:17:03.887436  438716 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 19:17:02.902839  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:05.402376  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:02.248409  438245 kubeadm.go:310] W0819 19:17:02.217617    2563 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 19:17:02.250447  438245 kubeadm.go:310] W0819 19:17:02.219827    2563 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 19:17:02.377127  438245 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 19:17:06.962848  438295 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:17:06.984774  438295 api_server.go:72] duration metric: took 4m23.117653428s to wait for apiserver process to appear ...
	I0819 19:17:06.984811  438295 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:17:06.984865  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:17:06.984939  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:17:07.025158  438295 cri.go:89] found id: "d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a"
	I0819 19:17:07.025201  438295 cri.go:89] found id: ""
	I0819 19:17:07.025213  438295 logs.go:276] 1 containers: [d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a]
	I0819 19:17:07.025287  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:07.032365  438295 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:17:07.032446  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:17:07.073368  438295 cri.go:89] found id: "a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672"
	I0819 19:17:07.073394  438295 cri.go:89] found id: ""
	I0819 19:17:07.073403  438295 logs.go:276] 1 containers: [a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672]
	I0819 19:17:07.073463  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:07.078781  438295 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:17:07.078891  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:17:07.123263  438295 cri.go:89] found id: "a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f"
	I0819 19:17:07.123293  438295 cri.go:89] found id: ""
	I0819 19:17:07.123303  438295 logs.go:276] 1 containers: [a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f]
	I0819 19:17:07.123365  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:07.128485  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:17:07.128579  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:17:07.167105  438295 cri.go:89] found id: "c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22"
	I0819 19:17:07.167137  438295 cri.go:89] found id: ""
	I0819 19:17:07.167148  438295 logs.go:276] 1 containers: [c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22]
	I0819 19:17:07.167215  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:07.171571  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:17:07.171641  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:17:07.215524  438295 cri.go:89] found id: "3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338"
	I0819 19:17:07.215547  438295 cri.go:89] found id: ""
	I0819 19:17:07.215555  438295 logs.go:276] 1 containers: [3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338]
	I0819 19:17:07.215621  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:07.221604  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:17:07.221676  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:17:07.263106  438295 cri.go:89] found id: "6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071"
	I0819 19:17:07.263140  438295 cri.go:89] found id: ""
	I0819 19:17:07.263149  438295 logs.go:276] 1 containers: [6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071]
	I0819 19:17:07.263209  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:07.267703  438295 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:17:07.267770  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:17:07.316006  438295 cri.go:89] found id: ""
	I0819 19:17:07.316042  438295 logs.go:276] 0 containers: []
	W0819 19:17:07.316054  438295 logs.go:278] No container was found matching "kindnet"
	I0819 19:17:07.316062  438295 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 19:17:07.316132  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 19:17:07.361100  438295 cri.go:89] found id: "902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8"
	I0819 19:17:07.361123  438295 cri.go:89] found id: "44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6"
	I0819 19:17:07.361126  438295 cri.go:89] found id: ""
	I0819 19:17:07.361133  438295 logs.go:276] 2 containers: [902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8 44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6]
	I0819 19:17:07.361190  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:07.366949  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:07.372724  438295 logs.go:123] Gathering logs for kubelet ...
	I0819 19:17:07.372748  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 19:17:07.413540  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.671901     936 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:17:07.413722  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672098     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	W0819 19:17:07.413858  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.672624     936 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:17:07.414017  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672667     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	I0819 19:17:07.452061  438295 logs.go:123] Gathering logs for coredns [a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f] ...
	I0819 19:17:07.452104  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f"
	I0819 19:17:07.490598  438295 logs.go:123] Gathering logs for kube-scheduler [c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22] ...
	I0819 19:17:07.490636  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22"
	I0819 19:17:07.530454  438295 logs.go:123] Gathering logs for kube-proxy [3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338] ...
	I0819 19:17:07.530486  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338"
	I0819 19:17:07.581488  438295 logs.go:123] Gathering logs for storage-provisioner [902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8] ...
	I0819 19:17:07.581528  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8"
	I0819 19:17:07.621752  438295 logs.go:123] Gathering logs for storage-provisioner [44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6] ...
	I0819 19:17:07.621787  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6"
	I0819 19:17:07.661330  438295 logs.go:123] Gathering logs for container status ...
	I0819 19:17:07.661365  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:17:07.709227  438295 logs.go:123] Gathering logs for dmesg ...
	I0819 19:17:07.709261  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:17:07.724634  438295 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:17:07.724670  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 19:17:07.850212  438295 logs.go:123] Gathering logs for kube-apiserver [d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a] ...
	I0819 19:17:07.850247  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a"
	I0819 19:17:07.894464  438295 logs.go:123] Gathering logs for etcd [a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672] ...
	I0819 19:17:07.894507  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672"
	I0819 19:17:07.943807  438295 logs.go:123] Gathering logs for kube-controller-manager [6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071] ...
	I0819 19:17:07.943841  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071"
	I0819 19:17:08.007428  438295 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:17:08.007463  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:17:08.487397  438295 out.go:358] Setting ErrFile to fd 2...
	I0819 19:17:08.487435  438295 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 19:17:08.487518  438295 out.go:270] X Problems detected in kubelet:
	W0819 19:17:08.487534  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.671901     936 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:17:08.487546  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672098     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	W0819 19:17:08.487560  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.672624     936 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:17:08.487574  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672667     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	I0819 19:17:08.487584  438295 out.go:358] Setting ErrFile to fd 2...
	I0819 19:17:08.487598  438295 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:17:10.237580  438245 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 19:17:10.237675  438245 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 19:17:10.237792  438245 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 19:17:10.237934  438245 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 19:17:10.238088  438245 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 19:17:10.238194  438245 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 19:17:10.239873  438245 out.go:235]   - Generating certificates and keys ...
	I0819 19:17:10.239957  438245 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 19:17:10.240051  438245 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 19:17:10.240187  438245 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 19:17:10.240294  438245 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 19:17:10.240410  438245 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 19:17:10.240495  438245 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 19:17:10.240598  438245 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 19:17:10.240680  438245 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 19:17:10.240747  438245 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 19:17:10.240843  438245 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 19:17:10.240886  438245 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 19:17:10.240958  438245 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 19:17:10.241024  438245 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 19:17:10.241094  438245 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 19:17:10.241159  438245 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 19:17:10.241248  438245 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 19:17:10.241328  438245 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 19:17:10.241431  438245 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 19:17:10.241535  438245 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 19:17:10.243764  438245 out.go:235]   - Booting up control plane ...
	I0819 19:17:10.243859  438245 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 19:17:10.243934  438245 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 19:17:10.243994  438245 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 19:17:10.244131  438245 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 19:17:10.244263  438245 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 19:17:10.244301  438245 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 19:17:10.244458  438245 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 19:17:10.244611  438245 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 19:17:10.244685  438245 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.412341ms
	I0819 19:17:10.244770  438245 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 19:17:10.244850  438245 kubeadm.go:310] [api-check] The API server is healthy after 5.002047877s
	I0819 19:17:10.244953  438245 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 19:17:10.245093  438245 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 19:17:10.245199  438245 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 19:17:10.245400  438245 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-982795 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 19:17:10.245465  438245 kubeadm.go:310] [bootstrap-token] Using token: trsfx5.kx2phd1605yhia2w
	I0819 19:17:10.247722  438245 out.go:235]   - Configuring RBAC rules ...
	I0819 19:17:10.247861  438245 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 19:17:10.247955  438245 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 19:17:10.248144  438245 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 19:17:10.248264  438245 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 19:17:10.248379  438245 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 19:17:10.248468  438245 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 19:17:10.248567  438245 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 19:17:10.248612  438245 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 19:17:10.248654  438245 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 19:17:10.248660  438245 kubeadm.go:310] 
	I0819 19:17:10.248708  438245 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 19:17:10.248713  438245 kubeadm.go:310] 
	I0819 19:17:10.248779  438245 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 19:17:10.248786  438245 kubeadm.go:310] 
	I0819 19:17:10.248806  438245 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 19:17:10.248866  438245 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 19:17:10.248910  438245 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 19:17:10.248916  438245 kubeadm.go:310] 
	I0819 19:17:10.248966  438245 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 19:17:10.248972  438245 kubeadm.go:310] 
	I0819 19:17:10.249014  438245 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 19:17:10.249024  438245 kubeadm.go:310] 
	I0819 19:17:10.249069  438245 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 19:17:10.249136  438245 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 19:17:10.249209  438245 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 19:17:10.249221  438245 kubeadm.go:310] 
	I0819 19:17:10.249319  438245 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 19:17:10.249386  438245 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 19:17:10.249392  438245 kubeadm.go:310] 
	I0819 19:17:10.249464  438245 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token trsfx5.kx2phd1605yhia2w \
	I0819 19:17:10.249553  438245 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3fcbd90565c5acbc36a47b2db682cb22dce9b172c9bf3af21e506ebb67608039 \
	I0819 19:17:10.249575  438245 kubeadm.go:310] 	--control-plane 
	I0819 19:17:10.249581  438245 kubeadm.go:310] 
	I0819 19:17:10.249658  438245 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 19:17:10.249664  438245 kubeadm.go:310] 
	I0819 19:17:10.249734  438245 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token trsfx5.kx2phd1605yhia2w \
	I0819 19:17:10.249833  438245 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3fcbd90565c5acbc36a47b2db682cb22dce9b172c9bf3af21e506ebb67608039 
	I0819 19:17:10.249849  438245 cni.go:84] Creating CNI manager for ""
	I0819 19:17:10.249857  438245 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:17:10.252133  438245 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 19:17:07.403590  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:09.901861  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:10.253419  438245 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 19:17:10.264266  438245 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 19:17:10.289509  438245 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 19:17:10.289661  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-982795 minikube.k8s.io/updated_at=2024_08_19T19_17_10_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411 minikube.k8s.io/name=default-k8s-diff-port-982795 minikube.k8s.io/primary=true
	I0819 19:17:10.289663  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:10.322738  438245 ops.go:34] apiserver oom_adj: -16
	I0819 19:17:10.519946  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:11.020736  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:11.520925  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:12.020276  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:12.520277  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:13.020787  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:13.520048  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:14.020893  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:14.520869  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:14.642214  438245 kubeadm.go:1113] duration metric: took 4.352638211s to wait for elevateKubeSystemPrivileges
	I0819 19:17:14.642251  438245 kubeadm.go:394] duration metric: took 4m59.943476935s to StartCluster
	I0819 19:17:14.642295  438245 settings.go:142] acquiring lock: {Name:mk396fcf49a1d0e69583cf37ff3c819e37118163 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:17:14.642382  438245 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 19:17:14.644103  438245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/kubeconfig: {Name:mk8e7b4e1bb7da665111d2acd83eb48882c66853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:17:14.644408  438245 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.48 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 19:17:14.644550  438245 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 19:17:14.644641  438245 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-982795"
	I0819 19:17:14.644665  438245 config.go:182] Loaded profile config "default-k8s-diff-port-982795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:17:14.644687  438245 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-982795"
	W0819 19:17:14.644701  438245 addons.go:243] addon storage-provisioner should already be in state true
	I0819 19:17:14.644712  438245 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-982795"
	I0819 19:17:14.644735  438245 host.go:66] Checking if "default-k8s-diff-port-982795" exists ...
	I0819 19:17:14.644757  438245 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-982795"
	W0819 19:17:14.644770  438245 addons.go:243] addon metrics-server should already be in state true
	I0819 19:17:14.644678  438245 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-982795"
	I0819 19:17:14.644852  438245 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-982795"
	I0819 19:17:14.644797  438245 host.go:66] Checking if "default-k8s-diff-port-982795" exists ...
	I0819 19:17:14.645125  438245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:17:14.645176  438245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:17:14.645272  438245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:17:14.645291  438245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:17:14.645355  438245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:17:14.645401  438245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:17:14.646083  438245 out.go:177] * Verifying Kubernetes components...
	I0819 19:17:14.647579  438245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:17:14.662756  438245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42581
	I0819 19:17:14.663407  438245 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:17:14.664088  438245 main.go:141] libmachine: Using API Version  1
	I0819 19:17:14.664117  438245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:17:14.664528  438245 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:17:14.665189  438245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:17:14.665222  438245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:17:14.665665  438245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43637
	I0819 19:17:14.665842  438245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44021
	I0819 19:17:14.666204  438245 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:17:14.666321  438245 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:17:14.666761  438245 main.go:141] libmachine: Using API Version  1
	I0819 19:17:14.666783  438245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:17:14.666955  438245 main.go:141] libmachine: Using API Version  1
	I0819 19:17:14.666979  438245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:17:14.667173  438245 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:17:14.667363  438245 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:17:14.667592  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetState
	I0819 19:17:14.667786  438245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:17:14.667818  438245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:17:14.671231  438245 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-982795"
	W0819 19:17:14.671249  438245 addons.go:243] addon default-storageclass should already be in state true
	I0819 19:17:14.671273  438245 host.go:66] Checking if "default-k8s-diff-port-982795" exists ...
	I0819 19:17:14.671507  438245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:17:14.671533  438245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:17:14.682996  438245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36593
	I0819 19:17:14.683560  438245 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:17:14.684268  438245 main.go:141] libmachine: Using API Version  1
	I0819 19:17:14.684292  438245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:17:14.684686  438245 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:17:14.684899  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetState
	I0819 19:17:14.686943  438245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44459
	I0819 19:17:14.687384  438245 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:17:14.687309  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:17:14.687874  438245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46587
	I0819 19:17:14.687965  438245 main.go:141] libmachine: Using API Version  1
	I0819 19:17:14.687980  438245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:17:14.688367  438245 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:17:14.688420  438245 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:17:14.688623  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetState
	I0819 19:17:14.689039  438245 main.go:141] libmachine: Using API Version  1
	I0819 19:17:14.689362  438245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:17:14.689690  438245 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:17:14.690179  438245 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:17:14.690626  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:17:14.690789  438245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:17:14.690823  438245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:17:14.690938  438245 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:17:14.690958  438245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 19:17:14.690979  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:17:14.692114  438245 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 19:17:11.902284  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:13.903205  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:16.402298  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:14.693147  438245 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 19:17:14.693163  438245 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 19:17:14.693182  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:17:14.694601  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:17:14.695302  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:17:14.695333  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:17:14.695541  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:17:14.695760  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:17:14.696133  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:17:14.696303  438245 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa Username:docker}
	I0819 19:17:14.696554  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:17:14.696979  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:17:14.697003  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:17:14.697110  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:17:14.697274  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:17:14.697445  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:17:14.697578  438245 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa Username:docker}
	I0819 19:17:14.708592  438245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38807
	I0819 19:17:14.709140  438245 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:17:14.709716  438245 main.go:141] libmachine: Using API Version  1
	I0819 19:17:14.709737  438245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:17:14.710049  438245 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:17:14.710269  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetState
	I0819 19:17:14.711887  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:17:14.712147  438245 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 19:17:14.712162  438245 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 19:17:14.712179  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:17:14.715593  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:17:14.716040  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:17:14.716062  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:17:14.716384  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:17:14.716561  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:17:14.716710  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:17:14.716938  438245 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa Username:docker}
	I0819 19:17:14.874857  438245 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:17:14.903798  438245 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-982795" to be "Ready" ...
	I0819 19:17:14.919842  438245 node_ready.go:49] node "default-k8s-diff-port-982795" has status "Ready":"True"
	I0819 19:17:14.919866  438245 node_ready.go:38] duration metric: took 16.039402ms for node "default-k8s-diff-port-982795" to be "Ready" ...
	I0819 19:17:14.919877  438245 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:17:14.932785  438245 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-845gx" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:15.019664  438245 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 19:17:15.019718  438245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 19:17:15.030317  438245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 19:17:15.056177  438245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:17:15.074202  438245 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 19:17:15.074235  438245 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 19:17:15.127037  438245 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 19:17:15.127071  438245 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 19:17:15.217951  438245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 19:17:15.351034  438245 main.go:141] libmachine: Making call to close driver server
	I0819 19:17:15.351067  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .Close
	I0819 19:17:15.351398  438245 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:17:15.351417  438245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:17:15.351429  438245 main.go:141] libmachine: Making call to close driver server
	I0819 19:17:15.351441  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .Close
	I0819 19:17:15.351678  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | Closing plugin on server side
	I0819 19:17:15.351728  438245 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:17:15.351750  438245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:17:15.357999  438245 main.go:141] libmachine: Making call to close driver server
	I0819 19:17:15.358023  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .Close
	I0819 19:17:15.358291  438245 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:17:15.358316  438245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:17:16.196638  438245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.140417152s)
	I0819 19:17:16.196694  438245 main.go:141] libmachine: Making call to close driver server
	I0819 19:17:16.196707  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .Close
	I0819 19:17:16.197022  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | Closing plugin on server side
	I0819 19:17:16.197112  438245 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:17:16.197137  438245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:17:16.197157  438245 main.go:141] libmachine: Making call to close driver server
	I0819 19:17:16.197167  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .Close
	I0819 19:17:16.197449  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | Closing plugin on server side
	I0819 19:17:16.197493  438245 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:17:16.197505  438245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:17:16.638069  438245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.42006496s)
	I0819 19:17:16.638141  438245 main.go:141] libmachine: Making call to close driver server
	I0819 19:17:16.638159  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .Close
	I0819 19:17:16.638488  438245 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:17:16.638518  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | Closing plugin on server side
	I0819 19:17:16.638529  438245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:17:16.638564  438245 main.go:141] libmachine: Making call to close driver server
	I0819 19:17:16.638574  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .Close
	I0819 19:17:16.638861  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | Closing plugin on server side
	I0819 19:17:16.638896  438245 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:17:16.638904  438245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:17:16.638915  438245 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-982795"
	I0819 19:17:16.641476  438245 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0819 19:17:16.642733  438245 addons.go:510] duration metric: took 1.998196502s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0819 19:17:16.954631  438245 pod_ready.go:103] pod "coredns-6f6b679f8f-845gx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:18.489333  438295 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0819 19:17:18.494609  438295 api_server.go:279] https://192.168.72.96:8443/healthz returned 200:
	ok
	I0819 19:17:18.495587  438295 api_server.go:141] control plane version: v1.31.0
	I0819 19:17:18.495613  438295 api_server.go:131] duration metric: took 11.510793296s to wait for apiserver health ...
	I0819 19:17:18.495624  438295 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:17:18.495656  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:17:18.495735  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:17:18.540446  438295 cri.go:89] found id: "d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a"
	I0819 19:17:18.540477  438295 cri.go:89] found id: ""
	I0819 19:17:18.540487  438295 logs.go:276] 1 containers: [d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a]
	I0819 19:17:18.540555  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:18.551443  438295 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:17:18.551527  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:17:18.592388  438295 cri.go:89] found id: "a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672"
	I0819 19:17:18.592416  438295 cri.go:89] found id: ""
	I0819 19:17:18.592427  438295 logs.go:276] 1 containers: [a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672]
	I0819 19:17:18.592495  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:18.597534  438295 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:17:18.597615  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:17:18.637782  438295 cri.go:89] found id: "a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f"
	I0819 19:17:18.637804  438295 cri.go:89] found id: ""
	I0819 19:17:18.637812  438295 logs.go:276] 1 containers: [a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f]
	I0819 19:17:18.637861  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:18.642557  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:17:18.642618  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:17:18.679573  438295 cri.go:89] found id: "c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22"
	I0819 19:17:18.679597  438295 cri.go:89] found id: ""
	I0819 19:17:18.679605  438295 logs.go:276] 1 containers: [c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22]
	I0819 19:17:18.679657  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:18.684160  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:17:18.684230  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:17:18.726848  438295 cri.go:89] found id: "3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338"
	I0819 19:17:18.726881  438295 cri.go:89] found id: ""
	I0819 19:17:18.726889  438295 logs.go:276] 1 containers: [3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338]
	I0819 19:17:18.726943  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:18.731422  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:17:18.731484  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:17:18.773623  438295 cri.go:89] found id: "6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071"
	I0819 19:17:18.773649  438295 cri.go:89] found id: ""
	I0819 19:17:18.773658  438295 logs.go:276] 1 containers: [6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071]
	I0819 19:17:18.773709  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:18.779609  438295 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:17:18.779687  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:17:18.822876  438295 cri.go:89] found id: ""
	I0819 19:17:18.822911  438295 logs.go:276] 0 containers: []
	W0819 19:17:18.822922  438295 logs.go:278] No container was found matching "kindnet"
	I0819 19:17:18.822931  438295 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 19:17:18.822998  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 19:17:18.868653  438295 cri.go:89] found id: "902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8"
	I0819 19:17:18.868685  438295 cri.go:89] found id: "44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6"
	I0819 19:17:18.868691  438295 cri.go:89] found id: ""
	I0819 19:17:18.868701  438295 logs.go:276] 2 containers: [902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8 44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6]
	I0819 19:17:18.868776  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:18.873136  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:18.877397  438295 logs.go:123] Gathering logs for kube-proxy [3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338] ...
	I0819 19:17:18.877425  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338"
	I0819 19:17:18.918085  438295 logs.go:123] Gathering logs for kube-controller-manager [6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071] ...
	I0819 19:17:18.918118  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071"
	I0819 19:17:18.973344  438295 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:17:18.973378  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:17:18.901539  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:20.902550  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:19.440295  438245 pod_ready.go:103] pod "coredns-6f6b679f8f-845gx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:21.939652  438245 pod_ready.go:103] pod "coredns-6f6b679f8f-845gx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:19.443625  438295 logs.go:123] Gathering logs for container status ...
	I0819 19:17:19.443689  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:17:19.492650  438295 logs.go:123] Gathering logs for dmesg ...
	I0819 19:17:19.492696  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:17:19.507957  438295 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:17:19.507996  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 19:17:19.617295  438295 logs.go:123] Gathering logs for coredns [a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f] ...
	I0819 19:17:19.617341  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f"
	I0819 19:17:19.669869  438295 logs.go:123] Gathering logs for kube-scheduler [c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22] ...
	I0819 19:17:19.669930  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22"
	I0819 19:17:19.706649  438295 logs.go:123] Gathering logs for storage-provisioner [44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6] ...
	I0819 19:17:19.706681  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6"
	I0819 19:17:19.746742  438295 logs.go:123] Gathering logs for kubelet ...
	I0819 19:17:19.746780  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 19:17:19.796224  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.671901     936 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:17:19.796442  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672098     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	W0819 19:17:19.796622  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.672624     936 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:17:19.796845  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672667     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	I0819 19:17:19.836283  438295 logs.go:123] Gathering logs for kube-apiserver [d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a] ...
	I0819 19:17:19.836328  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a"
	I0819 19:17:19.889829  438295 logs.go:123] Gathering logs for etcd [a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672] ...
	I0819 19:17:19.889875  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672"
	I0819 19:17:19.938361  438295 logs.go:123] Gathering logs for storage-provisioner [902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8] ...
	I0819 19:17:19.938397  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8"
	I0819 19:17:19.978525  438295 out.go:358] Setting ErrFile to fd 2...
	I0819 19:17:19.978557  438295 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 19:17:19.978628  438295 out.go:270] X Problems detected in kubelet:
	W0819 19:17:19.978642  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.671901     936 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:17:19.978656  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672098     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	W0819 19:17:19.978669  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.672624     936 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:17:19.978680  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672667     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	I0819 19:17:19.978690  438295 out.go:358] Setting ErrFile to fd 2...
	I0819 19:17:19.978699  438295 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:17:23.941399  438245 pod_ready.go:93] pod "coredns-6f6b679f8f-845gx" in "kube-system" namespace has status "Ready":"True"
	I0819 19:17:23.941426  438245 pod_ready.go:82] duration metric: took 9.00859927s for pod "coredns-6f6b679f8f-845gx" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.941438  438245 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-tlxtt" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.946827  438245 pod_ready.go:93] pod "coredns-6f6b679f8f-tlxtt" in "kube-system" namespace has status "Ready":"True"
	I0819 19:17:23.946848  438245 pod_ready.go:82] duration metric: took 5.40058ms for pod "coredns-6f6b679f8f-tlxtt" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.946859  438245 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.956158  438245 pod_ready.go:93] pod "etcd-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"True"
	I0819 19:17:23.956181  438245 pod_ready.go:82] duration metric: took 9.312871ms for pod "etcd-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.956193  438245 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.962573  438245 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"True"
	I0819 19:17:23.962595  438245 pod_ready.go:82] duration metric: took 6.3934ms for pod "kube-apiserver-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.962607  438245 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.968186  438245 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"True"
	I0819 19:17:23.968206  438245 pod_ready.go:82] duration metric: took 5.591464ms for pod "kube-controller-manager-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.968214  438245 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2v4hk" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:24.337409  438245 pod_ready.go:93] pod "kube-proxy-2v4hk" in "kube-system" namespace has status "Ready":"True"
	I0819 19:17:24.337443  438245 pod_ready.go:82] duration metric: took 369.220318ms for pod "kube-proxy-2v4hk" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:24.337460  438245 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:24.737326  438245 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"True"
	I0819 19:17:24.737362  438245 pod_ready.go:82] duration metric: took 399.891804ms for pod "kube-scheduler-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:24.737375  438245 pod_ready.go:39] duration metric: took 9.817484404s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:17:24.737396  438245 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:17:24.737467  438245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:17:24.753681  438245 api_server.go:72] duration metric: took 10.109231411s to wait for apiserver process to appear ...
	I0819 19:17:24.753711  438245 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:17:24.753734  438245 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8444/healthz ...
	I0819 19:17:24.757976  438245 api_server.go:279] https://192.168.61.48:8444/healthz returned 200:
	ok
	I0819 19:17:24.758875  438245 api_server.go:141] control plane version: v1.31.0
	I0819 19:17:24.758899  438245 api_server.go:131] duration metric: took 5.179486ms to wait for apiserver health ...
	I0819 19:17:24.758908  438245 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:17:24.944008  438245 system_pods.go:59] 9 kube-system pods found
	I0819 19:17:24.944053  438245 system_pods.go:61] "coredns-6f6b679f8f-845gx" [95155dd2-d46c-4445-b735-26eae16aaff9] Running
	I0819 19:17:24.944058  438245 system_pods.go:61] "coredns-6f6b679f8f-tlxtt" [150ac4be-bef1-4f0a-ab16-f085284686cb] Running
	I0819 19:17:24.944062  438245 system_pods.go:61] "etcd-default-k8s-diff-port-982795" [eb29f445-6242-4b60-a8d5-7c684df17926] Running
	I0819 19:17:24.944066  438245 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-982795" [2add6270-bf14-43e7-834b-3e629f46efa3] Running
	I0819 19:17:24.944070  438245 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-982795" [6b636d4b-0efa-4cef-b0d4-d4539ddc5c90] Running
	I0819 19:17:24.944073  438245 system_pods.go:61] "kube-proxy-2v4hk" [042d5d54-6557-4d8e-8f4e-2d56e95882ce] Running
	I0819 19:17:24.944076  438245 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-982795" [6eff3815-26b3-4e95-a754-2dc65fd29126] Running
	I0819 19:17:24.944082  438245 system_pods.go:61] "metrics-server-6867b74b74-2dp5r" [04e0ce68-d9a2-426a-a0e9-47f6f7867efd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:17:24.944086  438245 system_pods.go:61] "storage-provisioner" [23fcea86-977e-4eb1-9e5a-23d6bdfb09c0] Running
	I0819 19:17:24.944094  438245 system_pods.go:74] duration metric: took 185.180015ms to wait for pod list to return data ...
	I0819 19:17:24.944104  438245 default_sa.go:34] waiting for default service account to be created ...
	I0819 19:17:25.137108  438245 default_sa.go:45] found service account: "default"
	I0819 19:17:25.137147  438245 default_sa.go:55] duration metric: took 193.033434ms for default service account to be created ...
	I0819 19:17:25.137160  438245 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 19:17:25.340115  438245 system_pods.go:86] 9 kube-system pods found
	I0819 19:17:25.340146  438245 system_pods.go:89] "coredns-6f6b679f8f-845gx" [95155dd2-d46c-4445-b735-26eae16aaff9] Running
	I0819 19:17:25.340155  438245 system_pods.go:89] "coredns-6f6b679f8f-tlxtt" [150ac4be-bef1-4f0a-ab16-f085284686cb] Running
	I0819 19:17:25.340161  438245 system_pods.go:89] "etcd-default-k8s-diff-port-982795" [eb29f445-6242-4b60-a8d5-7c684df17926] Running
	I0819 19:17:25.340167  438245 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-982795" [2add6270-bf14-43e7-834b-3e629f46efa3] Running
	I0819 19:17:25.340173  438245 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-982795" [6b636d4b-0efa-4cef-b0d4-d4539ddc5c90] Running
	I0819 19:17:25.340177  438245 system_pods.go:89] "kube-proxy-2v4hk" [042d5d54-6557-4d8e-8f4e-2d56e95882ce] Running
	I0819 19:17:25.340182  438245 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-982795" [6eff3815-26b3-4e95-a754-2dc65fd29126] Running
	I0819 19:17:25.340192  438245 system_pods.go:89] "metrics-server-6867b74b74-2dp5r" [04e0ce68-d9a2-426a-a0e9-47f6f7867efd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:17:25.340198  438245 system_pods.go:89] "storage-provisioner" [23fcea86-977e-4eb1-9e5a-23d6bdfb09c0] Running
	I0819 19:17:25.340211  438245 system_pods.go:126] duration metric: took 203.044324ms to wait for k8s-apps to be running ...
	I0819 19:17:25.340224  438245 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 19:17:25.340278  438245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:17:25.355190  438245 system_svc.go:56] duration metric: took 14.954269ms WaitForService to wait for kubelet
	I0819 19:17:25.355223  438245 kubeadm.go:582] duration metric: took 10.710777567s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 19:17:25.355252  438245 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:17:25.537425  438245 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:17:25.537459  438245 node_conditions.go:123] node cpu capacity is 2
	I0819 19:17:25.537472  438245 node_conditions.go:105] duration metric: took 182.213218ms to run NodePressure ...
	I0819 19:17:25.537491  438245 start.go:241] waiting for startup goroutines ...
	I0819 19:17:25.537501  438245 start.go:246] waiting for cluster config update ...
	I0819 19:17:25.537516  438245 start.go:255] writing updated cluster config ...
	I0819 19:17:25.537851  438245 ssh_runner.go:195] Run: rm -f paused
	I0819 19:17:25.589212  438245 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 19:17:25.591352  438245 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-982795" cluster and "default" namespace by default
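	(The readiness sequence above ends with minikube polling the apiserver's /healthz endpoint, here https://192.168.61.48:8444/healthz, until it returns 200 before declaring the cluster ready. A minimal sketch of that kind of poll, assuming an illustrative URL and timeout and an insecure TLS client purely for demonstration, not minikube's actual implementation:)

```go
// healthzpoll: repeatedly GET a /healthz endpoint until it returns HTTP 200
// or a deadline passes, mirroring the api_server.go "Checking apiserver healthz"
// lines above. URL and timeouts here are illustrative assumptions.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	// Certificate verification is skipped only because this sketch has no
	// cluster CA; a real client would trust the apiserver's CA bundle.
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: control plane is reachable
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.48:8444/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ok")
}
```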
	I0819 19:17:22.902846  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:25.401911  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:29.988042  438295 system_pods.go:59] 8 kube-system pods found
	I0819 19:17:29.988074  438295 system_pods.go:61] "coredns-6f6b679f8f-7ww4z" [bbde00d4-6027-4d8d-b51e-bd68915da166] Running
	I0819 19:17:29.988080  438295 system_pods.go:61] "etcd-embed-certs-024748" [846ff0f0-5399-43fd-8e7b-1f64997cd291] Running
	I0819 19:17:29.988084  438295 system_pods.go:61] "kube-apiserver-embed-certs-024748" [3ff558d6-e82e-47a0-bb81-15244bee6470] Running
	I0819 19:17:29.988088  438295 system_pods.go:61] "kube-controller-manager-embed-certs-024748" [993b82ba-e8e7-4896-a06b-87c4f08d5985] Running
	I0819 19:17:29.988092  438295 system_pods.go:61] "kube-proxy-bmmbh" [1f77f152-f5f4-40f6-9632-1eaa36b9ea31] Running
	I0819 19:17:29.988095  438295 system_pods.go:61] "kube-scheduler-embed-certs-024748" [34684d4c-2479-45c5-883b-158cf9f974f5] Running
	I0819 19:17:29.988100  438295 system_pods.go:61] "metrics-server-6867b74b74-kxcwh" [15f86629-d916-4fdc-9ecf-9cb1b6c83f85] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:17:29.988104  438295 system_pods.go:61] "storage-provisioner" [7acb6ce1-21b6-4cdd-a5cb-76d694fc0a38] Running
	I0819 19:17:29.988113  438295 system_pods.go:74] duration metric: took 11.492481541s to wait for pod list to return data ...
	I0819 19:17:29.988120  438295 default_sa.go:34] waiting for default service account to be created ...
	I0819 19:17:29.991728  438295 default_sa.go:45] found service account: "default"
	I0819 19:17:29.991755  438295 default_sa.go:55] duration metric: took 3.62838ms for default service account to be created ...
	I0819 19:17:29.991764  438295 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 19:17:29.997212  438295 system_pods.go:86] 8 kube-system pods found
	I0819 19:17:29.997237  438295 system_pods.go:89] "coredns-6f6b679f8f-7ww4z" [bbde00d4-6027-4d8d-b51e-bd68915da166] Running
	I0819 19:17:29.997243  438295 system_pods.go:89] "etcd-embed-certs-024748" [846ff0f0-5399-43fd-8e7b-1f64997cd291] Running
	I0819 19:17:29.997247  438295 system_pods.go:89] "kube-apiserver-embed-certs-024748" [3ff558d6-e82e-47a0-bb81-15244bee6470] Running
	I0819 19:17:29.997252  438295 system_pods.go:89] "kube-controller-manager-embed-certs-024748" [993b82ba-e8e7-4896-a06b-87c4f08d5985] Running
	I0819 19:17:29.997256  438295 system_pods.go:89] "kube-proxy-bmmbh" [1f77f152-f5f4-40f6-9632-1eaa36b9ea31] Running
	I0819 19:17:29.997260  438295 system_pods.go:89] "kube-scheduler-embed-certs-024748" [34684d4c-2479-45c5-883b-158cf9f974f5] Running
	I0819 19:17:29.997267  438295 system_pods.go:89] "metrics-server-6867b74b74-kxcwh" [15f86629-d916-4fdc-9ecf-9cb1b6c83f85] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:17:29.997270  438295 system_pods.go:89] "storage-provisioner" [7acb6ce1-21b6-4cdd-a5cb-76d694fc0a38] Running
	I0819 19:17:29.997277  438295 system_pods.go:126] duration metric: took 5.507363ms to wait for k8s-apps to be running ...
	I0819 19:17:29.997283  438295 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 19:17:29.997329  438295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:17:30.015349  438295 system_svc.go:56] duration metric: took 18.05422ms WaitForService to wait for kubelet
	I0819 19:17:30.015385  438295 kubeadm.go:582] duration metric: took 4m46.148274918s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 19:17:30.015408  438295 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:17:30.019744  438295 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:17:30.019767  438295 node_conditions.go:123] node cpu capacity is 2
	I0819 19:17:30.019779  438295 node_conditions.go:105] duration metric: took 4.364435ms to run NodePressure ...
	I0819 19:17:30.019791  438295 start.go:241] waiting for startup goroutines ...
	I0819 19:17:30.019798  438295 start.go:246] waiting for cluster config update ...
	I0819 19:17:30.019809  438295 start.go:255] writing updated cluster config ...
	I0819 19:17:30.020080  438295 ssh_runner.go:195] Run: rm -f paused
	I0819 19:17:30.071945  438295 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 19:17:30.073912  438295 out.go:177] * Done! kubectl is now configured to use "embed-certs-024748" cluster and "default" namespace by default
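	(The "Gathering logs for ..." blocks above follow one pattern per component: find container IDs with `crictl ps -a --quiet --name=<component>`, then dump the last 400 lines with `crictl logs --tail 400 <id>`. A rough local sketch of that loop, assuming crictl is on PATH; the report's code runs the same commands over SSH inside the minikube VM:)

```go
// crilogs: a sketch of the per-component log-gathering loop seen above.
// It shells out to crictl exactly as the ssh_runner lines do, but locally.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of all containers whose name matches `name`.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // one container ID per line
}

// tailLogs returns the last `lines` log lines of a container.
func tailLogs(id string, lines int) (string, error) {
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(lines), id).CombinedOutput()
	return string(out), err
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy"} {
		ids, err := containerIDs(name)
		if err != nil {
			fmt.Println("listing", name, "failed:", err)
			continue
		}
		for _, id := range ids {
			logs, _ := tailLogs(id, 400)
			fmt.Printf("== %s [%s] ==\n%s\n", name, id, logs)
		}
	}
}
```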
	I0819 19:17:27.901471  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:29.901560  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:32.401214  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:34.402184  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:36.901979  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:38.902132  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:41.401103  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:43.889122  438716 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 19:17:43.889226  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:17:43.889441  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:17:43.402531  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:45.402739  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:48.889647  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:17:48.889896  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:17:47.902033  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:48.402784  438001 pod_ready.go:82] duration metric: took 4m0.007573449s for pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace to be "Ready" ...
	E0819 19:17:48.402807  438001 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0819 19:17:48.402814  438001 pod_ready.go:39] duration metric: took 4m5.043625176s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:17:48.402837  438001 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:17:48.402866  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:17:48.402916  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:17:48.465049  438001 cri.go:89] found id: "cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094"
	I0819 19:17:48.465072  438001 cri.go:89] found id: ""
	I0819 19:17:48.465081  438001 logs.go:276] 1 containers: [cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094]
	I0819 19:17:48.465157  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:48.469640  438001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:17:48.469708  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:17:48.506800  438001 cri.go:89] found id: "27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a"
	I0819 19:17:48.506825  438001 cri.go:89] found id: ""
	I0819 19:17:48.506836  438001 logs.go:276] 1 containers: [27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a]
	I0819 19:17:48.506900  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:48.511810  438001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:17:48.511899  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:17:48.558215  438001 cri.go:89] found id: "6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa"
	I0819 19:17:48.558240  438001 cri.go:89] found id: ""
	I0819 19:17:48.558250  438001 logs.go:276] 1 containers: [6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa]
	I0819 19:17:48.558308  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:48.562785  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:17:48.562844  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:17:48.602715  438001 cri.go:89] found id: "123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f"
	I0819 19:17:48.602738  438001 cri.go:89] found id: ""
	I0819 19:17:48.602748  438001 logs.go:276] 1 containers: [123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f]
	I0819 19:17:48.602815  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:48.607456  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:17:48.607512  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:17:48.648285  438001 cri.go:89] found id: "236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a"
	I0819 19:17:48.648314  438001 cri.go:89] found id: ""
	I0819 19:17:48.648324  438001 logs.go:276] 1 containers: [236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a]
	I0819 19:17:48.648374  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:48.653772  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:17:48.653830  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:17:48.697336  438001 cri.go:89] found id: "390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346"
	I0819 19:17:48.697365  438001 cri.go:89] found id: ""
	I0819 19:17:48.697376  438001 logs.go:276] 1 containers: [390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346]
	I0819 19:17:48.697438  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:48.701661  438001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:17:48.701726  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:17:48.737952  438001 cri.go:89] found id: ""
	I0819 19:17:48.737990  438001 logs.go:276] 0 containers: []
	W0819 19:17:48.738002  438001 logs.go:278] No container was found matching "kindnet"
	I0819 19:17:48.738010  438001 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 19:17:48.738076  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 19:17:48.780047  438001 cri.go:89] found id: "fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd"
	I0819 19:17:48.780076  438001 cri.go:89] found id: "482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6"
	I0819 19:17:48.780082  438001 cri.go:89] found id: ""
	I0819 19:17:48.780092  438001 logs.go:276] 2 containers: [fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd 482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6]
	I0819 19:17:48.780168  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:48.784558  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:48.788803  438001 logs.go:123] Gathering logs for kube-apiserver [cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094] ...
	I0819 19:17:48.788826  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094"
	I0819 19:17:48.843469  438001 logs.go:123] Gathering logs for kube-scheduler [123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f] ...
	I0819 19:17:48.843501  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f"
	I0819 19:17:48.884461  438001 logs.go:123] Gathering logs for kube-proxy [236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a] ...
	I0819 19:17:48.884495  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a"
	I0819 19:17:48.927064  438001 logs.go:123] Gathering logs for storage-provisioner [fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd] ...
	I0819 19:17:48.927093  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd"
	I0819 19:17:48.963812  438001 logs.go:123] Gathering logs for container status ...
	I0819 19:17:48.963845  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:17:49.017381  438001 logs.go:123] Gathering logs for kubelet ...
	I0819 19:17:49.017420  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:17:49.093572  438001 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:17:49.093614  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 19:17:49.236680  438001 logs.go:123] Gathering logs for coredns [6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa] ...
	I0819 19:17:49.236721  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa"
	I0819 19:17:49.274636  438001 logs.go:123] Gathering logs for kube-controller-manager [390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346] ...
	I0819 19:17:49.274677  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346"
	I0819 19:17:49.326208  438001 logs.go:123] Gathering logs for storage-provisioner [482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6] ...
	I0819 19:17:49.326242  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6"
	I0819 19:17:49.363589  438001 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:17:49.363628  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:17:49.841705  438001 logs.go:123] Gathering logs for dmesg ...
	I0819 19:17:49.841757  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:17:49.858466  438001 logs.go:123] Gathering logs for etcd [27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a] ...
	I0819 19:17:49.858504  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a"
	I0819 19:17:52.406197  438001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:17:52.422951  438001 api_server.go:72] duration metric: took 4m16.822246565s to wait for apiserver process to appear ...
	I0819 19:17:52.422981  438001 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:17:52.423019  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:17:52.423075  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:17:52.464305  438001 cri.go:89] found id: "cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094"
	I0819 19:17:52.464327  438001 cri.go:89] found id: ""
	I0819 19:17:52.464335  438001 logs.go:276] 1 containers: [cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094]
	I0819 19:17:52.464387  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:52.468824  438001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:17:52.468904  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:17:52.508907  438001 cri.go:89] found id: "27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a"
	I0819 19:17:52.508929  438001 cri.go:89] found id: ""
	I0819 19:17:52.508937  438001 logs.go:276] 1 containers: [27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a]
	I0819 19:17:52.508998  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:52.513206  438001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:17:52.513281  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:17:52.553908  438001 cri.go:89] found id: "6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa"
	I0819 19:17:52.553940  438001 cri.go:89] found id: ""
	I0819 19:17:52.553948  438001 logs.go:276] 1 containers: [6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa]
	I0819 19:17:52.554007  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:52.558420  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:17:52.558487  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:17:52.598450  438001 cri.go:89] found id: "123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f"
	I0819 19:17:52.598480  438001 cri.go:89] found id: ""
	I0819 19:17:52.598491  438001 logs.go:276] 1 containers: [123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f]
	I0819 19:17:52.598564  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:52.603421  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:17:52.603485  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:17:52.639017  438001 cri.go:89] found id: "236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a"
	I0819 19:17:52.639049  438001 cri.go:89] found id: ""
	I0819 19:17:52.639060  438001 logs.go:276] 1 containers: [236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a]
	I0819 19:17:52.639129  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:52.645313  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:17:52.645392  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:17:52.687266  438001 cri.go:89] found id: "390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346"
	I0819 19:17:52.687296  438001 cri.go:89] found id: ""
	I0819 19:17:52.687305  438001 logs.go:276] 1 containers: [390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346]
	I0819 19:17:52.687369  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:52.691770  438001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:17:52.691830  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:17:52.734067  438001 cri.go:89] found id: ""
	I0819 19:17:52.734098  438001 logs.go:276] 0 containers: []
	W0819 19:17:52.734107  438001 logs.go:278] No container was found matching "kindnet"
	I0819 19:17:52.734113  438001 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 19:17:52.734171  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 19:17:52.781039  438001 cri.go:89] found id: "fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd"
	I0819 19:17:52.781062  438001 cri.go:89] found id: "482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6"
	I0819 19:17:52.781066  438001 cri.go:89] found id: ""
	I0819 19:17:52.781074  438001 logs.go:276] 2 containers: [fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd 482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6]
	I0819 19:17:52.781135  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:52.785730  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:52.789946  438001 logs.go:123] Gathering logs for kube-scheduler [123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f] ...
	I0819 19:17:52.789978  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f"
	I0819 19:17:52.830509  438001 logs.go:123] Gathering logs for kube-controller-manager [390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346] ...
	I0819 19:17:52.830541  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346"
	I0819 19:17:52.892964  438001 logs.go:123] Gathering logs for container status ...
	I0819 19:17:52.893017  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:17:52.947999  438001 logs.go:123] Gathering logs for kubelet ...
	I0819 19:17:52.948028  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:17:53.019377  438001 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:17:53.019423  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 19:17:53.134032  438001 logs.go:123] Gathering logs for kube-apiserver [cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094] ...
	I0819 19:17:53.134069  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094"
	I0819 19:17:53.186159  438001 logs.go:123] Gathering logs for etcd [27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a] ...
	I0819 19:17:53.186193  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a"
	I0819 19:17:53.236918  438001 logs.go:123] Gathering logs for storage-provisioner [482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6] ...
	I0819 19:17:53.236949  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6"
	I0819 19:17:53.275211  438001 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:17:53.275242  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:17:53.710352  438001 logs.go:123] Gathering logs for dmesg ...
	I0819 19:17:53.710396  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:17:53.726691  438001 logs.go:123] Gathering logs for coredns [6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa] ...
	I0819 19:17:53.726731  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa"
	I0819 19:17:53.768322  438001 logs.go:123] Gathering logs for kube-proxy [236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a] ...
	I0819 19:17:53.768361  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a"
	I0819 19:17:53.808546  438001 logs.go:123] Gathering logs for storage-provisioner [fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd] ...
	I0819 19:17:53.808577  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd"
	I0819 19:17:56.362339  438001 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I0819 19:17:56.366636  438001 api_server.go:279] https://192.168.39.106:8443/healthz returned 200:
	ok
	I0819 19:17:56.367838  438001 api_server.go:141] control plane version: v1.31.0
	I0819 19:17:56.367867  438001 api_server.go:131] duration metric: took 3.944877317s to wait for apiserver health ...
	I0819 19:17:56.367891  438001 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:17:56.367925  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:17:56.367991  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:17:56.412151  438001 cri.go:89] found id: "cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094"
	I0819 19:17:56.412179  438001 cri.go:89] found id: ""
	I0819 19:17:56.412187  438001 logs.go:276] 1 containers: [cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094]
	I0819 19:17:56.412247  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:56.416620  438001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:17:56.416795  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:17:56.456888  438001 cri.go:89] found id: "27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a"
	I0819 19:17:56.456918  438001 cri.go:89] found id: ""
	I0819 19:17:56.456927  438001 logs.go:276] 1 containers: [27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a]
	I0819 19:17:56.456984  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:56.461563  438001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:17:56.461667  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:17:56.506990  438001 cri.go:89] found id: "6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa"
	I0819 19:17:56.507018  438001 cri.go:89] found id: ""
	I0819 19:17:56.507028  438001 logs.go:276] 1 containers: [6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa]
	I0819 19:17:56.507099  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:56.511547  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:17:56.511616  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:17:56.551734  438001 cri.go:89] found id: "123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f"
	I0819 19:17:56.551761  438001 cri.go:89] found id: ""
	I0819 19:17:56.551772  438001 logs.go:276] 1 containers: [123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f]
	I0819 19:17:56.551837  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:56.556963  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:17:56.557039  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:17:56.601862  438001 cri.go:89] found id: "236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a"
	I0819 19:17:56.601892  438001 cri.go:89] found id: ""
	I0819 19:17:56.601902  438001 logs.go:276] 1 containers: [236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a]
	I0819 19:17:56.601971  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:56.606618  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:17:56.606706  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:17:56.649476  438001 cri.go:89] found id: "390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346"
	I0819 19:17:56.649501  438001 cri.go:89] found id: ""
	I0819 19:17:56.649510  438001 logs.go:276] 1 containers: [390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346]
	I0819 19:17:56.649561  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:56.654009  438001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:17:56.654071  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:17:56.707479  438001 cri.go:89] found id: ""
	I0819 19:17:56.707506  438001 logs.go:276] 0 containers: []
	W0819 19:17:56.707518  438001 logs.go:278] No container was found matching "kindnet"
	I0819 19:17:56.707527  438001 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 19:17:56.707585  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 19:17:56.749937  438001 cri.go:89] found id: "fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd"
	I0819 19:17:56.749961  438001 cri.go:89] found id: "482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6"
	I0819 19:17:56.749966  438001 cri.go:89] found id: ""
	I0819 19:17:56.749973  438001 logs.go:276] 2 containers: [fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd 482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6]
	I0819 19:17:56.750026  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:56.754791  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:56.758672  438001 logs.go:123] Gathering logs for etcd [27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a] ...
	I0819 19:17:56.758700  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a"
	I0819 19:17:56.811420  438001 logs.go:123] Gathering logs for kube-controller-manager [390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346] ...
	I0819 19:17:56.811461  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346"
	I0819 19:17:56.871550  438001 logs.go:123] Gathering logs for storage-provisioner [482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6] ...
	I0819 19:17:56.871588  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6"
	I0819 19:17:56.918183  438001 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:17:56.918224  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:17:57.297614  438001 logs.go:123] Gathering logs for container status ...
	I0819 19:17:57.297653  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:17:57.339092  438001 logs.go:123] Gathering logs for dmesg ...
	I0819 19:17:57.339127  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:17:57.355787  438001 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:17:57.355820  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 19:17:57.486287  438001 logs.go:123] Gathering logs for kube-apiserver [cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094] ...
	I0819 19:17:57.486328  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094"
	I0819 19:17:57.535864  438001 logs.go:123] Gathering logs for coredns [6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa] ...
	I0819 19:17:57.535903  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa"
	I0819 19:17:57.577211  438001 logs.go:123] Gathering logs for kube-scheduler [123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f] ...
	I0819 19:17:57.577248  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f"
	I0819 19:17:57.615928  438001 logs.go:123] Gathering logs for kube-proxy [236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a] ...
	I0819 19:17:57.615962  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a"
	I0819 19:17:57.655413  438001 logs.go:123] Gathering logs for storage-provisioner [fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd] ...
	I0819 19:17:57.655445  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd"
	I0819 19:17:57.704470  438001 logs.go:123] Gathering logs for kubelet ...
	I0819 19:17:57.704502  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:18:00.281191  438001 system_pods.go:59] 8 kube-system pods found
	I0819 19:18:00.281223  438001 system_pods.go:61] "coredns-6f6b679f8f-22lbt" [c8a5cabd-41d4-41cb-91c1-2db1f3471db3] Running
	I0819 19:18:00.281228  438001 system_pods.go:61] "etcd-no-preload-278232" [36d555a1-33e4-4c6c-b24e-2fee4fd84f2b] Running
	I0819 19:18:00.281232  438001 system_pods.go:61] "kube-apiserver-no-preload-278232" [af7173e5-c4ac-4ece-b8b9-bb81cb6b9bfd] Running
	I0819 19:18:00.281235  438001 system_pods.go:61] "kube-controller-manager-no-preload-278232" [2463d97a-5221-40ce-8fd7-08151165d6f7] Running
	I0819 19:18:00.281238  438001 system_pods.go:61] "kube-proxy-rcf49" [85d5814a-1ba9-46be-ab11-17bf40c0f029] Running
	I0819 19:18:00.281241  438001 system_pods.go:61] "kube-scheduler-no-preload-278232" [3b327704-f70c-4d6f-a774-15427a305472] Running
	I0819 19:18:00.281247  438001 system_pods.go:61] "metrics-server-6867b74b74-vxwrs" [e8b74128-b393-4f0f-90fe-e05f20d54acd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:18:00.281252  438001 system_pods.go:61] "storage-provisioner" [24766475-1a5b-4f1a-9350-3e891b5272cc] Running
	I0819 19:18:00.281260  438001 system_pods.go:74] duration metric: took 3.913361626s to wait for pod list to return data ...
	I0819 19:18:00.281267  438001 default_sa.go:34] waiting for default service account to be created ...
	I0819 19:18:00.283873  438001 default_sa.go:45] found service account: "default"
	I0819 19:18:00.283898  438001 default_sa.go:55] duration metric: took 2.625775ms for default service account to be created ...
	I0819 19:18:00.283907  438001 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 19:18:00.288985  438001 system_pods.go:86] 8 kube-system pods found
	I0819 19:18:00.289012  438001 system_pods.go:89] "coredns-6f6b679f8f-22lbt" [c8a5cabd-41d4-41cb-91c1-2db1f3471db3] Running
	I0819 19:18:00.289018  438001 system_pods.go:89] "etcd-no-preload-278232" [36d555a1-33e4-4c6c-b24e-2fee4fd84f2b] Running
	I0819 19:18:00.289022  438001 system_pods.go:89] "kube-apiserver-no-preload-278232" [af7173e5-c4ac-4ece-b8b9-bb81cb6b9bfd] Running
	I0819 19:18:00.289028  438001 system_pods.go:89] "kube-controller-manager-no-preload-278232" [2463d97a-5221-40ce-8fd7-08151165d6f7] Running
	I0819 19:18:00.289033  438001 system_pods.go:89] "kube-proxy-rcf49" [85d5814a-1ba9-46be-ab11-17bf40c0f029] Running
	I0819 19:18:00.289038  438001 system_pods.go:89] "kube-scheduler-no-preload-278232" [3b327704-f70c-4d6f-a774-15427a305472] Running
	I0819 19:18:00.289047  438001 system_pods.go:89] "metrics-server-6867b74b74-vxwrs" [e8b74128-b393-4f0f-90fe-e05f20d54acd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:18:00.289056  438001 system_pods.go:89] "storage-provisioner" [24766475-1a5b-4f1a-9350-3e891b5272cc] Running
	I0819 19:18:00.289067  438001 system_pods.go:126] duration metric: took 5.154385ms to wait for k8s-apps to be running ...
	I0819 19:18:00.289081  438001 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 19:18:00.289132  438001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:18:00.307128  438001 system_svc.go:56] duration metric: took 18.036826ms WaitForService to wait for kubelet
	I0819 19:18:00.307160  438001 kubeadm.go:582] duration metric: took 4m24.706461383s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 19:18:00.307183  438001 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:18:00.309818  438001 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:18:00.309866  438001 node_conditions.go:123] node cpu capacity is 2
	I0819 19:18:00.309879  438001 node_conditions.go:105] duration metric: took 2.691554ms to run NodePressure ...
	I0819 19:18:00.309892  438001 start.go:241] waiting for startup goroutines ...
	I0819 19:18:00.309901  438001 start.go:246] waiting for cluster config update ...
	I0819 19:18:00.309918  438001 start.go:255] writing updated cluster config ...
	I0819 19:18:00.310268  438001 ssh_runner.go:195] Run: rm -f paused
	I0819 19:18:00.366211  438001 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 19:18:00.368280  438001 out.go:177] * Done! kubectl is now configured to use "no-preload-278232" cluster and "default" namespace by default
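	(Just before the cluster is reported ready, the node_conditions.go lines verify NodePressure by reading each node's ephemeral-storage and CPU capacity. A rough client-go sketch of that kind of check, assuming a kubeconfig path passed as the first CLI argument; this is an illustration, not the report's own code:)

```go
// nodepressure: list nodes and print CPU / ephemeral-storage capacity plus any
// pressure conditions, echoing the "verifying NodePressure condition" step above.
package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Args[1]) // kubeconfig path (assumed argument)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
		for _, c := range n.Status.Conditions {
			// Memory/Disk/PID pressure should report "False" on a healthy node.
			if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure || c.Type == corev1.NodePIDPressure {
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
}
```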
	I0819 19:17:58.890611  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:17:58.890832  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:18:18.891960  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:18:18.892243  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:18:58.894609  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:18:58.894854  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:18:58.894869  438716 kubeadm.go:310] 
	I0819 19:18:58.894912  438716 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 19:18:58.894967  438716 kubeadm.go:310] 		timed out waiting for the condition
	I0819 19:18:58.894981  438716 kubeadm.go:310] 
	I0819 19:18:58.895024  438716 kubeadm.go:310] 	This error is likely caused by:
	I0819 19:18:58.895072  438716 kubeadm.go:310] 		- The kubelet is not running
	I0819 19:18:58.895344  438716 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 19:18:58.895388  438716 kubeadm.go:310] 
	I0819 19:18:58.895518  438716 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 19:18:58.895613  438716 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 19:18:58.895668  438716 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 19:18:58.895695  438716 kubeadm.go:310] 
	I0819 19:18:58.895839  438716 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 19:18:58.895959  438716 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 19:18:58.895972  438716 kubeadm.go:310] 
	I0819 19:18:58.896072  438716 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 19:18:58.896154  438716 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 19:18:58.896220  438716 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 19:18:58.896284  438716 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 19:18:58.896314  438716 kubeadm.go:310] 
	I0819 19:18:58.896819  438716 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 19:18:58.896946  438716 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 19:18:58.897028  438716 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0819 19:18:58.897193  438716 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0819 19:18:58.897249  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 19:18:59.361073  438716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:18:59.375791  438716 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:18:59.387650  438716 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:18:59.387697  438716 kubeadm.go:157] found existing configuration files:
	
	I0819 19:18:59.387756  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 19:18:59.397345  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:18:59.397409  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:18:59.408060  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 19:18:59.417658  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:18:59.417731  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:18:59.427765  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 19:18:59.437636  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:18:59.437712  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:18:59.447506  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 19:18:59.457100  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:18:59.457165  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 19:18:59.467185  438716 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 19:18:59.540706  438716 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 19:18:59.541005  438716 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 19:18:59.694109  438716 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 19:18:59.694238  438716 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 19:18:59.694350  438716 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 19:18:59.874268  438716 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 19:18:59.876259  438716 out.go:235]   - Generating certificates and keys ...
	I0819 19:18:59.876362  438716 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 19:18:59.876441  438716 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 19:18:59.876569  438716 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 19:18:59.876654  438716 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 19:18:59.876751  438716 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 19:18:59.876824  438716 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 19:18:59.876900  438716 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 19:18:59.877076  438716 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 19:18:59.877571  438716 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 19:18:59.877997  438716 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 19:18:59.878139  438716 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 19:18:59.878241  438716 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 19:19:00.153380  438716 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 19:19:00.359863  438716 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 19:19:00.470797  438716 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 19:19:00.590041  438716 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 19:19:00.614332  438716 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 19:19:00.615415  438716 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 19:19:00.615473  438716 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 19:19:00.756167  438716 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 19:19:00.757737  438716 out.go:235]   - Booting up control plane ...
	I0819 19:19:00.757873  438716 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 19:19:00.761484  438716 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 19:19:00.762431  438716 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 19:19:00.763241  438716 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 19:19:00.766155  438716 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 19:19:40.770166  438716 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 19:19:40.770378  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:19:40.770543  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:19:45.771352  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:19:45.771587  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:19:55.772027  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:19:55.772243  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:20:15.773008  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:20:15.773238  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:20:55.771311  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:20:55.771517  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:20:55.771530  438716 kubeadm.go:310] 
	I0819 19:20:55.771578  438716 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 19:20:55.771750  438716 kubeadm.go:310] 		timed out waiting for the condition
	I0819 19:20:55.771784  438716 kubeadm.go:310] 
	I0819 19:20:55.771845  438716 kubeadm.go:310] 	This error is likely caused by:
	I0819 19:20:55.771891  438716 kubeadm.go:310] 		- The kubelet is not running
	I0819 19:20:55.772014  438716 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 19:20:55.772027  438716 kubeadm.go:310] 
	I0819 19:20:55.772125  438716 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 19:20:55.772162  438716 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 19:20:55.772188  438716 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 19:20:55.772196  438716 kubeadm.go:310] 
	I0819 19:20:55.772272  438716 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 19:20:55.772336  438716 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 19:20:55.772343  438716 kubeadm.go:310] 
	I0819 19:20:55.772439  438716 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 19:20:55.772520  438716 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 19:20:55.772581  438716 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 19:20:55.772637  438716 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 19:20:55.772645  438716 kubeadm.go:310] 
	I0819 19:20:55.773758  438716 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 19:20:55.773880  438716 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 19:20:55.773971  438716 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0819 19:20:55.774067  438716 kubeadm.go:394] duration metric: took 7m57.361589371s to StartCluster
	I0819 19:20:55.774157  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:20:55.774243  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:20:55.818428  438716 cri.go:89] found id: ""
	I0819 19:20:55.818460  438716 logs.go:276] 0 containers: []
	W0819 19:20:55.818468  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:20:55.818475  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:20:55.818535  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:20:55.857714  438716 cri.go:89] found id: ""
	I0819 19:20:55.857747  438716 logs.go:276] 0 containers: []
	W0819 19:20:55.857758  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:20:55.857766  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:20:55.857841  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:20:55.891917  438716 cri.go:89] found id: ""
	I0819 19:20:55.891948  438716 logs.go:276] 0 containers: []
	W0819 19:20:55.891967  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:20:55.891976  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:20:55.892046  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:20:55.930608  438716 cri.go:89] found id: ""
	I0819 19:20:55.930643  438716 logs.go:276] 0 containers: []
	W0819 19:20:55.930656  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:20:55.930665  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:20:55.930734  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:20:55.966563  438716 cri.go:89] found id: ""
	I0819 19:20:55.966591  438716 logs.go:276] 0 containers: []
	W0819 19:20:55.966600  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:20:55.966607  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:20:55.966670  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:20:56.010392  438716 cri.go:89] found id: ""
	I0819 19:20:56.010421  438716 logs.go:276] 0 containers: []
	W0819 19:20:56.010430  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:20:56.010436  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:20:56.010491  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:20:56.066940  438716 cri.go:89] found id: ""
	I0819 19:20:56.066973  438716 logs.go:276] 0 containers: []
	W0819 19:20:56.066985  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:20:56.066994  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:20:56.067062  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:20:56.118852  438716 cri.go:89] found id: ""
	I0819 19:20:56.118881  438716 logs.go:276] 0 containers: []
	W0819 19:20:56.118894  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:20:56.118909  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:20:56.118925  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:20:56.158224  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:20:56.158263  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:20:56.211882  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:20:56.211925  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:20:56.228082  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:20:56.228124  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:20:56.307857  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:20:56.307880  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:20:56.307893  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0819 19:20:56.414797  438716 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0819 19:20:56.414885  438716 out.go:270] * 
	W0819 19:20:56.415020  438716 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 19:20:56.415039  438716 out.go:270] * 
	W0819 19:20:56.416031  438716 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 19:20:56.419869  438716 out.go:201] 
	W0819 19:20:56.421262  438716 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 19:20:56.421319  438716 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0819 19:20:56.421351  438716 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0819 19:20:56.422942  438716 out.go:201] 
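The failed start above ends with kubeadm's own triage advice and minikube's suggestion to retry with the systemd cgroup driver. A minimal sketch of that path, assuming SSH access through a placeholder profile name <profile> (the failing cluster's name is not shown in this excerpt); every command is the one quoted in the log, only the profile name is hypothetical:

	# inspect kubelet on the node (kubeadm's suggested commands)
	minikube -p <profile> ssh -- sudo systemctl status kubelet
	minikube -p <profile> ssh -- sudo journalctl -xeu kubelet
	# list control-plane containers via CRI-O, as the kubeadm message shows
	minikube -p <profile> ssh -- "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# retry with the cgroup-driver override suggested at the end of the log
	minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd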
	
	
	==> CRI-O <==
	Aug 19 19:26:32 embed-certs-024748 crio[729]: time="2024-08-19 19:26:32.145163041Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095592145142747,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3eefcc35-6522-4110-bf29-6ba1e5822253 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:26:32 embed-certs-024748 crio[729]: time="2024-08-19 19:26:32.145988913Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c1212f56-37c2-48a6-80c2-8b85329ec063 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:26:32 embed-certs-024748 crio[729]: time="2024-08-19 19:26:32.146059327Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c1212f56-37c2-48a6-80c2-8b85329ec063 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:26:32 embed-certs-024748 crio[729]: time="2024-08-19 19:26:32.146951488Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8,PodSandboxId:2cd56c89cb3500385d16c5b82561348e2422ac59ce004cda825f81be1d188ece,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724094792920726029,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7acb6ce1-21b6-4cdd-a5cb-76d694fc0a38,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89e69d6f405cec355f5cd65f38a963570166553b8598f4fca5b73a80d437338d,PodSandboxId:bf6fc22a7831f2da0f48530f74acaeb6bd79a7a1af15d958a29a142868066ff6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724094771757901345,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5ea66261-4ba9-4b4c-9d2f-4ad3490d3ed5,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f,PodSandboxId:0c91d6c776a7fded12e3aaa9edf57d7888401809c72d077dc752208355bfb3cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724094768498234032,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-7ww4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbde00d4-6027-4d8d-b51e-bd68915da166,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338,PodSandboxId:472436dd2272fa86a82a775f24e7cf1ddccabcfd91d314c0adba450bd1bcb6c0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724094762648038232,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bmmbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f77f152-f5f4-40f6-9
632-1eaa36b9ea31,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6,PodSandboxId:2cd56c89cb3500385d16c5b82561348e2422ac59ce004cda825f81be1d188ece,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724094762645492196,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7acb6ce1-21b6-4cdd-a5cb-76d694fc0a
38,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22,PodSandboxId:f276ebca5e26f21d36a567d969915505ddf60b7ea37d9c8b78d529962a2fcc8d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724094757719644043,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-024748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e50e02ebe8c4a08870ffac68ea5d2832,},Annotat
ions:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071,PodSandboxId:0c32a47af88ab983143fe824a6c65ca5175d816ef2a93f2233540e92436fbae4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724094757709229899,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-024748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b58db56f9002dd73de08465fd3a
06c18,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a,PodSandboxId:9ecf2a88c0af30fddbde514ebf4371ab2edb96b5b3d009b04c544d1fecea9381,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724094757706929424,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-024748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfe3830de5cba7ee0cea7d338361cf28,},Anno
tations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672,PodSandboxId:f6cd7683df1d2eace86e8ace9e6f78d5db7173e03f1f652874fa0a76909a253c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724094757704514233,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-024748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ff95a4cad5e2fe07b2e8f0bc0f26a77,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c1212f56-37c2-48a6-80c2-8b85329ec063 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:26:32 embed-certs-024748 crio[729]: time="2024-08-19 19:26:32.188248305Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bfb115ef-c5a8-4933-b9a4-1a3230f8c71d name=/runtime.v1.RuntimeService/Version
	Aug 19 19:26:32 embed-certs-024748 crio[729]: time="2024-08-19 19:26:32.188410271Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bfb115ef-c5a8-4933-b9a4-1a3230f8c71d name=/runtime.v1.RuntimeService/Version
	Aug 19 19:26:32 embed-certs-024748 crio[729]: time="2024-08-19 19:26:32.190001056Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7407d8a1-a544-4937-a6c6-91f85d28b040 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:26:32 embed-certs-024748 crio[729]: time="2024-08-19 19:26:32.190454134Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095592190429473,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7407d8a1-a544-4937-a6c6-91f85d28b040 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:26:32 embed-certs-024748 crio[729]: time="2024-08-19 19:26:32.191070316Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d4a72206-5f82-49cd-b403-0193be65316d name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:26:32 embed-certs-024748 crio[729]: time="2024-08-19 19:26:32.191120634Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d4a72206-5f82-49cd-b403-0193be65316d name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:26:32 embed-certs-024748 crio[729]: time="2024-08-19 19:26:32.191361676Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8,PodSandboxId:2cd56c89cb3500385d16c5b82561348e2422ac59ce004cda825f81be1d188ece,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724094792920726029,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7acb6ce1-21b6-4cdd-a5cb-76d694fc0a38,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89e69d6f405cec355f5cd65f38a963570166553b8598f4fca5b73a80d437338d,PodSandboxId:bf6fc22a7831f2da0f48530f74acaeb6bd79a7a1af15d958a29a142868066ff6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724094771757901345,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5ea66261-4ba9-4b4c-9d2f-4ad3490d3ed5,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f,PodSandboxId:0c91d6c776a7fded12e3aaa9edf57d7888401809c72d077dc752208355bfb3cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724094768498234032,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-7ww4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbde00d4-6027-4d8d-b51e-bd68915da166,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338,PodSandboxId:472436dd2272fa86a82a775f24e7cf1ddccabcfd91d314c0adba450bd1bcb6c0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724094762648038232,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bmmbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f77f152-f5f4-40f6-9
632-1eaa36b9ea31,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6,PodSandboxId:2cd56c89cb3500385d16c5b82561348e2422ac59ce004cda825f81be1d188ece,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724094762645492196,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7acb6ce1-21b6-4cdd-a5cb-76d694fc0a
38,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22,PodSandboxId:f276ebca5e26f21d36a567d969915505ddf60b7ea37d9c8b78d529962a2fcc8d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724094757719644043,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-024748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e50e02ebe8c4a08870ffac68ea5d2832,},Annotat
ions:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071,PodSandboxId:0c32a47af88ab983143fe824a6c65ca5175d816ef2a93f2233540e92436fbae4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724094757709229899,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-024748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b58db56f9002dd73de08465fd3a
06c18,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a,PodSandboxId:9ecf2a88c0af30fddbde514ebf4371ab2edb96b5b3d009b04c544d1fecea9381,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724094757706929424,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-024748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfe3830de5cba7ee0cea7d338361cf28,},Anno
tations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672,PodSandboxId:f6cd7683df1d2eace86e8ace9e6f78d5db7173e03f1f652874fa0a76909a253c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724094757704514233,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-024748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ff95a4cad5e2fe07b2e8f0bc0f26a77,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d4a72206-5f82-49cd-b403-0193be65316d name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:26:32 embed-certs-024748 crio[729]: time="2024-08-19 19:26:32.227243282Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=30736827-4561-478c-bbba-22a8d929d41c name=/runtime.v1.RuntimeService/Version
	Aug 19 19:26:32 embed-certs-024748 crio[729]: time="2024-08-19 19:26:32.227372398Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=30736827-4561-478c-bbba-22a8d929d41c name=/runtime.v1.RuntimeService/Version
	Aug 19 19:26:32 embed-certs-024748 crio[729]: time="2024-08-19 19:26:32.228738480Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cfce99da-a4b1-4d8d-952b-6eff18bfd9ff name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:26:32 embed-certs-024748 crio[729]: time="2024-08-19 19:26:32.229128107Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095592229109023,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cfce99da-a4b1-4d8d-952b-6eff18bfd9ff name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:26:32 embed-certs-024748 crio[729]: time="2024-08-19 19:26:32.229775779Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=43ab2cd0-907a-4047-82b8-386ee8d36ef8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:26:32 embed-certs-024748 crio[729]: time="2024-08-19 19:26:32.229876669Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=43ab2cd0-907a-4047-82b8-386ee8d36ef8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:26:32 embed-certs-024748 crio[729]: time="2024-08-19 19:26:32.230085010Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8,PodSandboxId:2cd56c89cb3500385d16c5b82561348e2422ac59ce004cda825f81be1d188ece,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724094792920726029,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7acb6ce1-21b6-4cdd-a5cb-76d694fc0a38,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89e69d6f405cec355f5cd65f38a963570166553b8598f4fca5b73a80d437338d,PodSandboxId:bf6fc22a7831f2da0f48530f74acaeb6bd79a7a1af15d958a29a142868066ff6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724094771757901345,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5ea66261-4ba9-4b4c-9d2f-4ad3490d3ed5,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f,PodSandboxId:0c91d6c776a7fded12e3aaa9edf57d7888401809c72d077dc752208355bfb3cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724094768498234032,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-7ww4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbde00d4-6027-4d8d-b51e-bd68915da166,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338,PodSandboxId:472436dd2272fa86a82a775f24e7cf1ddccabcfd91d314c0adba450bd1bcb6c0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724094762648038232,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bmmbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f77f152-f5f4-40f6-9
632-1eaa36b9ea31,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6,PodSandboxId:2cd56c89cb3500385d16c5b82561348e2422ac59ce004cda825f81be1d188ece,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724094762645492196,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7acb6ce1-21b6-4cdd-a5cb-76d694fc0a
38,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22,PodSandboxId:f276ebca5e26f21d36a567d969915505ddf60b7ea37d9c8b78d529962a2fcc8d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724094757719644043,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-024748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e50e02ebe8c4a08870ffac68ea5d2832,},Annotat
ions:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071,PodSandboxId:0c32a47af88ab983143fe824a6c65ca5175d816ef2a93f2233540e92436fbae4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724094757709229899,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-024748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b58db56f9002dd73de08465fd3a
06c18,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a,PodSandboxId:9ecf2a88c0af30fddbde514ebf4371ab2edb96b5b3d009b04c544d1fecea9381,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724094757706929424,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-024748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfe3830de5cba7ee0cea7d338361cf28,},Anno
tations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672,PodSandboxId:f6cd7683df1d2eace86e8ace9e6f78d5db7173e03f1f652874fa0a76909a253c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724094757704514233,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-024748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ff95a4cad5e2fe07b2e8f0bc0f26a77,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=43ab2cd0-907a-4047-82b8-386ee8d36ef8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:26:32 embed-certs-024748 crio[729]: time="2024-08-19 19:26:32.262685074Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fc6b573b-7d93-42f6-8e66-acd65154c345 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:26:32 embed-certs-024748 crio[729]: time="2024-08-19 19:26:32.262969058Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fc6b573b-7d93-42f6-8e66-acd65154c345 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:26:32 embed-certs-024748 crio[729]: time="2024-08-19 19:26:32.264517581Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c3b78665-011e-45ab-932d-e180acdf38bd name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:26:32 embed-certs-024748 crio[729]: time="2024-08-19 19:26:32.264896972Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095592264877517,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c3b78665-011e-45ab-932d-e180acdf38bd name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:26:32 embed-certs-024748 crio[729]: time="2024-08-19 19:26:32.265461846Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6081e0cf-c34d-40a3-bded-9849fba55f45 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:26:32 embed-certs-024748 crio[729]: time="2024-08-19 19:26:32.265530129Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6081e0cf-c34d-40a3-bded-9849fba55f45 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:26:32 embed-certs-024748 crio[729]: time="2024-08-19 19:26:32.265751130Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8,PodSandboxId:2cd56c89cb3500385d16c5b82561348e2422ac59ce004cda825f81be1d188ece,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724094792920726029,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7acb6ce1-21b6-4cdd-a5cb-76d694fc0a38,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89e69d6f405cec355f5cd65f38a963570166553b8598f4fca5b73a80d437338d,PodSandboxId:bf6fc22a7831f2da0f48530f74acaeb6bd79a7a1af15d958a29a142868066ff6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724094771757901345,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5ea66261-4ba9-4b4c-9d2f-4ad3490d3ed5,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f,PodSandboxId:0c91d6c776a7fded12e3aaa9edf57d7888401809c72d077dc752208355bfb3cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724094768498234032,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-7ww4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbde00d4-6027-4d8d-b51e-bd68915da166,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338,PodSandboxId:472436dd2272fa86a82a775f24e7cf1ddccabcfd91d314c0adba450bd1bcb6c0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724094762648038232,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bmmbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f77f152-f5f4-40f6-9
632-1eaa36b9ea31,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6,PodSandboxId:2cd56c89cb3500385d16c5b82561348e2422ac59ce004cda825f81be1d188ece,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724094762645492196,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7acb6ce1-21b6-4cdd-a5cb-76d694fc0a
38,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22,PodSandboxId:f276ebca5e26f21d36a567d969915505ddf60b7ea37d9c8b78d529962a2fcc8d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724094757719644043,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-024748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e50e02ebe8c4a08870ffac68ea5d2832,},Annotat
ions:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071,PodSandboxId:0c32a47af88ab983143fe824a6c65ca5175d816ef2a93f2233540e92436fbae4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724094757709229899,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-024748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b58db56f9002dd73de08465fd3a
06c18,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a,PodSandboxId:9ecf2a88c0af30fddbde514ebf4371ab2edb96b5b3d009b04c544d1fecea9381,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724094757706929424,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-024748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfe3830de5cba7ee0cea7d338361cf28,},Anno
tations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672,PodSandboxId:f6cd7683df1d2eace86e8ace9e6f78d5db7173e03f1f652874fa0a76909a253c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724094757704514233,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-024748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ff95a4cad5e2fe07b2e8f0bc0f26a77,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6081e0cf-c34d-40a3-bded-9849fba55f45 name=/runtime.v1.RuntimeService/ListContainers
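
The CRI-O entries above are the runtime's per-request debug logging: each Request/Response pair is one CRI call (Version, ImageFsInfo, ListContainers) issued by the kubelet and traced through otel-collector/interceptors.go. To stream the same journal interactively, something like the following should work; this is an illustrative command reusing the profile name from these logs, and the binary path may differ in a local checkout:

  out/minikube-linux-amd64 -p embed-certs-024748 ssh "sudo journalctl -u crio --no-pager -n 100"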
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	902796698c02b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Running             storage-provisioner       2                   2cd56c89cb350       storage-provisioner
	89e69d6f405ce       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   bf6fc22a7831f       busybox
	a6bc5b24f616e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   0c91d6c776a7f       coredns-6f6b679f8f-7ww4z
	3e23a8501fe93       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      13 minutes ago      Running             kube-proxy                1                   472436dd2272f       kube-proxy-bmmbh
	44a4290db8405       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   2cd56c89cb350       storage-provisioner
	c09c2a3840c6b       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      13 minutes ago      Running             kube-scheduler            1                   f276ebca5e26f       kube-scheduler-embed-certs-024748
	6e6dab43bac16       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      13 minutes ago      Running             kube-controller-manager   1                   0c32a47af88ab       kube-controller-manager-embed-certs-024748
	d66ad075c652a       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      13 minutes ago      Running             kube-apiserver            1                   9ecf2a88c0af3       kube-apiserver-embed-certs-024748
	a3cb2c04e3eb3       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   f6cd7683df1d2       etcd-embed-certs-024748
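
The table above is the node's container listing at capture time. An equivalent listing can be reproduced with crictl inside the guest (illustrative command; profile name taken from this log):

  out/minikube-linux-amd64 -p embed-certs-024748 ssh "sudo crictl ps -a"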
	
	
	==> coredns [a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54879 - 58031 "HINFO IN 5320066620498500483.4879752652727281099. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023169116s
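
CoreDNS came up cleanly and only logged its own HINFO self-check query. The same output can be fetched without the full log bundle by querying the pod named in the container listing above (assuming the kubeconfig context matches the profile name):

  kubectl --context embed-certs-024748 -n kube-system logs coredns-6f6b679f8f-7ww4z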
	
	
	==> describe nodes <==
	Name:               embed-certs-024748
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-024748
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411
	                    minikube.k8s.io/name=embed-certs-024748
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T19_03_59_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 19:03:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-024748
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 19:26:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 19:23:23 +0000   Mon, 19 Aug 2024 19:03:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 19:23:23 +0000   Mon, 19 Aug 2024 19:03:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 19:23:23 +0000   Mon, 19 Aug 2024 19:03:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 19:23:23 +0000   Mon, 19 Aug 2024 19:12:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.96
	  Hostname:    embed-certs-024748
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c1d25bc85cf54318a724e1632e8d037c
	  System UUID:                c1d25bc8-5cf5-4318-a724-e1632e8d037c
	  Boot ID:                    10a9592c-f3d9-46b1-ae6c-c03919493ddc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-6f6b679f8f-7ww4z                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-embed-certs-024748                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-embed-certs-024748             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-embed-certs-024748    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-bmmbh                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-embed-certs-024748             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-6867b74b74-kxcwh               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     22m                kubelet          Node embed-certs-024748 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node embed-certs-024748 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                22m                kubelet          Node embed-certs-024748 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node embed-certs-024748 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           22m                node-controller  Node embed-certs-024748 event: Registered Node embed-certs-024748 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-024748 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-024748 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-024748 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-024748 event: Registered Node embed-certs-024748 in Controller
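
The node description above is standard kubectl describe node output; re-running it shows whether the conditions or the pod set have changed since this capture (illustrative command, context name assumed to match the profile):

  kubectl --context embed-certs-024748 describe node embed-certs-024748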
	
	
	==> dmesg <==
	[Aug19 19:12] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.060475] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041906] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.963569] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.445760] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.603121] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.037605] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.065932] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058817] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +0.208895] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +0.141663] systemd-fstab-generator[683]: Ignoring "noauto" option for root device
	[  +0.310180] systemd-fstab-generator[714]: Ignoring "noauto" option for root device
	[  +4.273716] systemd-fstab-generator[810]: Ignoring "noauto" option for root device
	[  +0.058777] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.777321] systemd-fstab-generator[929]: Ignoring "noauto" option for root device
	[  +6.136672] kauditd_printk_skb: 97 callbacks suppressed
	[  +1.389340] systemd-fstab-generator[1542]: Ignoring "noauto" option for root device
	[  +3.877866] kauditd_printk_skb: 80 callbacks suppressed
	[Aug19 19:13] kauditd_printk_skb: 33 callbacks suppressed
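
The dmesg excerpt covers the VM boot at 19:12 (nomodeset and Spectre V2 warnings, systemd-fstab-generator entries, kauditd throttling). The same buffer can be re-read from the guest if needed (illustrative command):

  out/minikube-linux-amd64 -p embed-certs-024748 ssh "sudo dmesg | tail -n 50"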
	
	
	==> etcd [a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672] <==
	{"level":"info","ts":"2024-08-19T19:12:38.150695Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-19T19:12:38.152588Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"47cb81eff46e1a33","initial-advertise-peer-urls":["https://192.168.72.96:2380"],"listen-peer-urls":["https://192.168.72.96:2380"],"advertise-client-urls":["https://192.168.72.96:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.96:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-19T19:12:38.154351Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-19T19:12:38.154615Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.72.96:2380"}
	{"level":"info","ts":"2024-08-19T19:12:38.154673Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.72.96:2380"}
	{"level":"info","ts":"2024-08-19T19:12:39.205410Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"47cb81eff46e1a33 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-19T19:12:39.205598Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"47cb81eff46e1a33 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-19T19:12:39.205660Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"47cb81eff46e1a33 received MsgPreVoteResp from 47cb81eff46e1a33 at term 2"}
	{"level":"info","ts":"2024-08-19T19:12:39.205703Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"47cb81eff46e1a33 became candidate at term 3"}
	{"level":"info","ts":"2024-08-19T19:12:39.205733Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"47cb81eff46e1a33 received MsgVoteResp from 47cb81eff46e1a33 at term 3"}
	{"level":"info","ts":"2024-08-19T19:12:39.205786Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"47cb81eff46e1a33 became leader at term 3"}
	{"level":"info","ts":"2024-08-19T19:12:39.205820Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 47cb81eff46e1a33 elected leader 47cb81eff46e1a33 at term 3"}
	{"level":"info","ts":"2024-08-19T19:12:39.210654Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"47cb81eff46e1a33","local-member-attributes":"{Name:embed-certs-024748 ClientURLs:[https://192.168.72.96:2379]}","request-path":"/0/members/47cb81eff46e1a33/attributes","cluster-id":"2b714774277180ad","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-19T19:12:39.211325Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T19:12:39.212258Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T19:12:39.218236Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T19:12:39.226337Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T19:12:39.226572Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T19:12:39.226609Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T19:12:39.227089Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T19:12:39.227829Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.96:2379"}
	{"level":"warn","ts":"2024-08-19T19:12:57.384257Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"157.230792ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1888012562079734476 > lease_revoke:<id:1a33916c0e3cd929>","response":"size:27"}
	{"level":"info","ts":"2024-08-19T19:22:39.268891Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":868}
	{"level":"info","ts":"2024-08-19T19:22:39.279559Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":868,"took":"9.952226ms","hash":1381353552,"current-db-size-bytes":2719744,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2719744,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-08-19T19:22:39.279669Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1381353552,"revision":868,"compact-revision":-1}
	
	
	==> kernel <==
	 19:26:32 up 14 min,  0 users,  load average: 0.53, 0.38, 0.22
	Linux embed-certs-024748 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0819 19:22:41.655256       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 19:22:41.655476       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0819 19:22:41.657537       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 19:22:41.657601       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0819 19:23:41.657949       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 19:23:41.658129       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0819 19:23:41.658201       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 19:23:41.658232       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0819 19:23:41.659246       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 19:23:41.659359       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0819 19:25:41.659808       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 19:25:41.659922       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0819 19:25:41.659836       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 19:25:41.660017       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0819 19:25:41.661400       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 19:25:41.661434       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
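
The repeating 503 errors show the apiserver failing to fetch the OpenAPI spec from the aggregated v1beta1.metrics.k8s.io API, i.e. metrics-server is not serving; this matches metrics-server-6867b74b74-kxcwh being scheduled on the node but absent from the running container list above. A quick confirmation is to inspect the APIService registration (illustrative command, context name assumed):

  kubectl --context embed-certs-024748 get apiservice v1beta1.metrics.k8s.io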
	
	
	==> kube-controller-manager [6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071] <==
	E0819 19:21:14.200383       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:21:14.775278       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 19:21:44.207595       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:21:44.783907       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 19:22:14.213748       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:22:14.791221       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 19:22:44.223962       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:22:44.800637       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 19:23:14.229657       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:23:14.810108       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0819 19:23:23.808898       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-024748"
	E0819 19:23:44.236173       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:23:44.820583       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0819 19:23:49.725683       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="208.86µs"
	I0819 19:24:01.726955       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="178.511µs"
	E0819 19:24:14.243821       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:24:14.830141       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 19:24:44.251717       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:24:44.839477       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 19:25:14.261633       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:25:14.846625       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 19:25:44.267773       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:25:44.855568       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 19:26:14.275707       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:26:14.862887       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
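
These controller-manager errors are a downstream symptom of the same unavailable metrics.k8s.io group: the resource-quota controller and the garbage collector keep failing discovery against the stale aggregated API. Whether the metrics pipeline works at all can be checked with a one-liner, which is expected to fail while metrics-server is down (illustrative command):

  kubectl --context embed-certs-024748 top nodes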
	
	
	==> kube-proxy [3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 19:12:42.915570       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 19:12:42.928183       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.96"]
	E0819 19:12:42.928260       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 19:12:42.976110       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 19:12:42.976159       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 19:12:42.976193       1 server_linux.go:169] "Using iptables Proxier"
	I0819 19:12:42.978950       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 19:12:42.979386       1 server.go:483] "Version info" version="v1.31.0"
	I0819 19:12:42.979415       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 19:12:42.982732       1 config.go:197] "Starting service config controller"
	I0819 19:12:42.982777       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 19:12:42.982800       1 config.go:104] "Starting endpoint slice config controller"
	I0819 19:12:42.982804       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 19:12:42.984120       1 config.go:326] "Starting node config controller"
	I0819 19:12:42.984144       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 19:12:43.083364       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 19:12:43.083449       1 shared_informer.go:320] Caches are synced for service config
	I0819 19:12:43.084762       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22] <==
	I0819 19:12:38.700079       1 serving.go:386] Generated self-signed cert in-memory
	W0819 19:12:40.633695       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0819 19:12:40.633739       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0819 19:12:40.633749       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0819 19:12:40.633754       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0819 19:12:40.695066       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0819 19:12:40.695121       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 19:12:40.703715       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0819 19:12:40.703966       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0819 19:12:40.703987       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0819 19:12:40.704264       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 19:12:40.805570       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 19:25:16 embed-certs-024748 kubelet[936]: E0819 19:25:16.872385     936 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095516871714653,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:25:26 embed-certs-024748 kubelet[936]: E0819 19:25:26.711728     936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-kxcwh" podUID="15f86629-d916-4fdc-9ecf-9cb1b6c83f85"
	Aug 19 19:25:26 embed-certs-024748 kubelet[936]: E0819 19:25:26.875043     936 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095526874661290,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:25:26 embed-certs-024748 kubelet[936]: E0819 19:25:26.875108     936 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095526874661290,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:25:36 embed-certs-024748 kubelet[936]: E0819 19:25:36.734569     936 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 19:25:36 embed-certs-024748 kubelet[936]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 19:25:36 embed-certs-024748 kubelet[936]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 19:25:36 embed-certs-024748 kubelet[936]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 19:25:36 embed-certs-024748 kubelet[936]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 19:25:36 embed-certs-024748 kubelet[936]: E0819 19:25:36.877838     936 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095536877202600,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:25:36 embed-certs-024748 kubelet[936]: E0819 19:25:36.877874     936 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095536877202600,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:25:38 embed-certs-024748 kubelet[936]: E0819 19:25:38.712526     936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-kxcwh" podUID="15f86629-d916-4fdc-9ecf-9cb1b6c83f85"
	Aug 19 19:25:46 embed-certs-024748 kubelet[936]: E0819 19:25:46.879840     936 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095546879492656,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:25:46 embed-certs-024748 kubelet[936]: E0819 19:25:46.880164     936 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095546879492656,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:25:50 embed-certs-024748 kubelet[936]: E0819 19:25:50.712440     936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-kxcwh" podUID="15f86629-d916-4fdc-9ecf-9cb1b6c83f85"
	Aug 19 19:25:56 embed-certs-024748 kubelet[936]: E0819 19:25:56.883780     936 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095556882906233,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:25:56 embed-certs-024748 kubelet[936]: E0819 19:25:56.883905     936 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095556882906233,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:26:05 embed-certs-024748 kubelet[936]: E0819 19:26:05.712420     936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-kxcwh" podUID="15f86629-d916-4fdc-9ecf-9cb1b6c83f85"
	Aug 19 19:26:06 embed-certs-024748 kubelet[936]: E0819 19:26:06.886563     936 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095566885809148,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:26:06 embed-certs-024748 kubelet[936]: E0819 19:26:06.886620     936 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095566885809148,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:26:16 embed-certs-024748 kubelet[936]: E0819 19:26:16.889264     936 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095576888893131,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:26:16 embed-certs-024748 kubelet[936]: E0819 19:26:16.889363     936 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095576888893131,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:26:18 embed-certs-024748 kubelet[936]: E0819 19:26:18.712021     936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-kxcwh" podUID="15f86629-d916-4fdc-9ecf-9cb1b6c83f85"
	Aug 19 19:26:26 embed-certs-024748 kubelet[936]: E0819 19:26:26.894894     936 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095586892747423,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:26:26 embed-certs-024748 kubelet[936]: E0819 19:26:26.894941     936 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095586892747423,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6] <==
	I0819 19:12:42.788642       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0819 19:13:12.796084       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8] <==
	I0819 19:13:13.018254       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 19:13:13.030894       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 19:13:13.031102       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 19:13:30.437350       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 19:13:30.437857       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-024748_6b5cddaf-f03c-4e72-9562-f24f0996e8ad!
	I0819 19:13:30.437928       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ab9322fd-2e11-4b42-8a8e-29ec8425fd9d", APIVersion:"v1", ResourceVersion:"653", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-024748_6b5cddaf-f03c-4e72-9562-f24f0996e8ad became leader
	I0819 19:13:30.538391       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-024748_6b5cddaf-f03c-4e72-9562-f24f0996e8ad!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-024748 -n embed-certs-024748
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-024748 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-kxcwh
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-024748 describe pod metrics-server-6867b74b74-kxcwh
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-024748 describe pod metrics-server-6867b74b74-kxcwh: exit status 1 (65.574001ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-kxcwh" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-024748 describe pod metrics-server-6867b74b74-kxcwh: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.28s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0819 19:18:21.654043  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/enable-default-cni-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:19:14.028639  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/auto-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:19:44.663841  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kindnet-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:20:13.188005  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:20:24.365999  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/functional-499773/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:20:37.092544  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/auto-571803/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-278232 -n no-preload-278232
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-19 19:27:00.902268376 +0000 UTC m=+6168.102195189
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-278232 -n no-preload-278232
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-278232 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-278232 logs -n 25: (2.112586401s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-571803                           | enable-default-cni-571803    | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | sudo cat                                               |                              |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-571803                           | enable-default-cni-571803    | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | sudo containerd config dump                            |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-571803                           | enable-default-cni-571803    | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | sudo systemctl status crio                             |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-571803                           | enable-default-cni-571803    | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | sudo systemctl cat crio                                |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-571803                           | enable-default-cni-571803    | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |         |                     |                     |
	|         | \;                                                     |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-571803                           | enable-default-cni-571803    | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | sudo crio config                                       |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-571803                           | enable-default-cni-571803    | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	| delete  | -p                                                     | disable-driver-mounts-737091 | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | disable-driver-mounts-737091                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-982795 | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:04 UTC |
	|         | default-k8s-diff-port-982795                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-278232             | no-preload-278232            | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC | 19 Aug 24 19:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-278232                                   | no-preload-278232            | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-982795  | default-k8s-diff-port-982795 | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC | 19 Aug 24 19:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-982795 | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC |                     |
	|         | default-k8s-diff-port-982795                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-024748            | embed-certs-024748           | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC | 19 Aug 24 19:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-024748                                  | embed-certs-024748           | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-104669        | old-k8s-version-104669       | jenkins | v1.33.1 | 19 Aug 24 19:06 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-278232                  | no-preload-278232            | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-278232                                   | no-preload-278232            | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC | 19 Aug 24 19:18 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-982795       | default-k8s-diff-port-982795 | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-024748                 | embed-certs-024748           | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-982795 | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC | 19 Aug 24 19:17 UTC |
	|         | default-k8s-diff-port-982795                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-024748                                  | embed-certs-024748           | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC | 19 Aug 24 19:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-104669                              | old-k8s-version-104669       | jenkins | v1.33.1 | 19 Aug 24 19:08 UTC | 19 Aug 24 19:08 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-104669             | old-k8s-version-104669       | jenkins | v1.33.1 | 19 Aug 24 19:08 UTC | 19 Aug 24 19:08 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-104669                              | old-k8s-version-104669       | jenkins | v1.33.1 | 19 Aug 24 19:08 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 19:08:30
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 19:08:30.532545  438716 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:08:30.532649  438716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:08:30.532657  438716 out.go:358] Setting ErrFile to fd 2...
	I0819 19:08:30.532661  438716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:08:30.532811  438716 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-372744/.minikube/bin
	I0819 19:08:30.533379  438716 out.go:352] Setting JSON to false
	I0819 19:08:30.534373  438716 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10253,"bootTime":1724084257,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 19:08:30.534451  438716 start.go:139] virtualization: kvm guest
	I0819 19:08:30.536658  438716 out.go:177] * [old-k8s-version-104669] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 19:08:30.537921  438716 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 19:08:30.537959  438716 notify.go:220] Checking for updates...
	I0819 19:08:30.540501  438716 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 19:08:30.541864  438716 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 19:08:30.543170  438716 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 19:08:30.544395  438716 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 19:08:30.545614  438716 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 19:08:30.547072  438716 config.go:182] Loaded profile config "old-k8s-version-104669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0819 19:08:30.547468  438716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:08:30.547570  438716 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:08:30.563059  438716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34139
	I0819 19:08:30.563506  438716 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:08:30.564068  438716 main.go:141] libmachine: Using API Version  1
	I0819 19:08:30.564091  438716 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:08:30.564474  438716 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:08:30.564719  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:08:30.566599  438716 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0819 19:08:30.568124  438716 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 19:08:30.568503  438716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:08:30.568541  438716 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:08:30.583805  438716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35313
	I0819 19:08:30.584314  438716 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:08:30.584805  438716 main.go:141] libmachine: Using API Version  1
	I0819 19:08:30.584827  438716 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:08:30.585131  438716 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:08:30.585320  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:08:30.621020  438716 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 19:08:30.622137  438716 start.go:297] selected driver: kvm2
	I0819 19:08:30.622158  438716 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-104669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-104669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:08:30.622252  438716 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 19:08:30.622998  438716 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 19:08:30.623082  438716 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19468-372744/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 19:08:30.638616  438716 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 19:08:30.638998  438716 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 19:08:30.639047  438716 cni.go:84] Creating CNI manager for ""
	I0819 19:08:30.639059  438716 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:08:30.639097  438716 start.go:340] cluster config:
	{Name:old-k8s-version-104669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-104669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:08:30.639243  438716 iso.go:125] acquiring lock: {Name:mk4c0ac1c3202b1a296739df622960e7a0bd8566 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 19:08:30.641823  438716 out.go:177] * Starting "old-k8s-version-104669" primary control-plane node in "old-k8s-version-104669" cluster
	I0819 19:08:30.915976  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:08:30.643167  438716 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 19:08:30.643197  438716 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0819 19:08:30.643205  438716 cache.go:56] Caching tarball of preloaded images
	I0819 19:08:30.643300  438716 preload.go:172] Found /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 19:08:30.643311  438716 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0819 19:08:30.643409  438716 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/config.json ...
	I0819 19:08:30.643583  438716 start.go:360] acquireMachinesLock for old-k8s-version-104669: {Name:mk24ba67a747357e9ce40f1e460d2bb0bc59cc75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 19:08:33.988031  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:08:40.067999  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:08:43.140051  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:08:49.219991  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:08:52.292013  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:08:58.371952  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:01.444061  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:07.523958  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:10.595977  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:16.675955  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:19.748037  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:25.828064  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:28.899972  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:34.980044  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:38.052066  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:44.131960  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:47.203926  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:53.283992  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:56.355952  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:02.435994  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:05.508042  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:11.587960  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:14.660027  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:20.740007  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:23.811991  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:29.891998  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:32.963959  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:39.043942  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:42.116029  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:48.195984  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:51.267954  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:57.347922  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:00.419952  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:06.499978  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:09.572013  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:15.652066  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:18.724012  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:24.804001  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:27.875961  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:33.956046  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:37.027998  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:43.108014  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:46.179987  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:49.184190  438245 start.go:364] duration metric: took 4m21.835882225s to acquireMachinesLock for "default-k8s-diff-port-982795"
	I0819 19:11:49.184280  438245 start.go:96] Skipping create...Using existing machine configuration
	I0819 19:11:49.184296  438245 fix.go:54] fixHost starting: 
	I0819 19:11:49.184628  438245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:11:49.184661  438245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:11:49.200544  438245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38241
	I0819 19:11:49.200994  438245 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:11:49.201530  438245 main.go:141] libmachine: Using API Version  1
	I0819 19:11:49.201560  438245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:11:49.201953  438245 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:11:49.202151  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:11:49.202296  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetState
	I0819 19:11:49.203841  438245 fix.go:112] recreateIfNeeded on default-k8s-diff-port-982795: state=Stopped err=<nil>
	I0819 19:11:49.203875  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	W0819 19:11:49.204042  438245 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 19:11:49.205721  438245 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-982795" ...
	I0819 19:11:49.181717  438001 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:11:49.181755  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetMachineName
	I0819 19:11:49.182097  438001 buildroot.go:166] provisioning hostname "no-preload-278232"
	I0819 19:11:49.182131  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetMachineName
	I0819 19:11:49.182392  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:11:49.184006  438001 machine.go:96] duration metric: took 4m37.423775019s to provisionDockerMachine
	I0819 19:11:49.184078  438001 fix.go:56] duration metric: took 4m37.445408913s for fixHost
	I0819 19:11:49.184091  438001 start.go:83] releasing machines lock for "no-preload-278232", held for 4m37.44544277s
	W0819 19:11:49.184116  438001 start.go:714] error starting host: provision: host is not running
	W0819 19:11:49.184274  438001 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0819 19:11:49.184288  438001 start.go:729] Will try again in 5 seconds ...
	I0819 19:11:49.206739  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .Start
	I0819 19:11:49.206892  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Ensuring networks are active...
	I0819 19:11:49.207586  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Ensuring network default is active
	I0819 19:11:49.207947  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Ensuring network mk-default-k8s-diff-port-982795 is active
	I0819 19:11:49.208368  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Getting domain xml...
	I0819 19:11:49.209114  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Creating domain...
	I0819 19:11:50.421290  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting to get IP...
	I0819 19:11:50.422082  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:50.422490  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:50.422562  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:50.422473  439403 retry.go:31] will retry after 273.434317ms: waiting for machine to come up
	I0819 19:11:50.698167  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:50.698598  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:50.698635  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:50.698569  439403 retry.go:31] will retry after 367.841325ms: waiting for machine to come up
	I0819 19:11:51.068401  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:51.068996  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:51.069019  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:51.068942  439403 retry.go:31] will retry after 460.053559ms: waiting for machine to come up
	I0819 19:11:51.530228  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:51.530700  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:51.530730  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:51.530636  439403 retry.go:31] will retry after 498.222116ms: waiting for machine to come up
	I0819 19:11:52.030322  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:52.030771  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:52.030808  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:52.030710  439403 retry.go:31] will retry after 750.75175ms: waiting for machine to come up
	I0819 19:11:54.186765  438001 start.go:360] acquireMachinesLock for no-preload-278232: {Name:mk24ba67a747357e9ce40f1e460d2bb0bc59cc75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 19:11:52.782638  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:52.783001  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:52.783027  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:52.782952  439403 retry.go:31] will retry after 576.883195ms: waiting for machine to come up
	I0819 19:11:53.361702  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:53.362105  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:53.362138  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:53.362035  439403 retry.go:31] will retry after 900.512446ms: waiting for machine to come up
	I0819 19:11:54.264656  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:54.265032  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:54.265052  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:54.264984  439403 retry.go:31] will retry after 1.339005367s: waiting for machine to come up
	I0819 19:11:55.605816  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:55.606348  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:55.606378  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:55.606304  439403 retry.go:31] will retry after 1.517824531s: waiting for machine to come up
	I0819 19:11:57.126027  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:57.126400  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:57.126426  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:57.126340  439403 retry.go:31] will retry after 2.220939365s: waiting for machine to come up
	I0819 19:11:59.348649  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:59.349041  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:59.349072  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:59.348987  439403 retry.go:31] will retry after 2.830298687s: waiting for machine to come up
	I0819 19:12:02.182934  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:02.183398  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:12:02.183422  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:12:02.183348  439403 retry.go:31] will retry after 2.302725829s: waiting for machine to come up
	I0819 19:12:04.487648  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:04.488074  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:12:04.488108  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:12:04.488016  439403 retry.go:31] will retry after 2.932250361s: waiting for machine to come up
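The repeated "will retry after ...: waiting for machine to come up" lines above come from a bounded poll of the kvm2 driver for a DHCP lease. Below is a minimal Go sketch of that pattern; getIP is a hypothetical stand-in for the driver's IP lookup and the delays are illustrative, not minikube's actual retry.go behaviour.

	// Minimal sketch of the poll-and-back-off loop visible in the log above.
	// getIP is a hypothetical stand-in for the kvm2 driver's IP lookup.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func getIP() (string, error) { return "", errors.New("no DHCP lease yet") }

	func waitForIP(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 250 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := getIP(); err == nil {
				return ip, nil
			}
			// Jittered, growing wait, like the 273ms, 367ms, 460ms, ... steps above.
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			delay += delay / 2
		}
		return "", fmt.Errorf("no IP within %v", timeout)
	}

	func main() {
		if _, err := waitForIP(3 * time.Second); err != nil {
			fmt.Println(err)
		}
	}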
	I0819 19:12:08.736669  438295 start.go:364] duration metric: took 4m39.596501254s to acquireMachinesLock for "embed-certs-024748"
	I0819 19:12:08.736755  438295 start.go:96] Skipping create...Using existing machine configuration
	I0819 19:12:08.736776  438295 fix.go:54] fixHost starting: 
	I0819 19:12:08.737277  438295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:08.737326  438295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:08.754873  438295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36829
	I0819 19:12:08.755301  438295 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:08.755839  438295 main.go:141] libmachine: Using API Version  1
	I0819 19:12:08.755866  438295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:08.756184  438295 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:08.756383  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:08.756525  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetState
	I0819 19:12:08.758092  438295 fix.go:112] recreateIfNeeded on embed-certs-024748: state=Stopped err=<nil>
	I0819 19:12:08.758134  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	W0819 19:12:08.758299  438295 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 19:12:08.760922  438295 out.go:177] * Restarting existing kvm2 VM for "embed-certs-024748" ...
	I0819 19:12:08.762335  438295 main.go:141] libmachine: (embed-certs-024748) Calling .Start
	I0819 19:12:08.762509  438295 main.go:141] libmachine: (embed-certs-024748) Ensuring networks are active...
	I0819 19:12:08.763274  438295 main.go:141] libmachine: (embed-certs-024748) Ensuring network default is active
	I0819 19:12:08.763647  438295 main.go:141] libmachine: (embed-certs-024748) Ensuring network mk-embed-certs-024748 is active
	I0819 19:12:08.764057  438295 main.go:141] libmachine: (embed-certs-024748) Getting domain xml...
	I0819 19:12:08.764765  438295 main.go:141] libmachine: (embed-certs-024748) Creating domain...
	I0819 19:12:07.424132  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.424589  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Found IP for machine: 192.168.61.48
	I0819 19:12:07.424615  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Reserving static IP address...
	I0819 19:12:07.424634  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has current primary IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.425178  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Reserved static IP address: 192.168.61.48
	I0819 19:12:07.425205  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for SSH to be available...
	I0819 19:12:07.425237  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-982795", mac: "52:54:00:d4:19:cd", ip: "192.168.61.48"} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:07.425283  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | skip adding static IP to network mk-default-k8s-diff-port-982795 - found existing host DHCP lease matching {name: "default-k8s-diff-port-982795", mac: "52:54:00:d4:19:cd", ip: "192.168.61.48"}
	I0819 19:12:07.425304  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | Getting to WaitForSSH function...
	I0819 19:12:07.427600  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.427969  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:07.428001  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.428179  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | Using SSH client type: external
	I0819 19:12:07.428245  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | Using SSH private key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa (-rw-------)
	I0819 19:12:07.428297  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.48 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 19:12:07.428321  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | About to run SSH command:
	I0819 19:12:07.428339  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | exit 0
	I0819 19:12:07.547727  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | SSH cmd err, output: <nil>: 
	I0819 19:12:07.548095  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetConfigRaw
	I0819 19:12:07.548741  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetIP
	I0819 19:12:07.551308  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.551700  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:07.551733  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.551967  438245 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795/config.json ...
	I0819 19:12:07.552164  438245 machine.go:93] provisionDockerMachine start ...
	I0819 19:12:07.552186  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:12:07.552427  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:07.554782  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.555062  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:07.555080  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.555219  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:07.555427  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:07.555586  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:07.555767  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:07.555912  438245 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:07.556152  438245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0819 19:12:07.556168  438245 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 19:12:07.655996  438245 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 19:12:07.656027  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetMachineName
	I0819 19:12:07.656301  438245 buildroot.go:166] provisioning hostname "default-k8s-diff-port-982795"
	I0819 19:12:07.656329  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetMachineName
	I0819 19:12:07.656530  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:07.658956  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.659311  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:07.659344  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.659439  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:07.659617  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:07.659813  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:07.659937  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:07.660112  438245 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:07.660291  438245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0819 19:12:07.660302  438245 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-982795 && echo "default-k8s-diff-port-982795" | sudo tee /etc/hostname
	I0819 19:12:07.773590  438245 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-982795
	
	I0819 19:12:07.773615  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:07.776994  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.777360  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:07.777399  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.777580  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:07.777860  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:07.778060  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:07.778273  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:07.778457  438245 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:07.778665  438245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0819 19:12:07.778687  438245 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-982795' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-982795/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-982795' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 19:12:07.884662  438245 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:12:07.884718  438245 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19468-372744/.minikube CaCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19468-372744/.minikube}
	I0819 19:12:07.884751  438245 buildroot.go:174] setting up certificates
	I0819 19:12:07.884768  438245 provision.go:84] configureAuth start
	I0819 19:12:07.884782  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetMachineName
	I0819 19:12:07.885101  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetIP
	I0819 19:12:07.887844  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.888262  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:07.888293  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.888439  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:07.890581  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.890977  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:07.891005  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.891136  438245 provision.go:143] copyHostCerts
	I0819 19:12:07.891219  438245 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem, removing ...
	I0819 19:12:07.891240  438245 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 19:12:07.891306  438245 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem (1082 bytes)
	I0819 19:12:07.891398  438245 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem, removing ...
	I0819 19:12:07.891406  438245 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 19:12:07.891430  438245 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem (1123 bytes)
	I0819 19:12:07.891487  438245 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem, removing ...
	I0819 19:12:07.891494  438245 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 19:12:07.891517  438245 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem (1675 bytes)
	I0819 19:12:07.891570  438245 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-982795 san=[127.0.0.1 192.168.61.48 default-k8s-diff-port-982795 localhost minikube]
	I0819 19:12:08.083963  438245 provision.go:177] copyRemoteCerts
	I0819 19:12:08.084024  438245 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 19:12:08.084086  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:08.086637  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.086961  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:08.087005  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.087144  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:08.087357  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:08.087507  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:08.087694  438245 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa Username:docker}
	I0819 19:12:08.166312  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 19:12:08.194124  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0819 19:12:08.221817  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 19:12:08.249674  438245 provision.go:87] duration metric: took 364.885827ms to configureAuth
	I0819 19:12:08.249709  438245 buildroot.go:189] setting minikube options for container-runtime
	I0819 19:12:08.249891  438245 config.go:182] Loaded profile config "default-k8s-diff-port-982795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:12:08.249983  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:08.253045  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.253438  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:08.253469  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.253647  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:08.253856  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:08.254071  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:08.254266  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:08.254481  438245 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:08.254700  438245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0819 19:12:08.254722  438245 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 19:12:08.508775  438245 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 19:12:08.508808  438245 machine.go:96] duration metric: took 956.629475ms to provisionDockerMachine
	I0819 19:12:08.508824  438245 start.go:293] postStartSetup for "default-k8s-diff-port-982795" (driver="kvm2")
	I0819 19:12:08.508838  438245 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 19:12:08.508868  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:12:08.509214  438245 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 19:12:08.509259  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:08.512004  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.512341  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:08.512378  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.512517  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:08.512688  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:08.512867  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:08.513059  438245 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa Username:docker}
	I0819 19:12:08.594287  438245 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 19:12:08.598742  438245 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 19:12:08.598774  438245 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/addons for local assets ...
	I0819 19:12:08.598849  438245 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/files for local assets ...
	I0819 19:12:08.598943  438245 filesync.go:149] local asset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> 3800092.pem in /etc/ssl/certs
	I0819 19:12:08.599029  438245 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 19:12:08.608416  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:12:08.633880  438245 start.go:296] duration metric: took 125.036785ms for postStartSetup
	I0819 19:12:08.633930  438245 fix.go:56] duration metric: took 19.449641939s for fixHost
	I0819 19:12:08.633955  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:08.636729  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.637006  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:08.637030  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.637248  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:08.637483  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:08.637672  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:08.637791  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:08.637954  438245 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:08.638170  438245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0819 19:12:08.638186  438245 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 19:12:08.736519  438245 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724094728.710064462
	
	I0819 19:12:08.736540  438245 fix.go:216] guest clock: 1724094728.710064462
	I0819 19:12:08.736548  438245 fix.go:229] Guest: 2024-08-19 19:12:08.710064462 +0000 UTC Remote: 2024-08-19 19:12:08.633934039 +0000 UTC m=+281.422189217 (delta=76.130423ms)
	I0819 19:12:08.736568  438245 fix.go:200] guest clock delta is within tolerance: 76.130423ms
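fix.go reads the guest clock over SSH with `date +%s.%N` and compares it to the host clock; here the drift is about 76ms and is accepted. A minimal sketch of that comparison is below; the 2s tolerance is an assumption for illustration, since the log does not show the real threshold.

	// Minimal sketch of the guest-clock check: parse `date +%s.%N` output from
	// the guest and compare it with the host time. Tolerance value is assumed.
	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	func clockDelta(guestOutput string, host time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		delta := host.Sub(guest)
		if delta < 0 {
			delta = -delta
		}
		return delta, nil
	}

	func main() {
		host := time.Unix(1724094728, 633934039) // host-side timestamp from the log
		delta, err := clockDelta("1724094728.710064462", host)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Printf("delta=%v, withinTolerance=%v\n", delta, delta <= 2*time.Second)
	}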
	I0819 19:12:08.736580  438245 start.go:83] releasing machines lock for "default-k8s-diff-port-982795", held for 19.552337255s
	I0819 19:12:08.736604  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:12:08.736918  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetIP
	I0819 19:12:08.739570  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.740030  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:08.740057  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.740222  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:12:08.740762  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:12:08.740960  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:12:08.741037  438245 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 19:12:08.741100  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:08.741185  438245 ssh_runner.go:195] Run: cat /version.json
	I0819 19:12:08.741206  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:08.743899  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.744037  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.744282  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:08.744304  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.744439  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:08.744576  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:08.744599  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:08.744607  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.744689  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:08.744786  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:08.744858  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:08.744923  438245 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa Username:docker}
	I0819 19:12:08.744997  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:08.745143  438245 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa Username:docker}
	I0819 19:12:08.820672  438245 ssh_runner.go:195] Run: systemctl --version
	I0819 19:12:08.847046  438245 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 19:12:08.989725  438245 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 19:12:08.996607  438245 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 19:12:08.996680  438245 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 19:12:09.013017  438245 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 19:12:09.013067  438245 start.go:495] detecting cgroup driver to use...
	I0819 19:12:09.013144  438245 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 19:12:09.030338  438245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 19:12:09.044580  438245 docker.go:217] disabling cri-docker service (if available) ...
	I0819 19:12:09.044635  438245 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 19:12:09.058825  438245 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 19:12:09.073358  438245 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 19:12:09.194611  438245 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 19:12:09.333368  438245 docker.go:233] disabling docker service ...
	I0819 19:12:09.333446  438245 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 19:12:09.348775  438245 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 19:12:09.362911  438245 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 19:12:09.503015  438245 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 19:12:09.621246  438245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 19:12:09.638480  438245 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 19:12:09.659346  438245 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 19:12:09.659406  438245 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:09.672088  438245 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 19:12:09.672166  438245 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:09.683704  438245 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:09.694847  438245 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:09.706339  438245 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 19:12:09.718658  438245 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:09.730645  438245 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:09.750843  438245 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:09.762551  438245 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 19:12:09.772960  438245 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 19:12:09.773037  438245 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 19:12:09.788362  438245 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 19:12:09.798695  438245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:12:09.923389  438245 ssh_runner.go:195] Run: sudo systemctl restart crio
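Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the settings below before crio is restarted. This fragment is reconstructed from the logged commands rather than captured from the VM, and the section headers are assumed from CRI-O's stock config layout:

	# pause image used for pod sandboxes
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	# kubelet is configured for cgroupfs, so CRI-O must match
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]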
	I0819 19:12:10.063317  438245 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 19:12:10.063413  438245 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 19:12:10.068449  438245 start.go:563] Will wait 60s for crictl version
	I0819 19:12:10.068540  438245 ssh_runner.go:195] Run: which crictl
	I0819 19:12:10.072807  438245 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 19:12:10.114058  438245 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 19:12:10.114151  438245 ssh_runner.go:195] Run: crio --version
	I0819 19:12:10.147919  438245 ssh_runner.go:195] Run: crio --version
	I0819 19:12:10.180009  438245 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 19:12:10.181218  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetIP
	I0819 19:12:10.184626  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:10.185015  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:10.185049  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:10.185243  438245 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0819 19:12:10.189653  438245 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:12:10.203439  438245 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-982795 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-982795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.48 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 19:12:10.203608  438245 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 19:12:10.203668  438245 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:12:10.241427  438245 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 19:12:10.241511  438245 ssh_runner.go:195] Run: which lz4
	I0819 19:12:10.245734  438245 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 19:12:10.250082  438245 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 19:12:10.250112  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 19:12:11.694285  438245 crio.go:462] duration metric: took 1.448590086s to copy over tarball
	I0819 19:12:11.694371  438245 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 19:12:10.028225  438295 main.go:141] libmachine: (embed-certs-024748) Waiting to get IP...
	I0819 19:12:10.029208  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:10.029696  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:10.029752  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:10.029666  439540 retry.go:31] will retry after 276.66184ms: waiting for machine to come up
	I0819 19:12:10.308339  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:10.308762  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:10.308804  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:10.308710  439540 retry.go:31] will retry after 279.376198ms: waiting for machine to come up
	I0819 19:12:10.590326  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:10.591084  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:10.591117  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:10.590861  439540 retry.go:31] will retry after 364.735563ms: waiting for machine to come up
	I0819 19:12:10.957592  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:10.958075  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:10.958100  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:10.958033  439540 retry.go:31] will retry after 384.275284ms: waiting for machine to come up
	I0819 19:12:11.343631  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:11.344169  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:11.344192  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:11.344125  439540 retry.go:31] will retry after 572.182522ms: waiting for machine to come up
	I0819 19:12:11.917660  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:11.918150  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:11.918179  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:11.918093  439540 retry.go:31] will retry after 767.807058ms: waiting for machine to come up
	I0819 19:12:12.687256  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:12.687782  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:12.687815  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:12.687728  439540 retry.go:31] will retry after 715.897037ms: waiting for machine to come up
	I0819 19:12:13.406041  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:13.406653  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:13.406690  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:13.406577  439540 retry.go:31] will retry after 1.301579737s: waiting for machine to come up
	I0819 19:12:13.847779  438245 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.153373496s)
	I0819 19:12:13.847810  438245 crio.go:469] duration metric: took 2.153488101s to extract the tarball
	I0819 19:12:13.847817  438245 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 19:12:13.885520  438245 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:12:13.929775  438245 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 19:12:13.929809  438245 cache_images.go:84] Images are preloaded, skipping loading
	I0819 19:12:13.929838  438245 kubeadm.go:934] updating node { 192.168.61.48 8444 v1.31.0 crio true true} ...
	I0819 19:12:13.930019  438245 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-982795 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.48
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-982795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 19:12:13.930113  438245 ssh_runner.go:195] Run: crio config
	I0819 19:12:13.977098  438245 cni.go:84] Creating CNI manager for ""
	I0819 19:12:13.977123  438245 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:12:13.977136  438245 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 19:12:13.977176  438245 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.48 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-982795 NodeName:default-k8s-diff-port-982795 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 19:12:13.977382  438245 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.48
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-982795"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
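The dump above is the full set of kubeadm, kubelet, and kube-proxy documents that minikube later copies to /var/tmp/minikube/kubeadm.yaml.new. As a rough illustration of how such a file can be rendered from cluster parameters, here is a minimal Go sketch using text/template; the struct fields and template are hypothetical and are not minikube's real types or template:

// Hypothetical sketch: rendering a kubeadm InitConfiguration fragment from
// cluster parameters with text/template. Illustrative only.
package main

import (
	"os"
	"text/template"
)

type initCfg struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	NodeIP           string
	CRISocket        string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

func main() {
	cfg := initCfg{
		AdvertiseAddress: "192.168.61.48",
		BindPort:         8444,
		NodeName:         "default-k8s-diff-port-982795",
		NodeIP:           "192.168.61.48",
		CRISocket:        "unix:///var/run/crio/crio.sock",
	}
	t := template.Must(template.New("init").Parse(initTmpl))
	if err := t.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}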
	I0819 19:12:13.977461  438245 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 19:12:13.987276  438245 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 19:12:13.987381  438245 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 19:12:13.996666  438245 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0819 19:12:14.013822  438245 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 19:12:14.030936  438245 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0819 19:12:14.048575  438245 ssh_runner.go:195] Run: grep 192.168.61.48	control-plane.minikube.internal$ /etc/hosts
	I0819 19:12:14.052809  438245 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.48	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:12:14.065177  438245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:12:14.185159  438245 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:12:14.202906  438245 certs.go:68] Setting up /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795 for IP: 192.168.61.48
	I0819 19:12:14.202934  438245 certs.go:194] generating shared ca certs ...
	I0819 19:12:14.202966  438245 certs.go:226] acquiring lock for ca certs: {Name:mk639e03f593e0bccac045f6e9f5ba3b96cc81e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:14.203184  438245 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key
	I0819 19:12:14.203266  438245 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key
	I0819 19:12:14.203282  438245 certs.go:256] generating profile certs ...
	I0819 19:12:14.203399  438245 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795/client.key
	I0819 19:12:14.203487  438245 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795/apiserver.key.a3c7a519
	I0819 19:12:14.203552  438245 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795/proxy-client.key
	I0819 19:12:14.203757  438245 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem (1338 bytes)
	W0819 19:12:14.203820  438245 certs.go:480] ignoring /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009_empty.pem, impossibly tiny 0 bytes
	I0819 19:12:14.203834  438245 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 19:12:14.203866  438245 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem (1082 bytes)
	I0819 19:12:14.203899  438245 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem (1123 bytes)
	I0819 19:12:14.203929  438245 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem (1675 bytes)
	I0819 19:12:14.203994  438245 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:12:14.205025  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 19:12:14.258243  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 19:12:14.295380  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 19:12:14.330511  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 19:12:14.358547  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0819 19:12:14.386938  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 19:12:14.415021  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 19:12:14.439531  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 19:12:14.463969  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 19:12:14.487638  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem --> /usr/share/ca-certificates/380009.pem (1338 bytes)
	I0819 19:12:14.511571  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /usr/share/ca-certificates/3800092.pem (1708 bytes)
	I0819 19:12:14.535223  438245 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 19:12:14.552922  438245 ssh_runner.go:195] Run: openssl version
	I0819 19:12:14.559078  438245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 19:12:14.570605  438245 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:14.575411  438245 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:14.575484  438245 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:14.581714  438245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 19:12:14.592896  438245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/380009.pem && ln -fs /usr/share/ca-certificates/380009.pem /etc/ssl/certs/380009.pem"
	I0819 19:12:14.604306  438245 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/380009.pem
	I0819 19:12:14.609139  438245 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:56 /usr/share/ca-certificates/380009.pem
	I0819 19:12:14.609212  438245 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/380009.pem
	I0819 19:12:14.615160  438245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/380009.pem /etc/ssl/certs/51391683.0"
	I0819 19:12:14.626010  438245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3800092.pem && ln -fs /usr/share/ca-certificates/3800092.pem /etc/ssl/certs/3800092.pem"
	I0819 19:12:14.636821  438245 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3800092.pem
	I0819 19:12:14.641308  438245 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:56 /usr/share/ca-certificates/3800092.pem
	I0819 19:12:14.641358  438245 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3800092.pem
	I0819 19:12:14.646898  438245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3800092.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 19:12:14.657905  438245 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 19:12:14.662780  438245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 19:12:14.668934  438245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 19:12:14.674693  438245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 19:12:14.680683  438245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 19:12:14.686689  438245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 19:12:14.692678  438245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
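Each `openssl x509 -noout -in <cert> -checkend 86400` call above succeeds only if the certificate is still valid 24 hours from now, which is how minikube decides the existing control-plane certificates can be reused. A rough Go equivalent of that single check (a hypothetical helper, not minikube's certs code):

// Hypothetical sketch of `openssl x509 -noout -in <cert> -checkend 86400`:
// report whether a PEM-encoded certificate is still valid 24 hours from now.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Valid if the certificate's NotAfter is later than now + d.
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("valid for next 24h:", ok)
}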
	I0819 19:12:14.698784  438245 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-982795 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-982795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.48 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:12:14.698930  438245 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 19:12:14.699006  438245 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:12:14.740881  438245 cri.go:89] found id: ""
	I0819 19:12:14.740964  438245 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 19:12:14.751589  438245 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 19:12:14.751613  438245 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 19:12:14.751665  438245 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 19:12:14.761837  438245 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:12:14.762870  438245 kubeconfig.go:125] found "default-k8s-diff-port-982795" server: "https://192.168.61.48:8444"
	I0819 19:12:14.765176  438245 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 19:12:14.775114  438245 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.48
	I0819 19:12:14.775147  438245 kubeadm.go:1160] stopping kube-system containers ...
	I0819 19:12:14.775161  438245 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 19:12:14.775228  438245 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:12:14.811373  438245 cri.go:89] found id: ""
	I0819 19:12:14.811442  438245 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 19:12:14.829656  438245 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:12:14.840215  438245 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:12:14.840236  438245 kubeadm.go:157] found existing configuration files:
	
	I0819 19:12:14.840288  438245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0819 19:12:14.850017  438245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:12:14.850075  438245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:12:14.860060  438245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0819 19:12:14.869589  438245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:12:14.869645  438245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:12:14.879249  438245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0819 19:12:14.888475  438245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:12:14.888532  438245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:12:14.898151  438245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0819 19:12:14.907628  438245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:12:14.907737  438245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 19:12:14.917581  438245 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 19:12:14.927119  438245 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:15.037162  438245 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:16.355430  438245 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.318225023s)
	I0819 19:12:16.355461  438245 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:16.566565  438245 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:16.649402  438245 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:16.775956  438245 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:12:16.776067  438245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:14.709988  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:14.710397  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:14.710429  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:14.710338  439540 retry.go:31] will retry after 1.420823505s: waiting for machine to come up
	I0819 19:12:16.133160  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:16.133558  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:16.133587  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:16.133531  439540 retry.go:31] will retry after 1.71697779s: waiting for machine to come up
	I0819 19:12:17.852342  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:17.852884  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:17.852922  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:17.852836  439540 retry.go:31] will retry after 2.816782354s: waiting for machine to come up
	I0819 19:12:17.277067  438245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:17.777027  438245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:17.797513  438245 api_server.go:72] duration metric: took 1.021572879s to wait for apiserver process to appear ...
	I0819 19:12:17.797554  438245 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:12:17.797596  438245 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8444/healthz ...
	I0819 19:12:17.798191  438245 api_server.go:269] stopped: https://192.168.61.48:8444/healthz: Get "https://192.168.61.48:8444/healthz": dial tcp 192.168.61.48:8444: connect: connection refused
	I0819 19:12:18.297907  438245 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8444/healthz ...
	I0819 19:12:20.177305  438245 api_server.go:279] https://192.168.61.48:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 19:12:20.177345  438245 api_server.go:103] status: https://192.168.61.48:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 19:12:20.177367  438245 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8444/healthz ...
	I0819 19:12:20.244091  438245 api_server.go:279] https://192.168.61.48:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 19:12:20.244140  438245 api_server.go:103] status: https://192.168.61.48:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 19:12:20.298403  438245 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8444/healthz ...
	I0819 19:12:20.304289  438245 api_server.go:279] https://192.168.61.48:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:12:20.304325  438245 api_server.go:103] status: https://192.168.61.48:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:12:20.797876  438245 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8444/healthz ...
	I0819 19:12:20.803894  438245 api_server.go:279] https://192.168.61.48:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:12:20.803935  438245 api_server.go:103] status: https://192.168.61.48:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:12:21.298284  438245 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8444/healthz ...
	I0819 19:12:21.320292  438245 api_server.go:279] https://192.168.61.48:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:12:21.320320  438245 api_server.go:103] status: https://192.168.61.48:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:12:21.797829  438245 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8444/healthz ...
	I0819 19:12:21.802183  438245 api_server.go:279] https://192.168.61.48:8444/healthz returned 200:
	ok
	I0819 19:12:21.809866  438245 api_server.go:141] control plane version: v1.31.0
	I0819 19:12:21.809902  438245 api_server.go:131] duration metric: took 4.012339897s to wait for apiserver health ...
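The polling above retries https://192.168.61.48:8444/healthz roughly every 500ms, tolerating the 403 and 500 responses seen while the apiserver's post-start hooks finish, and stops once the endpoint returns 200/ok. A minimal sketch of such a wait loop, assuming TLS verification is skipped for brevity (minikube's real client trusts the cluster CA):

// Hypothetical sketch of waiting for the apiserver /healthz endpoint to
// return 200. TLS verification is disabled here only to keep the example
// short; a real client should trust the cluster CA instead.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned "ok"
			}
			// 403/500 while post-start hooks settle: keep retrying.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.48:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}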
	I0819 19:12:21.809914  438245 cni.go:84] Creating CNI manager for ""
	I0819 19:12:21.809944  438245 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:12:21.811668  438245 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 19:12:21.813183  438245 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 19:12:21.826170  438245 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 19:12:21.850473  438245 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:12:21.865379  438245 system_pods.go:59] 8 kube-system pods found
	I0819 19:12:21.865422  438245 system_pods.go:61] "coredns-6f6b679f8f-dwbnt" [9b8d7ee3-15ca-475b-b659-d5c3b10890fe] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 19:12:21.865442  438245 system_pods.go:61] "etcd-default-k8s-diff-port-982795" [6686e6f6-485d-4c57-89a1-af4f27b6216e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 19:12:21.865455  438245 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-982795" [fcfb5a0d-6d6c-4c30-a17f-43106f3dd5ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 19:12:21.865475  438245 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-982795" [346bf3b5-57e7-4f30-a6ed-959dc9e8941d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 19:12:21.865485  438245 system_pods.go:61] "kube-proxy-wrczx" [acabdc8e-5397-4531-afcb-57a8f4c48618] Running
	I0819 19:12:21.865493  438245 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-982795" [82de0c57-e712-4c0c-b751-a17cb0dd75b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 19:12:21.865503  438245 system_pods.go:61] "metrics-server-6867b74b74-5hlnx" [394c87af-a198-4fea-8a30-32a8c3e80884] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:12:21.865522  438245 system_pods.go:61] "storage-provisioner" [35f70989-846d-4ec5-b879-a22625ee94ce] Running
	I0819 19:12:21.865534  438245 system_pods.go:74] duration metric: took 15.035147ms to wait for pod list to return data ...
	I0819 19:12:21.865545  438245 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:12:21.870314  438245 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:12:21.870350  438245 node_conditions.go:123] node cpu capacity is 2
	I0819 19:12:21.870366  438245 node_conditions.go:105] duration metric: took 4.813819ms to run NodePressure ...
	I0819 19:12:21.870390  438245 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:22.130916  438245 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 19:12:22.134889  438245 kubeadm.go:739] kubelet initialised
	I0819 19:12:22.134912  438245 kubeadm.go:740] duration metric: took 3.970465ms waiting for restarted kubelet to initialise ...
	I0819 19:12:22.134920  438245 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:12:22.139345  438245 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-dwbnt" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:20.672189  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:20.672655  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:20.672682  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:20.672613  439540 retry.go:31] will retry after 2.76896974s: waiting for machine to come up
	I0819 19:12:23.442804  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:23.443223  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:23.443268  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:23.443170  439540 retry.go:31] will retry after 4.199459292s: waiting for machine to come up
	I0819 19:12:24.145329  438245 pod_ready.go:103] pod "coredns-6f6b679f8f-dwbnt" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:26.645695  438245 pod_ready.go:103] pod "coredns-6f6b679f8f-dwbnt" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:27.644842  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.645376  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has current primary IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.645403  438295 main.go:141] libmachine: (embed-certs-024748) Found IP for machine: 192.168.72.96
	I0819 19:12:27.645417  438295 main.go:141] libmachine: (embed-certs-024748) Reserving static IP address...
	I0819 19:12:27.645874  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "embed-certs-024748", mac: "52:54:00:f0:8b:43", ip: "192.168.72.96"} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:27.645902  438295 main.go:141] libmachine: (embed-certs-024748) Reserved static IP address: 192.168.72.96
	I0819 19:12:27.645919  438295 main.go:141] libmachine: (embed-certs-024748) DBG | skip adding static IP to network mk-embed-certs-024748 - found existing host DHCP lease matching {name: "embed-certs-024748", mac: "52:54:00:f0:8b:43", ip: "192.168.72.96"}
	I0819 19:12:27.645952  438295 main.go:141] libmachine: (embed-certs-024748) Waiting for SSH to be available...
	I0819 19:12:27.645974  438295 main.go:141] libmachine: (embed-certs-024748) DBG | Getting to WaitForSSH function...
	I0819 19:12:27.648195  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.648471  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:27.648496  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.648717  438295 main.go:141] libmachine: (embed-certs-024748) DBG | Using SSH client type: external
	I0819 19:12:27.648744  438295 main.go:141] libmachine: (embed-certs-024748) DBG | Using SSH private key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa (-rw-------)
	I0819 19:12:27.648773  438295 main.go:141] libmachine: (embed-certs-024748) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.96 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 19:12:27.648792  438295 main.go:141] libmachine: (embed-certs-024748) DBG | About to run SSH command:
	I0819 19:12:27.648808  438295 main.go:141] libmachine: (embed-certs-024748) DBG | exit 0
	I0819 19:12:27.775964  438295 main.go:141] libmachine: (embed-certs-024748) DBG | SSH cmd err, output: <nil>: 
	I0819 19:12:27.776344  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetConfigRaw
	I0819 19:12:27.777100  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetIP
	I0819 19:12:27.780096  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.780535  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:27.780570  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.780936  438295 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748/config.json ...
	I0819 19:12:27.781721  438295 machine.go:93] provisionDockerMachine start ...
	I0819 19:12:27.781748  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:27.781974  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:27.784482  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.784838  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:27.784868  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.785066  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:27.785254  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:27.785452  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:27.785617  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:27.785789  438295 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:27.786038  438295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0819 19:12:27.786059  438295 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 19:12:27.904337  438295 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 19:12:27.904375  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetMachineName
	I0819 19:12:27.904675  438295 buildroot.go:166] provisioning hostname "embed-certs-024748"
	I0819 19:12:27.904711  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetMachineName
	I0819 19:12:27.904932  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:27.907960  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.908325  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:27.908354  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.908446  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:27.908659  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:27.908825  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:27.909012  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:27.909234  438295 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:27.909441  438295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0819 19:12:27.909458  438295 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-024748 && echo "embed-certs-024748" | sudo tee /etc/hostname
	I0819 19:12:28.036564  438295 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-024748
	
	I0819 19:12:28.036597  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:28.039385  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.039798  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:28.039827  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.040071  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:28.040327  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:28.040493  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:28.040652  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:28.040882  438295 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:28.041113  438295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0819 19:12:28.041138  438295 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-024748' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-024748/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-024748' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 19:12:28.162311  438295 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:12:28.162348  438295 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19468-372744/.minikube CaCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19468-372744/.minikube}
	I0819 19:12:28.162368  438295 buildroot.go:174] setting up certificates
	I0819 19:12:28.162376  438295 provision.go:84] configureAuth start
	I0819 19:12:28.162385  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetMachineName
	I0819 19:12:28.162703  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetIP
	I0819 19:12:28.165171  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.165563  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:28.165593  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.165727  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:28.167917  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.168199  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:28.168221  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.168411  438295 provision.go:143] copyHostCerts
	I0819 19:12:28.168469  438295 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem, removing ...
	I0819 19:12:28.168491  438295 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 19:12:28.168560  438295 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem (1082 bytes)
	I0819 19:12:28.168693  438295 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem, removing ...
	I0819 19:12:28.168704  438295 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 19:12:28.168736  438295 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem (1123 bytes)
	I0819 19:12:28.168814  438295 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem, removing ...
	I0819 19:12:28.168824  438295 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 19:12:28.168853  438295 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem (1675 bytes)
	I0819 19:12:28.168942  438295 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem org=jenkins.embed-certs-024748 san=[127.0.0.1 192.168.72.96 embed-certs-024748 localhost minikube]
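provision.go is generating a server certificate whose SANs are the list shown above (loopback, the VM IP, the machine name, localhost, minikube). As a rough, self-contained sketch of producing a certificate with that SAN set using crypto/x509 — self-signed here for brevity, whereas the real flow signs with the ca.pem/ca-key.pem pair:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-024748"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs copied from the log line above.
		DNSNames:    []string{"embed-certs-024748", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.96")},
	}
	// Self-signed for the sketch; pass the CA cert and key as parent/signer to get a CA-signed server.pem.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}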
	I0819 19:12:28.447064  438295 provision.go:177] copyRemoteCerts
	I0819 19:12:28.447129  438295 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 19:12:28.447158  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:28.449851  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.450138  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:28.450163  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.450344  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:28.450541  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:28.450713  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:28.450832  438295 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa Username:docker}
	I0819 19:12:28.537815  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 19:12:28.562408  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0819 19:12:28.586728  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 19:12:28.611119  438295 provision.go:87] duration metric: took 448.726133ms to configureAuth
	I0819 19:12:28.611158  438295 buildroot.go:189] setting minikube options for container-runtime
	I0819 19:12:28.611351  438295 config.go:182] Loaded profile config "embed-certs-024748": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:12:28.611428  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:28.614168  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.614543  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:28.614571  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.614736  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:28.614941  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:28.615083  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:28.615192  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:28.615302  438295 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:28.615454  438295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0819 19:12:28.615469  438295 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 19:12:28.890054  438295 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 19:12:28.890086  438295 machine.go:96] duration metric: took 1.10834874s to provisionDockerMachine
	I0819 19:12:28.890100  438295 start.go:293] postStartSetup for "embed-certs-024748" (driver="kvm2")
	I0819 19:12:28.890120  438295 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 19:12:28.890146  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:28.890469  438295 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 19:12:28.890499  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:28.893251  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.893579  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:28.893605  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.893733  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:28.893895  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:28.894102  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:28.894220  438295 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa Username:docker}
	I0819 19:12:28.979381  438295 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 19:12:28.983921  438295 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 19:12:28.983952  438295 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/addons for local assets ...
	I0819 19:12:28.984048  438295 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/files for local assets ...
	I0819 19:12:28.984156  438295 filesync.go:149] local asset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> 3800092.pem in /etc/ssl/certs
	I0819 19:12:28.984250  438295 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 19:12:28.994964  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:12:29.018801  438295 start.go:296] duration metric: took 128.685446ms for postStartSetup
	I0819 19:12:29.018843  438295 fix.go:56] duration metric: took 20.282076509s for fixHost
	I0819 19:12:29.018870  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:29.021554  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:29.021848  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:29.021875  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:29.022066  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:29.022261  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:29.022428  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:29.022526  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:29.022678  438295 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:29.022900  438295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0819 19:12:29.022915  438295 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 19:12:29.132976  438716 start.go:364] duration metric: took 3m58.489348567s to acquireMachinesLock for "old-k8s-version-104669"
	I0819 19:12:29.133047  438716 start.go:96] Skipping create...Using existing machine configuration
	I0819 19:12:29.133055  438716 fix.go:54] fixHost starting: 
	I0819 19:12:29.133485  438716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:29.133524  438716 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:29.151330  438716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39213
	I0819 19:12:29.151778  438716 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:29.152271  438716 main.go:141] libmachine: Using API Version  1
	I0819 19:12:29.152301  438716 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:29.152682  438716 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:29.152883  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:12:29.153065  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetState
	I0819 19:12:29.154399  438716 fix.go:112] recreateIfNeeded on old-k8s-version-104669: state=Stopped err=<nil>
	I0819 19:12:29.154444  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	W0819 19:12:29.154684  438716 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 19:12:29.156349  438716 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-104669" ...
	I0819 19:12:29.157631  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .Start
	I0819 19:12:29.157825  438716 main.go:141] libmachine: (old-k8s-version-104669) Ensuring networks are active...
	I0819 19:12:29.158635  438716 main.go:141] libmachine: (old-k8s-version-104669) Ensuring network default is active
	I0819 19:12:29.159041  438716 main.go:141] libmachine: (old-k8s-version-104669) Ensuring network mk-old-k8s-version-104669 is active
	I0819 19:12:29.159509  438716 main.go:141] libmachine: (old-k8s-version-104669) Getting domain xml...
	I0819 19:12:29.160383  438716 main.go:141] libmachine: (old-k8s-version-104669) Creating domain...
	I0819 19:12:30.452488  438716 main.go:141] libmachine: (old-k8s-version-104669) Waiting to get IP...
	I0819 19:12:30.453743  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:30.454237  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:30.454323  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:30.454193  439728 retry.go:31] will retry after 197.440033ms: waiting for machine to come up
	I0819 19:12:29.132812  438295 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724094749.105537362
	
	I0819 19:12:29.132839  438295 fix.go:216] guest clock: 1724094749.105537362
	I0819 19:12:29.132850  438295 fix.go:229] Guest: 2024-08-19 19:12:29.105537362 +0000 UTC Remote: 2024-08-19 19:12:29.018848957 +0000 UTC m=+300.015027560 (delta=86.688405ms)
	I0819 19:12:29.132877  438295 fix.go:200] guest clock delta is within tolerance: 86.688405ms
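The guest-clock check above runs `date +%s.%N` inside the VM and compares it with the host clock, accepting the machine when the delta stays within tolerance (86ms here). A small sketch of that comparison, assuming the seconds.nanoseconds string has already been read back over SSH; the one-second tolerance is a placeholder, not minikube's setting:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestDelta parses `date +%s.%N` output and returns how far the guest clock
// is from the local (host) clock. GNU date prints %N as a full 9-digit field,
// so the fractional part can be read directly as nanoseconds.
func guestDelta(out string) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return 0, err
		}
	}
	return time.Since(time.Unix(sec, nsec)), nil
}

func main() {
	d, err := guestDelta("1724094749.105537362\n") // value taken from the log above
	if err != nil {
		panic(err)
	}
	if d < 0 {
		d = -d
	}
	const tolerance = time.Second // placeholder tolerance
	fmt.Printf("guest clock delta %v (within %v: %v)\n", d, tolerance, d <= tolerance)
}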
	I0819 19:12:29.132884  438295 start.go:83] releasing machines lock for "embed-certs-024748", held for 20.396159242s
	I0819 19:12:29.132912  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:29.133179  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetIP
	I0819 19:12:29.136110  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:29.136532  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:29.136565  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:29.136750  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:29.137307  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:29.137532  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:29.137616  438295 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 19:12:29.137690  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:29.137758  438295 ssh_runner.go:195] Run: cat /version.json
	I0819 19:12:29.137781  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:29.140500  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:29.140820  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:29.140870  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:29.140903  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:29.141067  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:29.141266  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:29.141385  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:29.141430  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:29.141443  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:29.141586  438295 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa Username:docker}
	I0819 19:12:29.141639  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:29.141790  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:29.141957  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:29.142123  438295 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa Username:docker}
	I0819 19:12:29.242886  438295 ssh_runner.go:195] Run: systemctl --version
	I0819 19:12:29.249276  438295 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 19:12:29.393872  438295 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 19:12:29.401874  438295 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 19:12:29.401954  438295 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 19:12:29.421973  438295 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 19:12:29.422004  438295 start.go:495] detecting cgroup driver to use...
	I0819 19:12:29.422081  438295 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 19:12:29.442823  438295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 19:12:29.462663  438295 docker.go:217] disabling cri-docker service (if available) ...
	I0819 19:12:29.462720  438295 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 19:12:29.477896  438295 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 19:12:29.492591  438295 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 19:12:29.613759  438295 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 19:12:29.770719  438295 docker.go:233] disabling docker service ...
	I0819 19:12:29.770805  438295 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 19:12:29.785787  438295 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 19:12:29.802879  438295 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 19:12:29.947633  438295 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 19:12:30.082602  438295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 19:12:30.097628  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 19:12:30.118671  438295 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 19:12:30.118735  438295 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:30.131287  438295 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 19:12:30.131354  438295 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:30.143008  438295 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:30.156358  438295 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:30.172123  438295 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 19:12:30.188196  438295 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:30.201487  438295 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:30.219887  438295 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:30.235685  438295 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 19:12:30.246112  438295 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 19:12:30.246202  438295 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 19:12:30.259732  438295 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
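This stretch probes the bridge-netfilter sysctl, falls back to loading br_netfilter when the sysctl path is missing (the status 255 above), and then enables IPv4 forwarding. A hedged sketch of the same three steps driven from Go with os/exec; it mirrors the commands in the log, must run as root, and simplifies error handling:

package main

import (
	"log"
	"os"
	"os/exec"
)

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// If the bridge netfilter sysctl is absent, the module is not loaded yet.
	if err := run("sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		log.Printf("sysctl probe failed (%v), trying modprobe br_netfilter", err)
		if err := run("modprobe", "br_netfilter"); err != nil {
			log.Fatalf("modprobe br_netfilter: %v", err)
		}
	}
	// Same effect as the log's `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
		log.Fatalf("enable ip_forward: %v", err)
	}
}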
	I0819 19:12:30.269866  438295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:12:30.397522  438295 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 19:12:30.545249  438295 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 19:12:30.545349  438295 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 19:12:30.550473  438295 start.go:563] Will wait 60s for crictl version
	I0819 19:12:30.550528  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:12:30.554782  438295 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 19:12:30.597634  438295 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 19:12:30.597736  438295 ssh_runner.go:195] Run: crio --version
	I0819 19:12:30.628137  438295 ssh_runner.go:195] Run: crio --version
	I0819 19:12:30.660912  438295 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 19:12:29.146475  438245 pod_ready.go:103] pod "coredns-6f6b679f8f-dwbnt" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:31.147618  438245 pod_ready.go:93] pod "coredns-6f6b679f8f-dwbnt" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:31.147651  438245 pod_ready.go:82] duration metric: took 9.00827926s for pod "coredns-6f6b679f8f-dwbnt" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.147665  438245 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.153305  438245 pod_ready.go:93] pod "etcd-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:31.153331  438245 pod_ready.go:82] duration metric: took 5.657625ms for pod "etcd-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.153347  438245 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.159009  438245 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:31.159037  438245 pod_ready.go:82] duration metric: took 5.680194ms for pod "kube-apiserver-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.159050  438245 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.165478  438245 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:31.165504  438245 pod_ready.go:82] duration metric: took 6.444529ms for pod "kube-controller-manager-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.165517  438245 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-wrczx" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.180293  438245 pod_ready.go:93] pod "kube-proxy-wrczx" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:31.180324  438245 pod_ready.go:82] duration metric: took 14.798883ms for pod "kube-proxy-wrczx" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.180337  438245 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
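pod_ready.go is polling each control-plane pod in kube-system until its Ready condition turns True, with a 4m budget per pod. A rough client-go equivalent of one such wait is sketched below; the kubeconfig path is a placeholder and the pod name is just the one from this log, not a general helper minikube exposes:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-default-k8s-diff-port-982795", metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			panic("timed out waiting for pod to be Ready")
		case <-time.After(2 * time.Second):
		}
	}
}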
	I0819 19:12:30.662168  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetIP
	I0819 19:12:30.665057  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:30.665455  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:30.665486  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:30.665660  438295 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0819 19:12:30.669911  438295 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:12:30.682755  438295 kubeadm.go:883] updating cluster {Name:embed-certs-024748 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:embed-certs-024748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.96 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 19:12:30.682883  438295 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 19:12:30.682936  438295 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:12:30.724160  438295 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 19:12:30.724233  438295 ssh_runner.go:195] Run: which lz4
	I0819 19:12:30.728710  438295 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 19:12:30.733279  438295 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 19:12:30.733317  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 19:12:32.178568  438295 crio.go:462] duration metric: took 1.449881121s to copy over tarball
	I0819 19:12:32.178642  438295 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 19:12:30.653917  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:30.654521  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:30.654566  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:30.654436  439728 retry.go:31] will retry after 317.038756ms: waiting for machine to come up
	I0819 19:12:30.973003  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:30.973530  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:30.973560  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:30.973487  439728 retry.go:31] will retry after 486.945032ms: waiting for machine to come up
	I0819 19:12:31.461937  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:31.462438  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:31.462470  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:31.462389  439728 retry.go:31] will retry after 441.288745ms: waiting for machine to come up
	I0819 19:12:31.904947  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:31.905564  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:31.905617  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:31.905472  439728 retry.go:31] will retry after 752.583403ms: waiting for machine to come up
	I0819 19:12:32.659642  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:32.660175  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:32.660207  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:32.660128  439728 retry.go:31] will retry after 932.705928ms: waiting for machine to come up
	I0819 19:12:33.594983  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:33.595529  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:33.595556  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:33.595466  439728 retry.go:31] will retry after 936.558157ms: waiting for machine to come up
	I0819 19:12:34.533158  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:34.533717  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:34.533743  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:34.533656  439728 retry.go:31] will retry after 1.435945188s: waiting for machine to come up
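While the embed-certs node provisions, the old-k8s-version VM is being restarted and the driver polls for a DHCP lease, retrying with a growing, jittered delay (197ms, 317ms, 486ms, ... above). A generic sketch of that retry shape follows; lookupIP is a stand-in, not the libmachine call, and the address is a placeholder:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("unable to find current IP address")

// lookupIP is a stand-in for the driver's DHCP-lease lookup.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errNoLease
	}
	return "192.168.61.5", nil // placeholder address
}

func main() {
	delay := 200 * time.Millisecond
	for attempt := 0; attempt < 20; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		// Grow the wait and add jitter, matching the increasing delays retry.go reports.
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
	panic("machine never reported an IP")
}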
	I0819 19:12:33.186835  438245 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:35.187500  438245 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:35.686905  438245 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:35.686932  438245 pod_ready.go:82] duration metric: took 4.50658625s for pod "kube-scheduler-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:35.686945  438245 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:34.321347  438295 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.14267077s)
	I0819 19:12:34.321379  438295 crio.go:469] duration metric: took 2.142777016s to extract the tarball
	I0819 19:12:34.321390  438295 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 19:12:34.357670  438295 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:12:34.403313  438295 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 19:12:34.403344  438295 cache_images.go:84] Images are preloaded, skipping loading
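The preload path shells out to `sudo crictl images --output json` twice: once before extraction (kube-apiserver:v1.31.0 missing, so the tarball is copied and untarred) and once after (all images present, loading skipped). A hedged sketch of that decision is below; it assumes crictl's JSON output has the usual top-level "images" array with "repoTags" entries, which has not been verified against this crictl version:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// imageList mirrors the assumed shape of `crictl images --output json`.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	const want = "registry.k8s.io/kube-apiserver:v1.31.0" // the image the log checks for
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if strings.EqualFold(tag, want) {
				fmt.Println("images are preloaded, skipping loading")
				return
			}
		}
	}
	fmt.Println("assuming images are not preloaded; copy and extract the preload tarball")
}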
	I0819 19:12:34.403358  438295 kubeadm.go:934] updating node { 192.168.72.96 8443 v1.31.0 crio true true} ...
	I0819 19:12:34.403495  438295 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-024748 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.96
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-024748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 19:12:34.403576  438295 ssh_runner.go:195] Run: crio config
	I0819 19:12:34.450415  438295 cni.go:84] Creating CNI manager for ""
	I0819 19:12:34.450443  438295 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:12:34.450461  438295 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 19:12:34.450490  438295 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.96 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-024748 NodeName:embed-certs-024748 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.96"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.96 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 19:12:34.450646  438295 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.96
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-024748"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.96
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.96"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 19:12:34.450723  438295 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 19:12:34.461183  438295 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 19:12:34.461313  438295 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 19:12:34.470516  438295 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0819 19:12:34.488844  438295 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 19:12:34.505450  438295 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
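The kubeadm.yaml written above is a multi-document YAML: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one file. One quick way to sanity-check such a file before handing it to kubeadm is to decode each document and list its apiVersion/kind; a small sketch with gopkg.in/yaml.v3, using the path from the log:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path from the log
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		// Each document should carry apiVersion and kind, e.g. kubeadm.k8s.io/v1beta3 ClusterConfiguration.
		fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
	}
}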
	I0819 19:12:34.522456  438295 ssh_runner.go:195] Run: grep 192.168.72.96	control-plane.minikube.internal$ /etc/hosts
	I0819 19:12:34.526272  438295 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.96	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:12:34.539079  438295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:12:34.665665  438295 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:12:34.683237  438295 certs.go:68] Setting up /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748 for IP: 192.168.72.96
	I0819 19:12:34.683265  438295 certs.go:194] generating shared ca certs ...
	I0819 19:12:34.683287  438295 certs.go:226] acquiring lock for ca certs: {Name:mk639e03f593e0bccac045f6e9f5ba3b96cc81e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:34.683471  438295 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key
	I0819 19:12:34.683536  438295 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key
	I0819 19:12:34.683550  438295 certs.go:256] generating profile certs ...
	I0819 19:12:34.683687  438295 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748/client.key
	I0819 19:12:34.683776  438295 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748/apiserver.key.89193d03
	I0819 19:12:34.683828  438295 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748/proxy-client.key
	I0819 19:12:34.683991  438295 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem (1338 bytes)
	W0819 19:12:34.684035  438295 certs.go:480] ignoring /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009_empty.pem, impossibly tiny 0 bytes
	I0819 19:12:34.684047  438295 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 19:12:34.684074  438295 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem (1082 bytes)
	I0819 19:12:34.684112  438295 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem (1123 bytes)
	I0819 19:12:34.684159  438295 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem (1675 bytes)
	I0819 19:12:34.684224  438295 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:12:34.685127  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 19:12:34.718591  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 19:12:34.758439  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 19:12:34.790143  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 19:12:34.828113  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0819 19:12:34.860389  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 19:12:34.898361  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 19:12:34.924677  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 19:12:34.951630  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /usr/share/ca-certificates/3800092.pem (1708 bytes)
	I0819 19:12:34.977435  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 19:12:35.002048  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem --> /usr/share/ca-certificates/380009.pem (1338 bytes)
	I0819 19:12:35.026934  438295 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 19:12:35.044476  438295 ssh_runner.go:195] Run: openssl version
	I0819 19:12:35.050174  438295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3800092.pem && ln -fs /usr/share/ca-certificates/3800092.pem /etc/ssl/certs/3800092.pem"
	I0819 19:12:35.061299  438295 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3800092.pem
	I0819 19:12:35.065978  438295 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:56 /usr/share/ca-certificates/3800092.pem
	I0819 19:12:35.066047  438295 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3800092.pem
	I0819 19:12:35.072572  438295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3800092.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 19:12:35.083760  438295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 19:12:35.094492  438295 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:35.099152  438295 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:35.099229  438295 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:35.105124  438295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 19:12:35.115950  438295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/380009.pem && ln -fs /usr/share/ca-certificates/380009.pem /etc/ssl/certs/380009.pem"
	I0819 19:12:35.126845  438295 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/380009.pem
	I0819 19:12:35.131568  438295 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:56 /usr/share/ca-certificates/380009.pem
	I0819 19:12:35.131650  438295 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/380009.pem
	I0819 19:12:35.137851  438295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/380009.pem /etc/ssl/certs/51391683.0"
	I0819 19:12:35.148818  438295 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 19:12:35.153800  438295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 19:12:35.159720  438295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 19:12:35.165740  438295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 19:12:35.171705  438295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 19:12:35.177574  438295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 19:12:35.183935  438295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
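The `openssl x509 -noout -in ... -checkend 86400` runs above ask whether each existing control-plane certificate expires within the next 24 hours; only then would the certs be regenerated. The same check can be done in Go by parsing the PEM and comparing NotAfter, as in this sketch (the file path is the first one the log checks and the program needs to run where the certs exist):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Equivalent of `openssl x509 -checkend 86400`: does the cert outlive the next 24h?
	deadline := time.Now().Add(86400 * time.Second)
	if cert.NotAfter.After(deadline) {
		fmt.Println("certificate will not expire within 86400 seconds")
	} else {
		fmt.Println("certificate will expire within 86400 seconds")
	}
}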
	I0819 19:12:35.192681  438295 kubeadm.go:392] StartCluster: {Name:embed-certs-024748 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:embed-certs-024748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.96 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:12:35.192845  438295 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 19:12:35.192908  438295 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:12:35.231688  438295 cri.go:89] found id: ""
	I0819 19:12:35.231791  438295 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 19:12:35.242835  438295 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 19:12:35.242859  438295 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 19:12:35.242944  438295 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 19:12:35.255695  438295 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:12:35.257036  438295 kubeconfig.go:125] found "embed-certs-024748" server: "https://192.168.72.96:8443"
	I0819 19:12:35.259422  438295 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 19:12:35.271730  438295 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.96
	I0819 19:12:35.271758  438295 kubeadm.go:1160] stopping kube-system containers ...
	I0819 19:12:35.271772  438295 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 19:12:35.271820  438295 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:12:35.321065  438295 cri.go:89] found id: ""
	I0819 19:12:35.321155  438295 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 19:12:35.337802  438295 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:12:35.347699  438295 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:12:35.347726  438295 kubeadm.go:157] found existing configuration files:
	
	I0819 19:12:35.347785  438295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 19:12:35.357108  438295 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:12:35.357178  438295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:12:35.366805  438295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 19:12:35.376864  438295 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:12:35.376938  438295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:12:35.387018  438295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 19:12:35.396966  438295 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:12:35.397045  438295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:12:35.406192  438295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 19:12:35.415325  438295 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:12:35.415401  438295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 19:12:35.424450  438295 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 19:12:35.433931  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:35.549294  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:36.306930  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:36.517086  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:36.587680  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:36.680728  438295 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:12:36.680825  438295 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:37.181054  438295 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:37.681059  438295 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:38.181588  438295 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:38.197155  438295 api_server.go:72] duration metric: took 1.516436456s to wait for apiserver process to appear ...
	I0819 19:12:38.197184  438295 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:12:38.197212  438295 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0819 19:12:35.971138  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:35.971576  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:35.971607  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:35.971514  439728 retry.go:31] will retry after 1.521077744s: waiting for machine to come up
	I0819 19:12:37.493931  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:37.494389  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:37.494415  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:37.494361  439728 retry.go:31] will retry after 1.632508579s: waiting for machine to come up
	I0819 19:12:39.128939  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:39.129429  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:39.129456  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:39.129392  439728 retry.go:31] will retry after 2.634061376s: waiting for machine to come up
	I0819 19:12:40.567608  438295 api_server.go:279] https://192.168.72.96:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 19:12:40.567654  438295 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 19:12:40.567669  438295 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0819 19:12:40.593405  438295 api_server.go:279] https://192.168.72.96:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 19:12:40.593456  438295 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 19:12:40.697607  438295 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0819 19:12:40.713767  438295 api_server.go:279] https://192.168.72.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:12:40.713806  438295 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:12:41.197299  438295 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0819 19:12:41.203307  438295 api_server.go:279] https://192.168.72.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:12:41.203338  438295 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:12:41.697903  438295 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0819 19:12:41.705142  438295 api_server.go:279] https://192.168.72.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:12:41.705174  438295 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:12:42.197361  438295 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0819 19:12:42.202272  438295 api_server.go:279] https://192.168.72.96:8443/healthz returned 200:
	ok
	I0819 19:12:42.209788  438295 api_server.go:141] control plane version: v1.31.0
	I0819 19:12:42.209819  438295 api_server.go:131] duration metric: took 4.012627755s to wait for apiserver health ...
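The healthz wait logged above amounts to repeatedly GETting https://192.168.72.96:8443/healthz and treating a 403 (anonymous user) or a 500 ("healthz check failed" while post-start hooks finish) the same as "not ready yet", until the endpoint returns 200 ok. A minimal, self-contained Go sketch of that polling loop follows; it is not minikube's api_server.go implementation, and the URL, timeout, and TLS handling are illustrative assumptions only.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns 200 or
// the deadline passes. 403 and 500 responses, like the ones in the log above,
// are simply retried.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// During bring-up the probe does not yet trust the apiserver cert, so
		// this sketch skips verification; minikube itself uses the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.96:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}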
	I0819 19:12:42.209829  438295 cni.go:84] Creating CNI manager for ""
	I0819 19:12:42.209836  438295 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:12:42.211612  438295 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 19:12:37.693171  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:39.693397  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:41.693523  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:42.212889  438295 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 19:12:42.223277  438295 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 19:12:42.242392  438295 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:12:42.256273  438295 system_pods.go:59] 8 kube-system pods found
	I0819 19:12:42.256321  438295 system_pods.go:61] "coredns-6f6b679f8f-7ww4z" [bbde00d4-6027-4d8d-b51e-bd68915da166] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 19:12:42.256331  438295 system_pods.go:61] "etcd-embed-certs-024748" [846ff0f0-5399-43fd-8e7b-1f64997cd291] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 19:12:42.256348  438295 system_pods.go:61] "kube-apiserver-embed-certs-024748" [3ff558d6-e82e-47a0-bb81-15244bee6470] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 19:12:42.256366  438295 system_pods.go:61] "kube-controller-manager-embed-certs-024748" [993b82ba-e8e7-4896-a06b-87c4f08d5985] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 19:12:42.256383  438295 system_pods.go:61] "kube-proxy-bmmbh" [1f77f152-f5f4-40f6-9632-1eaa36b9ea31] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0819 19:12:42.256393  438295 system_pods.go:61] "kube-scheduler-embed-certs-024748" [34684d4c-2479-45c5-883b-158cf9f974f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 19:12:42.256403  438295 system_pods.go:61] "metrics-server-6867b74b74-kxcwh" [15f86629-d916-4fdc-9ecf-9cb1b6c83f85] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:12:42.256409  438295 system_pods.go:61] "storage-provisioner" [7acb6ce1-21b6-4cdd-a5cb-76d694fc0a38] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0819 19:12:42.256418  438295 system_pods.go:74] duration metric: took 14.004598ms to wait for pod list to return data ...
	I0819 19:12:42.256428  438295 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:12:42.263308  438295 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:12:42.263340  438295 node_conditions.go:123] node cpu capacity is 2
	I0819 19:12:42.263354  438295 node_conditions.go:105] duration metric: took 6.920993ms to run NodePressure ...
	I0819 19:12:42.263376  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:42.533917  438295 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 19:12:42.545853  438295 kubeadm.go:739] kubelet initialised
	I0819 19:12:42.545886  438295 kubeadm.go:740] duration metric: took 11.931664ms waiting for restarted kubelet to initialise ...
	I0819 19:12:42.545899  438295 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:12:42.553125  438295 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-7ww4z" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:42.559120  438295 pod_ready.go:98] node "embed-certs-024748" hosting pod "coredns-6f6b679f8f-7ww4z" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:42.559148  438295 pod_ready.go:82] duration metric: took 5.984169ms for pod "coredns-6f6b679f8f-7ww4z" in "kube-system" namespace to be "Ready" ...
	E0819 19:12:42.559158  438295 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-024748" hosting pod "coredns-6f6b679f8f-7ww4z" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:42.559164  438295 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:42.564830  438295 pod_ready.go:98] node "embed-certs-024748" hosting pod "etcd-embed-certs-024748" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:42.564852  438295 pod_ready.go:82] duration metric: took 5.681326ms for pod "etcd-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	E0819 19:12:42.564860  438295 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-024748" hosting pod "etcd-embed-certs-024748" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:42.564867  438295 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:42.571982  438295 pod_ready.go:98] node "embed-certs-024748" hosting pod "kube-apiserver-embed-certs-024748" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:42.572027  438295 pod_ready.go:82] duration metric: took 7.150945ms for pod "kube-apiserver-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	E0819 19:12:42.572038  438295 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-024748" hosting pod "kube-apiserver-embed-certs-024748" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:42.572045  438295 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:42.648692  438295 pod_ready.go:98] node "embed-certs-024748" hosting pod "kube-controller-manager-embed-certs-024748" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:42.648721  438295 pod_ready.go:82] duration metric: took 76.665633ms for pod "kube-controller-manager-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	E0819 19:12:42.648730  438295 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-024748" hosting pod "kube-controller-manager-embed-certs-024748" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:42.648737  438295 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-bmmbh" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:43.045619  438295 pod_ready.go:98] node "embed-certs-024748" hosting pod "kube-proxy-bmmbh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:43.045648  438295 pod_ready.go:82] duration metric: took 396.90414ms for pod "kube-proxy-bmmbh" in "kube-system" namespace to be "Ready" ...
	E0819 19:12:43.045658  438295 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-024748" hosting pod "kube-proxy-bmmbh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:43.045665  438295 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:43.446302  438295 pod_ready.go:98] node "embed-certs-024748" hosting pod "kube-scheduler-embed-certs-024748" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:43.446331  438295 pod_ready.go:82] duration metric: took 400.658861ms for pod "kube-scheduler-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	E0819 19:12:43.446342  438295 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-024748" hosting pod "kube-scheduler-embed-certs-024748" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:43.446359  438295 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:43.845457  438295 pod_ready.go:98] node "embed-certs-024748" hosting pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:43.845488  438295 pod_ready.go:82] duration metric: took 399.120328ms for pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace to be "Ready" ...
	E0819 19:12:43.845499  438295 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-024748" hosting pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:43.845506  438295 pod_ready.go:39] duration metric: took 1.299593775s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
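The pod_ready checks above key off each pod's PodReady condition and skip the wait while the node itself still reports Ready=False. A rough client-go equivalent of the per-pod check is sketched below; the kubeconfig path and pod name are lifted from the log purely as placeholders, and this is a sketch rather than the harness's actual pod_ready.go code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True, the same
// signal the pod_ready waits in the log use.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path and pod name are placeholders taken from this log run.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19468-372744/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	for i := 0; i < 120; i++ {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-6f6b679f8f-7ww4z", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for the pod to become Ready")
}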
	I0819 19:12:43.845526  438295 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 19:12:43.864357  438295 ops.go:34] apiserver oom_adj: -16
	I0819 19:12:43.864384  438295 kubeadm.go:597] duration metric: took 8.621518076s to restartPrimaryControlPlane
	I0819 19:12:43.864394  438295 kubeadm.go:394] duration metric: took 8.671725617s to StartCluster
	I0819 19:12:43.864414  438295 settings.go:142] acquiring lock: {Name:mk396fcf49a1d0e69583cf37ff3c819e37118163 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:43.864495  438295 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 19:12:43.866775  438295 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/kubeconfig: {Name:mk8e7b4e1bb7da665111d2acd83eb48882c66853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:43.867073  438295 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.96 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 19:12:43.867296  438295 config.go:182] Loaded profile config "embed-certs-024748": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:12:43.867195  438295 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 19:12:43.867354  438295 addons.go:69] Setting metrics-server=true in profile "embed-certs-024748"
	I0819 19:12:43.867362  438295 addons.go:69] Setting default-storageclass=true in profile "embed-certs-024748"
	I0819 19:12:43.867397  438295 addons.go:234] Setting addon metrics-server=true in "embed-certs-024748"
	I0819 19:12:43.867402  438295 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-024748"
	W0819 19:12:43.867409  438295 addons.go:243] addon metrics-server should already be in state true
	I0819 19:12:43.867437  438295 host.go:66] Checking if "embed-certs-024748" exists ...
	I0819 19:12:43.867354  438295 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-024748"
	I0819 19:12:43.867502  438295 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-024748"
	W0819 19:12:43.867514  438295 addons.go:243] addon storage-provisioner should already be in state true
	I0819 19:12:43.867538  438295 host.go:66] Checking if "embed-certs-024748" exists ...
	I0819 19:12:43.867761  438295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:43.867796  438295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:43.867839  438295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:43.867873  438295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:43.867889  438295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:43.867908  438295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:43.869989  438295 out.go:177] * Verifying Kubernetes components...
	I0819 19:12:43.871464  438295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:12:43.883655  438295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33557
	I0819 19:12:43.883871  438295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33763
	I0819 19:12:43.884279  438295 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:43.884323  438295 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:43.884790  438295 main.go:141] libmachine: Using API Version  1
	I0819 19:12:43.884809  438295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:43.884935  438295 main.go:141] libmachine: Using API Version  1
	I0819 19:12:43.884953  438295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:43.885204  438295 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:43.885275  438295 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:43.885380  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetState
	I0819 19:12:43.885886  438295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:43.885928  438295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:43.886840  438295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40467
	I0819 19:12:43.887309  438295 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:43.887792  438295 main.go:141] libmachine: Using API Version  1
	I0819 19:12:43.887802  438295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:43.888109  438295 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:43.888670  438295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:43.888697  438295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:43.888973  438295 addons.go:234] Setting addon default-storageclass=true in "embed-certs-024748"
	W0819 19:12:43.888988  438295 addons.go:243] addon default-storageclass should already be in state true
	I0819 19:12:43.889020  438295 host.go:66] Checking if "embed-certs-024748" exists ...
	I0819 19:12:43.889270  438295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:43.889304  438295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:43.905278  438295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40907
	I0819 19:12:43.905278  438295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41133
	I0819 19:12:43.905734  438295 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:43.905877  438295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33393
	I0819 19:12:43.905983  438295 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:43.906299  438295 main.go:141] libmachine: Using API Version  1
	I0819 19:12:43.906320  438295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:43.906366  438295 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:43.906443  438295 main.go:141] libmachine: Using API Version  1
	I0819 19:12:43.906457  438295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:43.906822  438295 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:43.906898  438295 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:43.906995  438295 main.go:141] libmachine: Using API Version  1
	I0819 19:12:43.907006  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetState
	I0819 19:12:43.907012  438295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:43.907371  438295 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:43.907473  438295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:43.907523  438295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:43.907534  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetState
	I0819 19:12:43.909443  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:43.909529  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:43.911431  438295 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 19:12:43.911437  438295 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:12:43.913061  438295 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 19:12:43.913090  438295 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 19:12:43.913115  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:43.913180  438295 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:12:43.913199  438295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 19:12:43.913216  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:43.916642  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:43.916813  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:43.917110  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:43.917135  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:43.917166  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:43.917193  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:43.917463  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:43.917668  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:43.917671  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:43.917846  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:43.917867  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:43.918014  438295 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa Username:docker}
	I0819 19:12:43.918032  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:43.918148  438295 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa Username:docker}
	I0819 19:12:43.926337  438295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46687
	I0819 19:12:43.926813  438295 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:43.927333  438295 main.go:141] libmachine: Using API Version  1
	I0819 19:12:43.927354  438295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:43.927762  438295 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:43.927965  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetState
	I0819 19:12:43.929591  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:43.929910  438295 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 19:12:43.929926  438295 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 19:12:43.929942  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:43.933032  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:43.933387  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:43.933406  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:43.933626  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:43.933850  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:43.933992  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:43.934118  438295 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa Username:docker}
	I0819 19:12:44.078901  438295 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:12:44.098542  438295 node_ready.go:35] waiting up to 6m0s for node "embed-certs-024748" to be "Ready" ...
	I0819 19:12:44.180050  438295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:12:44.196186  438295 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 19:12:44.196210  438295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 19:12:44.220001  438295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 19:12:44.231145  438295 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 19:12:44.231180  438295 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 19:12:44.267800  438295 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 19:12:44.267831  438295 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 19:12:44.323078  438295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 19:12:45.276298  438295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.096199779s)
	I0819 19:12:45.276336  438295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.056298773s)
	I0819 19:12:45.276383  438295 main.go:141] libmachine: Making call to close driver server
	I0819 19:12:45.276395  438295 main.go:141] libmachine: (embed-certs-024748) Calling .Close
	I0819 19:12:45.276385  438295 main.go:141] libmachine: Making call to close driver server
	I0819 19:12:45.276462  438295 main.go:141] libmachine: (embed-certs-024748) Calling .Close
	I0819 19:12:45.276714  438295 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:12:45.276757  438295 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:12:45.276777  438295 main.go:141] libmachine: Making call to close driver server
	I0819 19:12:45.276793  438295 main.go:141] libmachine: (embed-certs-024748) Calling .Close
	I0819 19:12:45.276860  438295 main.go:141] libmachine: (embed-certs-024748) DBG | Closing plugin on server side
	I0819 19:12:45.276874  438295 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:12:45.276940  438295 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:12:45.276956  438295 main.go:141] libmachine: Making call to close driver server
	I0819 19:12:45.276964  438295 main.go:141] libmachine: (embed-certs-024748) Calling .Close
	I0819 19:12:45.277134  438295 main.go:141] libmachine: (embed-certs-024748) DBG | Closing plugin on server side
	I0819 19:12:45.277195  438295 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:12:45.277239  438295 main.go:141] libmachine: (embed-certs-024748) DBG | Closing plugin on server side
	I0819 19:12:45.277258  438295 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:12:45.277277  438295 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:12:45.277304  438295 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:12:45.284982  438295 main.go:141] libmachine: Making call to close driver server
	I0819 19:12:45.285007  438295 main.go:141] libmachine: (embed-certs-024748) Calling .Close
	I0819 19:12:45.285304  438295 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:12:45.285324  438295 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:12:45.293973  438295 main.go:141] libmachine: Making call to close driver server
	I0819 19:12:45.293994  438295 main.go:141] libmachine: (embed-certs-024748) Calling .Close
	I0819 19:12:45.294247  438295 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:12:45.294265  438295 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:12:45.294274  438295 main.go:141] libmachine: Making call to close driver server
	I0819 19:12:45.294282  438295 main.go:141] libmachine: (embed-certs-024748) Calling .Close
	I0819 19:12:45.295704  438295 main.go:141] libmachine: (embed-certs-024748) DBG | Closing plugin on server side
	I0819 19:12:45.295787  438295 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:12:45.295813  438295 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:12:45.295828  438295 addons.go:475] Verifying addon metrics-server=true in "embed-certs-024748"
	I0819 19:12:45.297684  438295 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0819 19:12:41.765706  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:41.766129  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:41.766182  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:41.766093  439728 retry.go:31] will retry after 3.464758587s: waiting for machine to come up
	I0819 19:12:45.232640  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:45.233118  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:45.233151  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:45.233066  439728 retry.go:31] will retry after 3.551527195s: waiting for machine to come up
	I0819 19:12:43.694387  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:46.194627  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:45.298844  438295 addons.go:510] duration metric: took 1.431699078s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
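Enabling the addons, as logged above, comes down to copying the manifests to /etc/kubernetes/addons on the guest and applying them with kubectl against the in-VM kubeconfig. A hedged Go sketch of that final apply step follows; the paths are the ones from the log, and in practice minikube issues this through its ssh_runner on the guest rather than locally.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Manifest paths mirror the ssh_runner command in the log; on the guest the
	// kubectl binary lives under /var/lib/minikube/binaries/<version>.
	cmd := exec.Command("kubectl", "apply",
		"-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-service.yaml",
	)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}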
	I0819 19:12:46.103096  438295 node_ready.go:53] node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:48.603205  438295 node_ready.go:53] node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:50.084809  438001 start.go:364] duration metric: took 55.89796214s to acquireMachinesLock for "no-preload-278232"
	I0819 19:12:50.084884  438001 start.go:96] Skipping create...Using existing machine configuration
	I0819 19:12:50.084895  438001 fix.go:54] fixHost starting: 
	I0819 19:12:50.085416  438001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:50.085459  438001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:50.103796  438001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41569
	I0819 19:12:50.104278  438001 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:50.104900  438001 main.go:141] libmachine: Using API Version  1
	I0819 19:12:50.104934  438001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:50.105335  438001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:50.105544  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:12:50.105703  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetState
	I0819 19:12:50.107422  438001 fix.go:112] recreateIfNeeded on no-preload-278232: state=Stopped err=<nil>
	I0819 19:12:50.107444  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	W0819 19:12:50.107602  438001 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 19:12:50.109328  438001 out.go:177] * Restarting existing kvm2 VM for "no-preload-278232" ...
	I0819 19:12:48.787197  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.787586  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has current primary IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.787611  438716 main.go:141] libmachine: (old-k8s-version-104669) Found IP for machine: 192.168.50.32
	I0819 19:12:48.787625  438716 main.go:141] libmachine: (old-k8s-version-104669) Reserving static IP address...
	I0819 19:12:48.788104  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "old-k8s-version-104669", mac: "52:54:00:8c:ff:a3", ip: "192.168.50.32"} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:48.788140  438716 main.go:141] libmachine: (old-k8s-version-104669) Reserved static IP address: 192.168.50.32
	I0819 19:12:48.788164  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | skip adding static IP to network mk-old-k8s-version-104669 - found existing host DHCP lease matching {name: "old-k8s-version-104669", mac: "52:54:00:8c:ff:a3", ip: "192.168.50.32"}
	I0819 19:12:48.788186  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | Getting to WaitForSSH function...
	I0819 19:12:48.788202  438716 main.go:141] libmachine: (old-k8s-version-104669) Waiting for SSH to be available...
	I0819 19:12:48.790365  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.790765  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:48.790793  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.790994  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | Using SSH client type: external
	I0819 19:12:48.791034  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | Using SSH private key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa (-rw-------)
	I0819 19:12:48.791073  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 19:12:48.791087  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | About to run SSH command:
	I0819 19:12:48.791103  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | exit 0
	I0819 19:12:48.920087  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | SSH cmd err, output: <nil>: 
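The WaitForSSH probe above shells out to the system ssh client with host-key checking disabled and the machine's private key, running `exit 0` until the guest answers. A standalone sketch of that external-ssh invocation is below; the address and key path are copied from this log run as placeholders, and the helper name is purely illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runOverSSH shells out to /usr/bin/ssh the way the DBG lines above show
// libmachine doing it: no known-hosts checking, an explicit identity file,
// and a single remote command such as "exit 0" or "hostname".
func runOverSSH(addr, keyPath, command string) (string, error) {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + addr,
		command,
	}
	out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	out, err := runOverSSH("192.168.50.32",
		"/home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa",
		"exit 0")
	fmt.Println(out, err)
}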
	I0819 19:12:48.920464  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetConfigRaw
	I0819 19:12:48.921105  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetIP
	I0819 19:12:48.923637  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.924022  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:48.924053  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.924242  438716 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/config.json ...
	I0819 19:12:48.924429  438716 machine.go:93] provisionDockerMachine start ...
	I0819 19:12:48.924447  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:12:48.924655  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:48.926885  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.927345  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:48.927376  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.927527  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:48.927723  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:48.927846  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:48.927968  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:48.928241  438716 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:48.928453  438716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0819 19:12:48.928475  438716 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 19:12:49.039908  438716 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 19:12:49.039944  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetMachineName
	I0819 19:12:49.040200  438716 buildroot.go:166] provisioning hostname "old-k8s-version-104669"
	I0819 19:12:49.040236  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetMachineName
	I0819 19:12:49.040454  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:49.043462  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.043860  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.043892  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.044061  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:49.044256  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.044472  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.044613  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:49.044837  438716 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:49.045014  438716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0819 19:12:49.045027  438716 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-104669 && echo "old-k8s-version-104669" | sudo tee /etc/hostname
	I0819 19:12:49.170660  438716 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-104669
	
	I0819 19:12:49.170695  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:49.173564  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.173855  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.173882  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.174059  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:49.174239  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.174432  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.174564  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:49.174732  438716 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:49.174923  438716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0819 19:12:49.174941  438716 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-104669' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-104669/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-104669' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 19:12:49.298689  438716 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:12:49.298731  438716 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19468-372744/.minikube CaCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19468-372744/.minikube}
	I0819 19:12:49.298764  438716 buildroot.go:174] setting up certificates
	I0819 19:12:49.298778  438716 provision.go:84] configureAuth start
	I0819 19:12:49.298793  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetMachineName
	I0819 19:12:49.299157  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetIP
	I0819 19:12:49.301897  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.302290  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.302326  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.302462  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:49.304592  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.304960  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.304987  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.305150  438716 provision.go:143] copyHostCerts
	I0819 19:12:49.305219  438716 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem, removing ...
	I0819 19:12:49.305243  438716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 19:12:49.305310  438716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem (1082 bytes)
	I0819 19:12:49.305437  438716 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem, removing ...
	I0819 19:12:49.305449  438716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 19:12:49.305477  438716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem (1123 bytes)
	I0819 19:12:49.305571  438716 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem, removing ...
	I0819 19:12:49.305583  438716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 19:12:49.305612  438716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem (1675 bytes)
	I0819 19:12:49.305699  438716 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-104669 san=[127.0.0.1 192.168.50.32 localhost minikube old-k8s-version-104669]
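	The line above reports generation of a CA-signed server certificate whose SANs cover 127.0.0.1, the VM address 192.168.50.32, and the machine names. As a rough sketch only (not minikube's provisioner code), the following Go program produces a comparable certificate with crypto/x509; it assumes an RSA PKCS#1 CA key in ca-key.pem and a one-year validity, and omits error handling for brevity.

	package main

	import (
	    "crypto/rand"
	    "crypto/rsa"
	    "crypto/x509"
	    "crypto/x509/pkix"
	    "encoding/pem"
	    "math/big"
	    "net"
	    "os"
	    "time"
	)

	func main() {
	    // Load the existing CA certificate and key (ca.pem / ca-key.pem in the log).
	    caPEM, _ := os.ReadFile("ca.pem")
	    caKeyPEM, _ := os.ReadFile("ca-key.pem")
	    caBlock, _ := pem.Decode(caPEM)
	    caCert, _ := x509.ParseCertificate(caBlock.Bytes)
	    keyBlock, _ := pem.Decode(caKeyPEM)
	    caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 key

	    // Server key and a template carrying the SANs listed in the log line.
	    serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	    tmpl := &x509.Certificate{
	        SerialNumber: big.NewInt(time.Now().UnixNano()),
	        Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-104669"}},
	        DNSNames:     []string{"localhost", "minikube", "old-k8s-version-104669"},
	        IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.32")},
	        NotBefore:    time.Now(),
	        NotAfter:     time.Now().AddDate(1, 0, 0), // 1-year validity is an assumption
	        KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	    }

	    // Sign the server certificate with the CA and write the PEM files.
	    der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	    os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
	    os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0600)
	}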
	I0819 19:12:49.394004  438716 provision.go:177] copyRemoteCerts
	I0819 19:12:49.394074  438716 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 19:12:49.394112  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:49.396645  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.396906  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.396951  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.397108  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:49.397321  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.397504  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:49.397709  438716 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa Username:docker}
	I0819 19:12:49.483061  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 19:12:49.508297  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 19:12:49.533821  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0819 19:12:49.560064  438716 provision.go:87] duration metric: took 261.270909ms to configureAuth
	I0819 19:12:49.560093  438716 buildroot.go:189] setting minikube options for container-runtime
	I0819 19:12:49.560310  438716 config.go:182] Loaded profile config "old-k8s-version-104669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0819 19:12:49.560409  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:49.563173  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.563604  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.563633  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.563882  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:49.564075  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.564274  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.564479  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:49.564707  438716 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:49.564925  438716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0819 19:12:49.564948  438716 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 19:12:49.837237  438716 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 19:12:49.837267  438716 machine.go:96] duration metric: took 912.825625ms to provisionDockerMachine
	I0819 19:12:49.837281  438716 start.go:293] postStartSetup for "old-k8s-version-104669" (driver="kvm2")
	I0819 19:12:49.837297  438716 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 19:12:49.837341  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:12:49.837716  438716 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 19:12:49.837757  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:49.840409  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.840759  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.840789  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.840988  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:49.841183  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.841345  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:49.841473  438716 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa Username:docker}
	I0819 19:12:49.931067  438716 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 19:12:49.935562  438716 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 19:12:49.935590  438716 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/addons for local assets ...
	I0819 19:12:49.935694  438716 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/files for local assets ...
	I0819 19:12:49.935815  438716 filesync.go:149] local asset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> 3800092.pem in /etc/ssl/certs
	I0819 19:12:49.935941  438716 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 19:12:49.945418  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:12:49.969454  438716 start.go:296] duration metric: took 132.15677ms for postStartSetup
	I0819 19:12:49.969494  438716 fix.go:56] duration metric: took 20.836438665s for fixHost
	I0819 19:12:49.969517  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:49.972127  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.972502  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.972542  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.972758  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:49.973000  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.973190  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.973355  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:49.973548  438716 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:49.973753  438716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0819 19:12:49.973766  438716 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 19:12:50.084645  438716 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724094770.056929881
	
	I0819 19:12:50.084672  438716 fix.go:216] guest clock: 1724094770.056929881
	I0819 19:12:50.084681  438716 fix.go:229] Guest: 2024-08-19 19:12:50.056929881 +0000 UTC Remote: 2024-08-19 19:12:49.969497734 +0000 UTC m=+259.472837552 (delta=87.432147ms)
	I0819 19:12:50.084711  438716 fix.go:200] guest clock delta is within tolerance: 87.432147ms
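	The three fix.go lines above compare the guest's clock (read via "date +%s.%N" over SSH) with the host's and accept the roughly 87ms drift as within tolerance. A minimal Go sketch of that comparison, assuming a 1-second tolerance since the actual threshold is not printed in the log:

	package main

	import (
	    "fmt"
	    "time"
	)

	// clockDeltaWithinTolerance reports the absolute guest/host clock drift
	// and whether it falls inside the allowed tolerance.
	func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	    delta := guest.Sub(host)
	    if delta < 0 {
	        delta = -delta
	    }
	    return delta, delta <= tolerance
	}

	func main() {
	    host := time.Now()
	    guest := host.Add(87 * time.Millisecond) // drift comparable to the 87.432147ms in the log
	    delta, ok := clockDeltaWithinTolerance(guest, host, time.Second) // 1s tolerance is an assumption
	    fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, ok)
	}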
	I0819 19:12:50.084718  438716 start.go:83] releasing machines lock for "old-k8s-version-104669", held for 20.951701853s
	I0819 19:12:50.084752  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:12:50.085050  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetIP
	I0819 19:12:50.087976  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:50.088363  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:50.088391  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:50.088572  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:12:50.089141  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:12:50.089360  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:12:50.089460  438716 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 19:12:50.089526  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:50.089572  438716 ssh_runner.go:195] Run: cat /version.json
	I0819 19:12:50.089599  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:50.092427  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:50.092591  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:50.092772  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:50.092797  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:50.092933  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:50.092965  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:50.092965  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:50.093147  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:50.093248  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:50.093328  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:50.093409  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:50.093503  438716 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa Username:docker}
	I0819 19:12:50.093532  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:50.093650  438716 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa Username:docker}
	I0819 19:12:50.177322  438716 ssh_runner.go:195] Run: systemctl --version
	I0819 19:12:50.200999  438716 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 19:12:50.349276  438716 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 19:12:50.357011  438716 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 19:12:50.357090  438716 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 19:12:50.377691  438716 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 19:12:50.377721  438716 start.go:495] detecting cgroup driver to use...
	I0819 19:12:50.377790  438716 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 19:12:50.394502  438716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 19:12:50.408481  438716 docker.go:217] disabling cri-docker service (if available) ...
	I0819 19:12:50.408556  438716 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 19:12:50.421818  438716 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 19:12:50.434899  438716 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 19:12:50.559399  438716 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 19:12:50.708621  438716 docker.go:233] disabling docker service ...
	I0819 19:12:50.708695  438716 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 19:12:50.726699  438716 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 19:12:50.740605  438716 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 19:12:50.896815  438716 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 19:12:51.037560  438716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 19:12:51.052554  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 19:12:51.072292  438716 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0819 19:12:51.072360  438716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:51.083248  438716 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 19:12:51.083334  438716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:51.093721  438716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:51.105212  438716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:51.119349  438716 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 19:12:51.134647  438716 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 19:12:51.144553  438716 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 19:12:51.144598  438716 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 19:12:51.159151  438716 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
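	The commands above form a fallback sequence: probe the bridge-netfilter sysctl, load the br_netfilter module when the key is missing, then enable IPv4 forwarding. A minimal Go sketch of the same ordering, to be run as root on the guest (illustrative only, not the minikube implementation; error handling is simplified):

	package main

	import (
	    "log"
	    "os"
	    "os/exec"
	)

	// run executes a command and streams its output, mirroring the ssh_runner steps above.
	func run(name string, args ...string) error {
	    cmd := exec.Command(name, args...)
	    cmd.Stdout = os.Stdout
	    cmd.Stderr = os.Stderr
	    return cmd.Run()
	}

	func main() {
	    // If the sysctl key is absent, the bridge netfilter module is not loaded yet.
	    if err := run("sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
	        log.Printf("sysctl probe failed (%v), loading br_netfilter", err)
	        if err := run("modprobe", "br_netfilter"); err != nil {
	            log.Fatalf("modprobe br_netfilter: %v", err)
	        }
	    }
	    // Enable IPv4 forwarding, as the provisioning step above does with "echo 1 > ...".
	    if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
	        log.Fatalf("enable ip_forward: %v", err)
	    }
	}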
	I0819 19:12:51.171260  438716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:12:51.328931  438716 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 19:12:51.500761  438716 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 19:12:51.500831  438716 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 19:12:51.505982  438716 start.go:563] Will wait 60s for crictl version
	I0819 19:12:51.506057  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:51.510447  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 19:12:51.552892  438716 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 19:12:51.552982  438716 ssh_runner.go:195] Run: crio --version
	I0819 19:12:51.581931  438716 ssh_runner.go:195] Run: crio --version
	I0819 19:12:51.614565  438716 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0819 19:12:50.110718  438001 main.go:141] libmachine: (no-preload-278232) Calling .Start
	I0819 19:12:50.110888  438001 main.go:141] libmachine: (no-preload-278232) Ensuring networks are active...
	I0819 19:12:50.111809  438001 main.go:141] libmachine: (no-preload-278232) Ensuring network default is active
	I0819 19:12:50.112149  438001 main.go:141] libmachine: (no-preload-278232) Ensuring network mk-no-preload-278232 is active
	I0819 19:12:50.112709  438001 main.go:141] libmachine: (no-preload-278232) Getting domain xml...
	I0819 19:12:50.113441  438001 main.go:141] libmachine: (no-preload-278232) Creating domain...
	I0819 19:12:51.494803  438001 main.go:141] libmachine: (no-preload-278232) Waiting to get IP...
	I0819 19:12:51.495733  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:51.496203  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:51.496302  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:51.496187  439925 retry.go:31] will retry after 190.334257ms: waiting for machine to come up
	I0819 19:12:48.694017  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:50.694533  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:51.102764  438295 node_ready.go:49] node "embed-certs-024748" has status "Ready":"True"
	I0819 19:12:51.102791  438295 node_ready.go:38] duration metric: took 7.004204889s for node "embed-certs-024748" to be "Ready" ...
	I0819 19:12:51.102814  438295 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:12:51.109122  438295 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-7ww4z" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.114649  438295 pod_ready.go:93] pod "coredns-6f6b679f8f-7ww4z" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:51.114679  438295 pod_ready.go:82] duration metric: took 5.529339ms for pod "coredns-6f6b679f8f-7ww4z" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.114692  438295 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.121699  438295 pod_ready.go:93] pod "etcd-embed-certs-024748" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:51.121729  438295 pod_ready.go:82] duration metric: took 7.027906ms for pod "etcd-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.121742  438295 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.129040  438295 pod_ready.go:93] pod "kube-apiserver-embed-certs-024748" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:51.129066  438295 pod_ready.go:82] duration metric: took 7.315166ms for pod "kube-apiserver-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.129078  438295 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.636173  438295 pod_ready.go:93] pod "kube-controller-manager-embed-certs-024748" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:51.636226  438295 pod_ready.go:82] duration metric: took 507.130455ms for pod "kube-controller-manager-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.636243  438295 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bmmbh" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.904734  438295 pod_ready.go:93] pod "kube-proxy-bmmbh" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:51.904776  438295 pod_ready.go:82] duration metric: took 268.522999ms for pod "kube-proxy-bmmbh" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.904806  438295 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:53.911857  438295 pod_ready.go:103] pod "kube-scheduler-embed-certs-024748" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:51.615865  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetIP
	I0819 19:12:51.618782  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:51.619238  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:51.619268  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:51.619508  438716 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0819 19:12:51.624020  438716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:12:51.640765  438716 kubeadm.go:883] updating cluster {Name:old-k8s-version-104669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-104669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 19:12:51.640905  438716 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 19:12:51.640982  438716 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:12:51.696872  438716 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 19:12:51.696931  438716 ssh_runner.go:195] Run: which lz4
	I0819 19:12:51.702194  438716 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 19:12:51.707228  438716 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 19:12:51.707265  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0819 19:12:53.435062  438716 crio.go:462] duration metric: took 1.732918912s to copy over tarball
	I0819 19:12:53.435149  438716 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 19:12:51.688680  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:51.689287  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:51.689326  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:51.689222  439925 retry.go:31] will retry after 351.943478ms: waiting for machine to come up
	I0819 19:12:52.042810  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:52.043142  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:52.043163  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:52.043070  439925 retry.go:31] will retry after 332.731922ms: waiting for machine to come up
	I0819 19:12:52.377750  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:52.378418  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:52.378442  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:52.378377  439925 retry.go:31] will retry after 601.079013ms: waiting for machine to come up
	I0819 19:12:52.980930  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:52.981446  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:52.981474  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:52.981396  439925 retry.go:31] will retry after 621.686612ms: waiting for machine to come up
	I0819 19:12:53.605240  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:53.605716  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:53.605751  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:53.605666  439925 retry.go:31] will retry after 627.115747ms: waiting for machine to come up
	I0819 19:12:54.234095  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:54.234590  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:54.234613  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:54.234541  439925 retry.go:31] will retry after 1.137953362s: waiting for machine to come up
	I0819 19:12:55.373941  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:55.374412  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:55.374440  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:55.374368  439925 retry.go:31] will retry after 1.437610965s: waiting for machine to come up
	I0819 19:12:52.696277  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:54.704463  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:57.195001  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:55.412162  438295 pod_ready.go:93] pod "kube-scheduler-embed-certs-024748" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:55.412198  438295 pod_ready.go:82] duration metric: took 3.507380249s for pod "kube-scheduler-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:55.412214  438295 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:57.419600  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:56.399941  438716 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.96472478s)
	I0819 19:12:56.399971  438716 crio.go:469] duration metric: took 2.964877539s to extract the tarball
	I0819 19:12:56.399986  438716 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 19:12:56.447075  438716 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:12:56.491773  438716 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 19:12:56.491800  438716 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 19:12:56.491876  438716 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:12:56.491876  438716 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:12:56.491956  438716 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:12:56.491961  438716 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:12:56.492041  438716 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:12:56.492059  438716 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0819 19:12:56.492280  438716 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0819 19:12:56.492494  438716 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0819 19:12:56.493750  438716 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:12:56.493762  438716 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0819 19:12:56.493756  438716 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:12:56.493762  438716 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:12:56.493765  438716 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:12:56.493831  438716 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:12:56.493806  438716 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0819 19:12:56.494099  438716 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0819 19:12:56.694872  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0819 19:12:56.711504  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0819 19:12:56.754045  438716 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0819 19:12:56.754096  438716 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0819 19:12:56.754136  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:56.770451  438716 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0819 19:12:56.770510  438716 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0819 19:12:56.770574  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:56.770573  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 19:12:56.804839  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 19:12:56.804872  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 19:12:56.825837  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:12:56.832063  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0819 19:12:56.834072  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:12:56.837029  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:12:56.837697  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:12:56.902843  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 19:12:56.902930  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 19:12:57.020902  438716 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0819 19:12:57.020962  438716 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:12:57.020988  438716 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0819 19:12:57.021017  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:57.021025  438716 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0819 19:12:57.021098  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:57.023363  438716 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0819 19:12:57.023411  438716 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:12:57.023457  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:57.023541  438716 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0819 19:12:57.023569  438716 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:12:57.023605  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:57.034648  438716 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0819 19:12:57.034698  438716 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:12:57.034719  438716 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0819 19:12:57.034748  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:57.039577  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 19:12:57.039648  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 19:12:57.039715  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:12:57.041644  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:12:57.041983  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:12:57.045383  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:12:57.149677  438716 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0819 19:12:57.164701  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:12:57.164821  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 19:12:57.202353  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:12:57.202434  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:12:57.202465  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:12:57.258824  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 19:12:57.258858  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:12:57.285756  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:12:57.326148  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:12:57.326237  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:12:57.378322  438716 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0819 19:12:57.378369  438716 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0819 19:12:57.390369  438716 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0819 19:12:57.419554  438716 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0819 19:12:57.419627  438716 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0819 19:12:57.438485  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:12:57.583634  438716 cache_images.go:92] duration metric: took 1.091812972s to LoadCachedImages
	W0819 19:12:57.583757  438716 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
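	The warning above fires because the expected cached image tarballs (for example .minikube/cache/images/amd64/registry.k8s.io/pause_3.2) are missing on the host, so the images have to be pulled instead of being loaded from cache. A minimal Go sketch of that kind of cache lookup, assuming the path layout shown in the log (illustrative only):

	package main

	import (
	    "fmt"
	    "os"
	    "path/filepath"
	    "strings"
	)

	// cachedImagePath maps an image reference to the cache file name used in the log,
	// e.g. registry.k8s.io/pause:3.2 -> registry.k8s.io/pause_3.2.
	func cachedImagePath(cacheDir, image string) string {
	    return filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
	}

	func main() {
	    cacheDir := os.ExpandEnv("$HOME/.minikube/cache/images/amd64")
	    for _, img := range []string{"registry.k8s.io/pause:3.2", "registry.k8s.io/etcd:3.4.13-0"} {
	        p := cachedImagePath(cacheDir, img)
	        if _, err := os.Stat(p); err != nil {
	            fmt.Printf("%s: not cached (%v), would pull instead\n", img, err)
	            continue
	        }
	        fmt.Printf("%s: cached at %s, would load into the runtime\n", img, p)
	    }
	}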
	I0819 19:12:57.583777  438716 kubeadm.go:934] updating node { 192.168.50.32 8443 v1.20.0 crio true true} ...
	I0819 19:12:57.583915  438716 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-104669 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-104669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 19:12:57.584007  438716 ssh_runner.go:195] Run: crio config
	I0819 19:12:57.636714  438716 cni.go:84] Creating CNI manager for ""
	I0819 19:12:57.636738  438716 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:12:57.636752  438716 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 19:12:57.636776  438716 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.32 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-104669 NodeName:old-k8s-version-104669 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0819 19:12:57.636951  438716 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.32
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-104669"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
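Note: the four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) are what gets written out as /var/tmp/minikube/kubeadm.yaml.new a few lines below. As an aside (a sketch, not something the test itself runs), a config like this can be exercised without touching cluster state by running kubeadm in dry-run mode on the node:

	sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run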
	
	I0819 19:12:57.637028  438716 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0819 19:12:57.648002  438716 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 19:12:57.648093  438716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 19:12:57.658889  438716 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0819 19:12:57.677316  438716 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 19:12:57.695825  438716 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0819 19:12:57.715396  438716 ssh_runner.go:195] Run: grep 192.168.50.32	control-plane.minikube.internal$ /etc/hosts
	I0819 19:12:57.719886  438716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
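The one-liner above rewrites /etc/hosts in a single pass: $'\t...' is bash ANSI-C quoting for a grep pattern containing a literal tab, so any stale control-plane.minikube.internal line is filtered out, the current mapping is appended, and the temp file then replaces /etc/hosts with one cp. Written out step by step (a hypothetical expansion with the same effect, using /tmp/hosts.new in place of /tmp/h.$$):

	grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/hosts.new    # drop any stale entry
	printf '192.168.50.32\tcontrol-plane.minikube.internal\n' >> /tmp/hosts.new  # append the current IP
	sudo cp /tmp/hosts.new /etc/hosts                                            # swap the file in one step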
	I0819 19:12:57.733179  438716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:12:57.854139  438716 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:12:57.871590  438716 certs.go:68] Setting up /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669 for IP: 192.168.50.32
	I0819 19:12:57.871619  438716 certs.go:194] generating shared ca certs ...
	I0819 19:12:57.871642  438716 certs.go:226] acquiring lock for ca certs: {Name:mk639e03f593e0bccac045f6e9f5ba3b96cc81e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:57.871850  438716 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key
	I0819 19:12:57.871916  438716 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key
	I0819 19:12:57.871930  438716 certs.go:256] generating profile certs ...
	I0819 19:12:57.872060  438716 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/client.key
	I0819 19:12:57.872131  438716 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/apiserver.key.7101f8a0
	I0819 19:12:57.872197  438716 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/proxy-client.key
	I0819 19:12:57.872336  438716 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem (1338 bytes)
	W0819 19:12:57.872365  438716 certs.go:480] ignoring /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009_empty.pem, impossibly tiny 0 bytes
	I0819 19:12:57.872371  438716 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 19:12:57.872390  438716 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem (1082 bytes)
	I0819 19:12:57.872419  438716 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem (1123 bytes)
	I0819 19:12:57.872441  438716 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem (1675 bytes)
	I0819 19:12:57.872488  438716 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:12:57.873259  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 19:12:57.907576  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 19:12:57.943535  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 19:12:57.977770  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 19:12:58.021213  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0819 19:12:58.051043  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 19:12:58.080442  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 19:12:58.110888  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 19:12:58.158635  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 19:12:58.184168  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem --> /usr/share/ca-certificates/380009.pem (1338 bytes)
	I0819 19:12:58.210064  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /usr/share/ca-certificates/3800092.pem (1708 bytes)
	I0819 19:12:58.235366  438716 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 19:12:58.254667  438716 ssh_runner.go:195] Run: openssl version
	I0819 19:12:58.260977  438716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3800092.pem && ln -fs /usr/share/ca-certificates/3800092.pem /etc/ssl/certs/3800092.pem"
	I0819 19:12:58.272995  438716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3800092.pem
	I0819 19:12:58.278056  438716 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:56 /usr/share/ca-certificates/3800092.pem
	I0819 19:12:58.278154  438716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3800092.pem
	I0819 19:12:58.284420  438716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3800092.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 19:12:58.296945  438716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 19:12:58.309288  438716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:58.314695  438716 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:58.314774  438716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:58.321016  438716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 19:12:58.332728  438716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/380009.pem && ln -fs /usr/share/ca-certificates/380009.pem /etc/ssl/certs/380009.pem"
	I0819 19:12:58.344766  438716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/380009.pem
	I0819 19:12:58.349610  438716 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:56 /usr/share/ca-certificates/380009.pem
	I0819 19:12:58.349681  438716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/380009.pem
	I0819 19:12:58.355942  438716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/380009.pem /etc/ssl/certs/51391683.0"
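The test -L / ln -fs pairs above implement OpenSSL's hash-named lookup scheme: each CA certificate in /etc/ssl/certs needs a symlink named <subject-hash>.0 (here 3ec20f2e.0, b5213941.0 and 51391683.0), and the hash is exactly what the preceding openssl x509 -hash call prints. Doing the same by hand for one of the certs would look like:

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/380009.pem)
	sudo ln -fs /etc/ssl/certs/380009.pem "/etc/ssl/certs/${HASH}.0"    # e.g. 51391683.0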
	I0819 19:12:58.368869  438716 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 19:12:58.373681  438716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 19:12:58.380415  438716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 19:12:58.386741  438716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 19:12:58.393362  438716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 19:12:58.399665  438716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 19:12:58.406108  438716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
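The -checkend 86400 probes are expiry checks: openssl x509 -checkend N exits 0 if the certificate is still valid N seconds from now and non-zero otherwise, which is how minikube decides whether a control-plane certificate can be reused or needs regeneration. For example:

	if openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400; then
		echo "still valid for at least 24h"
	else
		echo "expires within 24h, would need regeneration"
	fi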
	I0819 19:12:58.412486  438716 kubeadm.go:392] StartCluster: {Name:old-k8s-version-104669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-104669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:12:58.412606  438716 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 19:12:58.412655  438716 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:12:58.462379  438716 cri.go:89] found id: ""
	I0819 19:12:58.462463  438716 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 19:12:58.474029  438716 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 19:12:58.474054  438716 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 19:12:58.474112  438716 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 19:12:58.485755  438716 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:12:58.486762  438716 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-104669" does not appear in /home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 19:12:58.487464  438716 kubeconfig.go:62] /home/jenkins/minikube-integration/19468-372744/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-104669" cluster setting kubeconfig missing "old-k8s-version-104669" context setting]
	I0819 19:12:58.489361  438716 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/kubeconfig: {Name:mk8e7b4e1bb7da665111d2acd83eb48882c66853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:58.508865  438716 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 19:12:58.520577  438716 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.32
	I0819 19:12:58.520622  438716 kubeadm.go:1160] stopping kube-system containers ...
	I0819 19:12:58.520637  438716 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 19:12:58.520728  438716 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:12:58.561900  438716 cri.go:89] found id: ""
	I0819 19:12:58.561984  438716 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 19:12:58.580483  438716 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:12:58.591734  438716 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:12:58.591754  438716 kubeadm.go:157] found existing configuration files:
	
	I0819 19:12:58.591804  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 19:12:58.601694  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:12:58.601771  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:12:58.612132  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 19:12:58.621911  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:12:58.621984  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:12:58.631525  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 19:12:58.640802  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:12:58.640872  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:12:58.650216  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 19:12:58.660647  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:12:58.660720  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 19:12:58.669992  438716 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 19:12:58.679709  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:58.809302  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:59.757994  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:00.006386  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:00.136752  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:00.222424  438716 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:13:00.222542  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:56.813279  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:56.813777  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:56.813807  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:56.813725  439925 retry.go:31] will retry after 1.504132921s: waiting for machine to come up
	I0819 19:12:58.319408  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:58.319880  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:58.319910  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:58.319832  439925 retry.go:31] will retry after 1.921699926s: waiting for machine to come up
	I0819 19:13:00.243504  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:00.243995  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:13:00.244021  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:13:00.243952  439925 retry.go:31] will retry after 2.040704792s: waiting for machine to come up
	I0819 19:12:59.195084  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:01.693648  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:59.419644  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:01.918769  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:00.723213  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:01.222908  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:01.723081  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:02.223465  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:02.722589  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:03.222706  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:03.722930  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:04.222826  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:04.722638  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:05.222666  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:02.287044  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:02.287490  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:13:02.287526  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:13:02.287416  439925 retry.go:31] will retry after 2.562055052s: waiting for machine to come up
	I0819 19:13:04.852682  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:04.853097  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:13:04.853125  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:13:04.853062  439925 retry.go:31] will retry after 3.627213972s: waiting for machine to come up
	I0819 19:13:04.194149  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:06.194831  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:04.418550  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:06.919083  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:05.723627  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:06.222663  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:06.723230  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:07.222666  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:07.722653  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:08.222861  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:08.723248  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:09.222831  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:09.722738  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:10.223069  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
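The repeated pgrep calls are the wait-for-apiserver loop: -f matches against the full command line, -x requires the pattern (which carries its own .* wildcards) to match that command line exactly, and -n keeps only the newest match. Judging by the timestamps it polls roughly every 500ms. A shell sketch of the equivalent loop (minikube's actual implementation is Go; this is only illustrative):

	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' > /dev/null; do
		sleep 0.5
	done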
	I0819 19:13:08.484125  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.484586  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has current primary IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.484612  438001 main.go:141] libmachine: (no-preload-278232) Found IP for machine: 192.168.39.106
	I0819 19:13:08.484642  438001 main.go:141] libmachine: (no-preload-278232) Reserving static IP address...
	I0819 19:13:08.485049  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "no-preload-278232", mac: "52:54:00:14:f3:b1", ip: "192.168.39.106"} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:08.485091  438001 main.go:141] libmachine: (no-preload-278232) Reserved static IP address: 192.168.39.106
	I0819 19:13:08.485112  438001 main.go:141] libmachine: (no-preload-278232) DBG | skip adding static IP to network mk-no-preload-278232 - found existing host DHCP lease matching {name: "no-preload-278232", mac: "52:54:00:14:f3:b1", ip: "192.168.39.106"}
	I0819 19:13:08.485129  438001 main.go:141] libmachine: (no-preload-278232) DBG | Getting to WaitForSSH function...
	I0819 19:13:08.485145  438001 main.go:141] libmachine: (no-preload-278232) Waiting for SSH to be available...
	I0819 19:13:08.486998  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.487266  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:08.487290  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.487402  438001 main.go:141] libmachine: (no-preload-278232) DBG | Using SSH client type: external
	I0819 19:13:08.487429  438001 main.go:141] libmachine: (no-preload-278232) DBG | Using SSH private key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa (-rw-------)
	I0819 19:13:08.487463  438001 main.go:141] libmachine: (no-preload-278232) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.106 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 19:13:08.487476  438001 main.go:141] libmachine: (no-preload-278232) DBG | About to run SSH command:
	I0819 19:13:08.487487  438001 main.go:141] libmachine: (no-preload-278232) DBG | exit 0
	I0819 19:13:08.611459  438001 main.go:141] libmachine: (no-preload-278232) DBG | SSH cmd err, output: <nil>: 
	I0819 19:13:08.611934  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetConfigRaw
	I0819 19:13:08.612610  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetIP
	I0819 19:13:08.615212  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.615564  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:08.615594  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.615919  438001 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232/config.json ...
	I0819 19:13:08.616140  438001 machine.go:93] provisionDockerMachine start ...
	I0819 19:13:08.616162  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:08.616387  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:08.618650  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.618956  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:08.618988  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.619098  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:08.619291  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:08.619433  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:08.619569  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:08.619727  438001 main.go:141] libmachine: Using SSH client type: native
	I0819 19:13:08.619893  438001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0819 19:13:08.619903  438001 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 19:13:08.724912  438001 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 19:13:08.724955  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetMachineName
	I0819 19:13:08.725264  438001 buildroot.go:166] provisioning hostname "no-preload-278232"
	I0819 19:13:08.725291  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetMachineName
	I0819 19:13:08.725486  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:08.728810  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.729237  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:08.729274  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.729434  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:08.729667  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:08.729887  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:08.730067  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:08.730244  438001 main.go:141] libmachine: Using SSH client type: native
	I0819 19:13:08.730490  438001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0819 19:13:08.730511  438001 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-278232 && echo "no-preload-278232" | sudo tee /etc/hostname
	I0819 19:13:08.854474  438001 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-278232
	
	I0819 19:13:08.854499  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:08.857179  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.857511  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:08.857540  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.857713  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:08.857912  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:08.858075  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:08.858189  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:08.858356  438001 main.go:141] libmachine: Using SSH client type: native
	I0819 19:13:08.858556  438001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0819 19:13:08.858579  438001 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-278232' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-278232/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-278232' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 19:13:08.973053  438001 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:13:08.973090  438001 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19468-372744/.minikube CaCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19468-372744/.minikube}
	I0819 19:13:08.973115  438001 buildroot.go:174] setting up certificates
	I0819 19:13:08.973125  438001 provision.go:84] configureAuth start
	I0819 19:13:08.973135  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetMachineName
	I0819 19:13:08.973417  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetIP
	I0819 19:13:08.976100  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.976459  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:08.976487  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.976690  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:08.978902  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.979342  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:08.979370  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.979530  438001 provision.go:143] copyHostCerts
	I0819 19:13:08.979605  438001 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem, removing ...
	I0819 19:13:08.979628  438001 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 19:13:08.979717  438001 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem (1082 bytes)
	I0819 19:13:08.979830  438001 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem, removing ...
	I0819 19:13:08.979842  438001 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 19:13:08.979874  438001 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem (1123 bytes)
	I0819 19:13:08.979963  438001 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem, removing ...
	I0819 19:13:08.979974  438001 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 19:13:08.980002  438001 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem (1675 bytes)
	I0819 19:13:08.980075  438001 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem org=jenkins.no-preload-278232 san=[127.0.0.1 192.168.39.106 localhost minikube no-preload-278232]
	I0819 19:13:09.092643  438001 provision.go:177] copyRemoteCerts
	I0819 19:13:09.092707  438001 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 19:13:09.092739  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:09.095542  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.095929  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:09.095960  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.096099  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:09.096318  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:09.096481  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:09.096635  438001 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa Username:docker}
	I0819 19:13:09.179713  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 19:13:09.206363  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0819 19:13:09.231180  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 19:13:09.256764  438001 provision.go:87] duration metric: took 283.626537ms to configureAuth
	I0819 19:13:09.256810  438001 buildroot.go:189] setting minikube options for container-runtime
	I0819 19:13:09.256993  438001 config.go:182] Loaded profile config "no-preload-278232": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:13:09.257079  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:09.259661  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.260061  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:09.260094  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.260253  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:09.260461  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:09.260640  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:09.260796  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:09.260973  438001 main.go:141] libmachine: Using SSH client type: native
	I0819 19:13:09.261150  438001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0819 19:13:09.261166  438001 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 19:13:09.534325  438001 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 19:13:09.534357  438001 machine.go:96] duration metric: took 918.201944ms to provisionDockerMachine
	I0819 19:13:09.534371  438001 start.go:293] postStartSetup for "no-preload-278232" (driver="kvm2")
	I0819 19:13:09.534387  438001 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 19:13:09.534412  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:09.534794  438001 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 19:13:09.534826  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:09.537623  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.537974  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:09.538002  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.538138  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:09.538349  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:09.538534  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:09.538669  438001 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa Username:docker}
	I0819 19:13:09.627085  438001 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 19:13:09.631714  438001 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 19:13:09.631740  438001 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/addons for local assets ...
	I0819 19:13:09.631817  438001 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/files for local assets ...
	I0819 19:13:09.631911  438001 filesync.go:149] local asset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> 3800092.pem in /etc/ssl/certs
	I0819 19:13:09.632035  438001 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 19:13:09.642942  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:13:09.669242  438001 start.go:296] duration metric: took 134.853886ms for postStartSetup
	I0819 19:13:09.669294  438001 fix.go:56] duration metric: took 19.584399031s for fixHost
	I0819 19:13:09.669325  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:09.672072  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.672461  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:09.672494  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.672635  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:09.672937  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:09.673116  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:09.673331  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:09.673517  438001 main.go:141] libmachine: Using SSH client type: native
	I0819 19:13:09.673699  438001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0819 19:13:09.673717  438001 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 19:13:09.780601  438001 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724094789.749951838
	
	I0819 19:13:09.780628  438001 fix.go:216] guest clock: 1724094789.749951838
	I0819 19:13:09.780640  438001 fix.go:229] Guest: 2024-08-19 19:13:09.749951838 +0000 UTC Remote: 2024-08-19 19:13:09.669301343 +0000 UTC m=+358.073543000 (delta=80.650495ms)
	I0819 19:13:09.780668  438001 fix.go:200] guest clock delta is within tolerance: 80.650495ms
	I0819 19:13:09.780676  438001 start.go:83] releasing machines lock for "no-preload-278232", held for 19.69582363s
	I0819 19:13:09.780703  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:09.781042  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetIP
	I0819 19:13:09.783578  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.783967  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:09.783996  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.784149  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:09.784649  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:09.784855  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:09.784946  438001 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 19:13:09.785037  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:09.785073  438001 ssh_runner.go:195] Run: cat /version.json
	I0819 19:13:09.785107  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:09.787346  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.787706  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.787763  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:09.787788  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.787977  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:09.788162  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:09.788226  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:09.788251  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.788327  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:09.788447  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:09.788500  438001 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa Username:docker}
	I0819 19:13:09.788622  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:09.788805  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:09.788994  438001 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa Username:docker}
	I0819 19:13:09.864596  438001 ssh_runner.go:195] Run: systemctl --version
	I0819 19:13:09.890038  438001 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 19:13:10.039016  438001 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 19:13:10.045269  438001 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 19:13:10.045352  438001 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 19:13:10.061345  438001 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 19:13:10.061380  438001 start.go:495] detecting cgroup driver to use...
	I0819 19:13:10.061467  438001 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 19:13:10.079229  438001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 19:13:10.094396  438001 docker.go:217] disabling cri-docker service (if available) ...
	I0819 19:13:10.094471  438001 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 19:13:10.109307  438001 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 19:13:10.123389  438001 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 19:13:10.241132  438001 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 19:13:10.395346  438001 docker.go:233] disabling docker service ...
	I0819 19:13:10.395444  438001 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 19:13:10.409604  438001 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 19:13:10.424149  438001 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 19:13:10.544180  438001 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 19:13:10.671038  438001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 19:13:10.685563  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 19:13:10.704754  438001 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 19:13:10.704819  438001 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:13:10.716002  438001 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 19:13:10.716077  438001 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:13:10.728085  438001 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:13:10.739292  438001 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:13:10.750083  438001 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 19:13:10.760832  438001 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:13:10.771231  438001 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:13:10.788807  438001 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:13:10.799472  438001 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 19:13:10.809354  438001 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 19:13:10.809432  438001 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 19:13:10.824339  438001 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 19:13:10.833761  438001 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:13:10.953587  438001 ssh_runner.go:195] Run: sudo systemctl restart crio
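The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup) before CRI-O is restarted. Purely for illustration, a minimal Go sketch of the same idea, editing the drop-in with string substitution instead of sed over SSH; the file path and values come from the log, everything else (function names, error handling) is an assumption:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// configureCrio rewrites the CRI-O drop-in the way the log above does with sed:
// force the pause image and cgroup manager, and keep conmon in the "pod" cgroup.
// Running this for real would require root and a `systemctl restart crio` afterwards.
func configureCrio(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	conf := string(data)

	// Replace the pause_image and cgroup_manager keys wherever they appear.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))

	// Drop any existing conmon_cgroup line and re-add it right after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")

	return os.WriteFile(path, []byte(conf), 0o644)
}

func main() {
	// Values taken from the log entries above.
	if err := configureCrio("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.10", "cgroupfs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}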
	I0819 19:13:11.091264  438001 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 19:13:11.091336  438001 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 19:13:11.096092  438001 start.go:563] Will wait 60s for crictl version
	I0819 19:13:11.096161  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:13:11.100040  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 19:13:11.142512  438001 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 19:13:11.142612  438001 ssh_runner.go:195] Run: crio --version
	I0819 19:13:11.176967  438001 ssh_runner.go:195] Run: crio --version
	I0819 19:13:11.208687  438001 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 19:13:11.209819  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetIP
	I0819 19:13:11.212533  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:11.212876  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:11.212900  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:11.213098  438001 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 19:13:11.217234  438001 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
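The grep/cp one-liner above is the idempotent pattern for pinning host.minikube.internal: strip any existing entry, append the current one, and copy the result back into place. A rough Go equivalent of that pattern, under the assumption that a plain read-modify-write is acceptable (the real command runs under sudo and writes via a temp file); the helper name is made up:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any line ending in "\t<name>" from hostsPath and
// appends "ip\tname", mirroring the shell pipeline in the log above.
// Writing /etc/hosts requires root.
func ensureHostsEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// The log pins host.minikube.internal to the host-side gateway 192.168.39.1.
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}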
	I0819 19:13:11.229995  438001 kubeadm.go:883] updating cluster {Name:no-preload-278232 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-278232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.106 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 19:13:11.230124  438001 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 19:13:11.230168  438001 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:13:11.265699  438001 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 19:13:11.265730  438001 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 19:13:11.265816  438001 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 19:13:11.265836  438001 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 19:13:11.265843  438001 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0819 19:13:11.265816  438001 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:13:11.265875  438001 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 19:13:11.265941  438001 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0819 19:13:11.265955  438001 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 19:13:11.266027  438001 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 19:13:11.267344  438001 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0819 19:13:11.267364  438001 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 19:13:11.267344  438001 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 19:13:11.267408  438001 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0819 19:13:11.267349  438001 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 19:13:11.267445  438001 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 19:13:11.267408  438001 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 19:13:11.267407  438001 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:13:11.411117  438001 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0819 19:13:11.435022  438001 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0819 19:13:11.437707  438001 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0819 19:13:11.439226  438001 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 19:13:11.446384  438001 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0819 19:13:11.448011  438001 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0819 19:13:11.463921  438001 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0819 19:13:11.476902  438001 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0819 19:13:11.476956  438001 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 19:13:11.477011  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:13:11.561762  438001 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0819 19:13:11.561827  438001 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 19:13:11.561889  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:13:08.694513  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:11.193505  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:09.419409  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:11.919413  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:13.931174  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:10.722882  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:11.223650  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:11.722917  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:12.223146  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:12.723410  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:13.222692  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:13.722636  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:14.223152  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:14.722661  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:15.223297  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:11.657022  438001 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0819 19:13:11.657071  438001 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0819 19:13:11.657092  438001 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0819 19:13:11.657123  438001 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 19:13:11.657127  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:13:11.657164  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:13:11.657176  438001 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0819 19:13:11.657195  438001 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0819 19:13:11.657217  438001 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 19:13:11.657216  438001 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 19:13:11.657254  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:13:11.657260  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:13:11.729671  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0819 19:13:11.729903  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0819 19:13:11.730476  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 19:13:11.730489  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0819 19:13:11.730510  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0819 19:13:11.730544  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0819 19:13:11.853411  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0819 19:13:11.853647  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0819 19:13:11.872296  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0819 19:13:11.872370  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0819 19:13:11.876801  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 19:13:11.877002  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0819 19:13:11.982642  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0819 19:13:12.007940  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0819 19:13:12.031132  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0819 19:13:12.031150  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0819 19:13:12.031163  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 19:13:12.031275  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0819 19:13:12.130991  438001 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0819 19:13:12.131099  438001 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0819 19:13:12.130994  438001 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0819 19:13:12.131231  438001 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0819 19:13:12.162852  438001 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0819 19:13:12.162911  438001 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0819 19:13:12.162916  438001 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0819 19:13:12.162967  438001 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0819 19:13:12.162984  438001 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0819 19:13:12.162984  438001 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0819 19:13:12.163035  438001 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0819 19:13:12.163044  438001 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0819 19:13:12.163053  438001 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0819 19:13:12.163055  438001 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0819 19:13:12.163086  438001 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0819 19:13:12.163095  438001 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0819 19:13:12.177377  438001 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0819 19:13:12.177438  438001 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0819 19:13:12.177438  438001 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0819 19:13:12.229301  438001 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:13:14.745129  438001 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.582015913s)
	I0819 19:13:14.745162  438001 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0819 19:13:14.745196  438001 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0: (2.582131532s)
	I0819 19:13:14.745215  438001 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.515891614s)
	I0819 19:13:14.745232  438001 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0819 19:13:14.745200  438001 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0819 19:13:14.745247  438001 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0819 19:13:14.745285  438001 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:13:14.745298  438001 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0819 19:13:14.745325  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:13:13.693752  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:15.693871  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:16.419552  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:18.920189  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:15.723053  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:16.223486  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:16.722740  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:17.223337  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:17.723160  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:18.222651  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:18.723509  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:19.223686  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:19.723376  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:20.222953  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:16.728557  438001 ssh_runner.go:235] Completed: which crictl: (1.983204878s)
	I0819 19:13:16.728614  438001 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.983294709s)
	I0819 19:13:16.728635  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:13:16.728642  438001 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0819 19:13:16.728673  438001 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0819 19:13:16.728714  438001 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0819 19:13:16.771574  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:13:20.532388  438001 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.760772797s)
	I0819 19:13:20.532421  438001 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.80368813s)
	I0819 19:13:20.532437  438001 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0819 19:13:20.532469  438001 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0819 19:13:20.532480  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:13:20.532500  438001 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0819 19:13:18.193852  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:20.692752  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:21.419154  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:23.419271  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:20.723620  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:21.223286  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:21.723663  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:22.223594  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:22.723415  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:23.223643  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:23.723395  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:24.223476  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:24.723236  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:25.223620  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:22.500967  438001 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.968455152s)
	I0819 19:13:22.501030  438001 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0819 19:13:22.501036  438001 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.968509024s)
	I0819 19:13:22.501068  438001 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0819 19:13:22.501108  438001 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0819 19:13:22.501138  438001 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0819 19:13:22.501175  438001 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0819 19:13:22.506796  438001 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0819 19:13:23.962797  438001 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.461519717s)
	I0819 19:13:23.962838  438001 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0819 19:13:23.962876  438001 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0819 19:13:23.962959  438001 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0819 19:13:25.927805  438001 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.964816993s)
	I0819 19:13:25.927836  438001 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0819 19:13:25.927868  438001 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0819 19:13:25.927922  438001 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0819 19:13:26.572310  438001 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0819 19:13:26.572368  438001 cache_images.go:123] Successfully loaded all cached images
	I0819 19:13:26.572376  438001 cache_images.go:92] duration metric: took 15.306632126s to LoadCachedImages
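The cache-load sequence above is the --no-preload path: images missing from the runtime are cleared with `crictl rmi`, their tarballs are staged under /var/lib/minikube/images (skipped when already present), and `podman load -i` imports them. A compact, illustrative Go sketch of that loop using os/exec; the image list and tarball names below are assumptions picked from the log, not an exhaustive set:

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
)

// loadCachedImage imports one image tarball into CRI-O's storage the way the
// log above does: remove any stale copy via crictl, then `podman load` the tar.
func loadCachedImage(image, tarball string) error {
	// Ignore the error here: the image usually is not present yet.
	exec.Command("sudo", "/usr/bin/crictl", "rmi", image).Run()

	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v\n%s", tarball, err, out)
	}
	return nil
}

func main() {
	// Staging directory taken from the log; the host-side cache lives under ~/.minikube/cache/images.
	stageDir := "/var/lib/minikube/images"
	images := map[string]string{
		"registry.k8s.io/kube-apiserver:v1.31.0": "kube-apiserver_v1.31.0",
		"registry.k8s.io/etcd:3.5.15-0":          "etcd_3.5.15-0",
	}
	for image, file := range images {
		if err := loadCachedImage(image, filepath.Join(stageDir, file)); err != nil {
			fmt.Println(err)
		}
	}
}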
	I0819 19:13:26.572397  438001 kubeadm.go:934] updating node { 192.168.39.106 8443 v1.31.0 crio true true} ...
	I0819 19:13:26.572549  438001 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-278232 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.106
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-278232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 19:13:26.572635  438001 ssh_runner.go:195] Run: crio config
	I0819 19:13:26.623839  438001 cni.go:84] Creating CNI manager for ""
	I0819 19:13:26.623862  438001 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:13:26.623872  438001 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 19:13:26.623896  438001 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.106 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-278232 NodeName:no-preload-278232 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.106"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.106 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 19:13:26.624138  438001 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.106
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-278232"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.106
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.106"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 19:13:26.624226  438001 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
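The kubeadm.yaml shown above is rendered from the cluster config and written to /var/tmp/minikube/kubeadm.yaml.new. As a rough sketch only (this is not minikube's actual template; the struct fields and template fragment are invented for illustration), such a file can be produced with text/template:

package main

import (
	"os"
	"text/template"
)

// Illustrative subset of the values that appear in the generated kubeadm.yaml above.
type kubeadmParams struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	PodSubnet        string
	K8sVersion       string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	p := kubeadmParams{
		AdvertiseAddress: "192.168.39.106",
		BindPort:         8443,
		NodeName:         "no-preload-278232",
		PodSubnet:        "10.244.0.0/16",
		K8sVersion:       "v1.31.0",
	}
	// Render to stdout; a real bootstrapper would write the result to the kubeadm.yaml.new path.
	template.Must(template.New("kubeadm").Parse(kubeadmTmpl)).Execute(os.Stdout, p)
}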
	I0819 19:13:22.693093  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:24.694313  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:26.695312  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:25.918793  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:27.919721  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:25.722593  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:26.223582  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:26.722927  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:27.223364  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:27.723223  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:28.223458  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:28.723262  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:29.222823  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:29.722837  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:30.223196  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:26.634770  438001 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 19:13:26.634844  438001 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 19:13:26.644193  438001 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0819 19:13:26.661226  438001 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 19:13:26.677413  438001 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0819 19:13:26.696260  438001 ssh_runner.go:195] Run: grep 192.168.39.106	control-plane.minikube.internal$ /etc/hosts
	I0819 19:13:26.700029  438001 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.106	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:13:26.711667  438001 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:13:26.849658  438001 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:13:26.867185  438001 certs.go:68] Setting up /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232 for IP: 192.168.39.106
	I0819 19:13:26.867216  438001 certs.go:194] generating shared ca certs ...
	I0819 19:13:26.867240  438001 certs.go:226] acquiring lock for ca certs: {Name:mk639e03f593e0bccac045f6e9f5ba3b96cc81e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:13:26.867431  438001 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key
	I0819 19:13:26.867489  438001 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key
	I0819 19:13:26.867502  438001 certs.go:256] generating profile certs ...
	I0819 19:13:26.867600  438001 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232/client.key
	I0819 19:13:26.867705  438001 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232/apiserver.key.4086521c
	I0819 19:13:26.867759  438001 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232/proxy-client.key
	I0819 19:13:26.867936  438001 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem (1338 bytes)
	W0819 19:13:26.867980  438001 certs.go:480] ignoring /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009_empty.pem, impossibly tiny 0 bytes
	I0819 19:13:26.867995  438001 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 19:13:26.868037  438001 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem (1082 bytes)
	I0819 19:13:26.868075  438001 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem (1123 bytes)
	I0819 19:13:26.868107  438001 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem (1675 bytes)
	I0819 19:13:26.868171  438001 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:13:26.869217  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 19:13:26.903250  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 19:13:26.928593  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 19:13:26.957098  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 19:13:26.982422  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 19:13:27.009252  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 19:13:27.038043  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 19:13:27.075400  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 19:13:27.101568  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /usr/share/ca-certificates/3800092.pem (1708 bytes)
	I0819 19:13:27.127162  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 19:13:27.152327  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem --> /usr/share/ca-certificates/380009.pem (1338 bytes)
	I0819 19:13:27.176207  438001 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 19:13:27.194919  438001 ssh_runner.go:195] Run: openssl version
	I0819 19:13:27.201002  438001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3800092.pem && ln -fs /usr/share/ca-certificates/3800092.pem /etc/ssl/certs/3800092.pem"
	I0819 19:13:27.212050  438001 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3800092.pem
	I0819 19:13:27.216607  438001 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:56 /usr/share/ca-certificates/3800092.pem
	I0819 19:13:27.216663  438001 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3800092.pem
	I0819 19:13:27.222437  438001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3800092.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 19:13:27.234112  438001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 19:13:27.245472  438001 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:13:27.250203  438001 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:13:27.250257  438001 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:13:27.256045  438001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 19:13:27.266746  438001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/380009.pem && ln -fs /usr/share/ca-certificates/380009.pem /etc/ssl/certs/380009.pem"
	I0819 19:13:27.277316  438001 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/380009.pem
	I0819 19:13:27.281660  438001 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:56 /usr/share/ca-certificates/380009.pem
	I0819 19:13:27.281721  438001 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/380009.pem
	I0819 19:13:27.287223  438001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/380009.pem /etc/ssl/certs/51391683.0"
	I0819 19:13:27.299791  438001 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 19:13:27.304470  438001 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 19:13:27.310642  438001 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 19:13:27.316259  438001 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 19:13:27.322248  438001 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 19:13:27.327902  438001 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 19:13:27.333447  438001 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
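Each `openssl x509 -noout -checkend 86400` call above asks whether the certificate expires within the next 24 hours, which decides whether the existing control-plane certs can be reused. The same check expressed in Go with crypto/x509; the path below is one of the files from the log, the helper name is an assumption:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within d, i.e. the condition that `openssl x509 -checkend <seconds>` tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}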
	I0819 19:13:27.339044  438001 kubeadm.go:392] StartCluster: {Name:no-preload-278232 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-278232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.106 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:13:27.339165  438001 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 19:13:27.339241  438001 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:13:27.378362  438001 cri.go:89] found id: ""
	I0819 19:13:27.378436  438001 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 19:13:27.388560  438001 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 19:13:27.388580  438001 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 19:13:27.388623  438001 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 19:13:27.397834  438001 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:13:27.399336  438001 kubeconfig.go:125] found "no-preload-278232" server: "https://192.168.39.106:8443"
	I0819 19:13:27.402651  438001 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 19:13:27.412108  438001 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.106
	I0819 19:13:27.412155  438001 kubeadm.go:1160] stopping kube-system containers ...
	I0819 19:13:27.412170  438001 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 19:13:27.412230  438001 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:13:27.450332  438001 cri.go:89] found id: ""
	I0819 19:13:27.450431  438001 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 19:13:27.466943  438001 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:13:27.476741  438001 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:13:27.476765  438001 kubeadm.go:157] found existing configuration files:
	
	I0819 19:13:27.476810  438001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 19:13:27.485630  438001 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:13:27.485695  438001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:13:27.495232  438001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 19:13:27.504379  438001 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:13:27.504449  438001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:13:27.513723  438001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 19:13:27.522864  438001 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:13:27.522946  438001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:13:27.532402  438001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 19:13:27.541502  438001 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:13:27.541592  438001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
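The four grep/rm pairs above apply a simple rule during restart: if a kubeconfig under /etc/kubernetes does not reference https://control-plane.minikube.internal:8443, it is treated as stale and removed so kubeadm can regenerate it. A hedged Go sketch of that rule (the file list and endpoint come from the log; the function is invented for illustration):

package main

import (
	"fmt"
	"os"
	"strings"
)

// removeStaleKubeconfigs deletes any of the given kubeconfigs that do not
// reference endpoint, mirroring the grep/rm sequence in the log above.
// Files that do not exist yet are simply skipped.
func removeStaleKubeconfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil {
			continue // e.g. the file has not been generated yet
		}
		if !strings.Contains(string(data), endpoint) {
			fmt.Printf("removing stale %s\n", p)
			os.Remove(p)
		}
	}
}

func main() {
	removeStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}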
	I0819 19:13:27.550934  438001 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 19:13:27.560650  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:27.684890  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:28.534223  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:28.757538  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:28.831313  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:28.897644  438001 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:13:28.897735  438001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:29.398486  438001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:29.898494  438001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:29.924881  438001 api_server.go:72] duration metric: took 1.027247684s to wait for apiserver process to appear ...
	I0819 19:13:29.924918  438001 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:13:29.924944  438001 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I0819 19:13:29.925535  438001 api_server.go:269] stopped: https://192.168.39.106:8443/healthz: Get "https://192.168.39.106:8443/healthz": dial tcp 192.168.39.106:8443: connect: connection refused
	I0819 19:13:30.425624  438001 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I0819 19:13:29.193722  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:31.194540  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:32.406445  438001 api_server.go:279] https://192.168.39.106:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 19:13:32.406476  438001 api_server.go:103] status: https://192.168.39.106:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 19:13:32.406491  438001 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I0819 19:13:32.470160  438001 api_server.go:279] https://192.168.39.106:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 19:13:32.470195  438001 api_server.go:103] status: https://192.168.39.106:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 19:13:32.470211  438001 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I0819 19:13:32.486292  438001 api_server.go:279] https://192.168.39.106:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 19:13:32.486322  438001 api_server.go:103] status: https://192.168.39.106:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 19:13:32.925943  438001 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I0819 19:13:32.933024  438001 api_server.go:279] https://192.168.39.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:13:32.933068  438001 api_server.go:103] status: https://192.168.39.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:13:33.425638  438001 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I0819 19:13:33.431919  438001 api_server.go:279] https://192.168.39.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:13:33.432051  438001 api_server.go:103] status: https://192.168.39.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:13:33.925369  438001 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I0819 19:13:33.930489  438001 api_server.go:279] https://192.168.39.106:8443/healthz returned 200:
	ok
	I0819 19:13:33.937758  438001 api_server.go:141] control plane version: v1.31.0
	I0819 19:13:33.937789  438001 api_server.go:131] duration metric: took 4.012862801s to wait for apiserver health ...
	I0819 19:13:33.937800  438001 cni.go:84] Creating CNI manager for ""
	I0819 19:13:33.937807  438001 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:13:33.939711  438001 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 19:13:30.419241  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:32.419437  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:30.723537  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:31.223437  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:31.723289  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:32.222714  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:32.723037  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:33.223138  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:33.723303  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:34.223334  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:34.722692  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:35.223021  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:33.941055  438001 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 19:13:33.953427  438001 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 19:13:33.982889  438001 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:13:33.998701  438001 system_pods.go:59] 8 kube-system pods found
	I0819 19:13:33.998750  438001 system_pods.go:61] "coredns-6f6b679f8f-22lbt" [c8a5cabd-41d4-41cb-91c1-2db1f3471db3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 19:13:33.998762  438001 system_pods.go:61] "etcd-no-preload-278232" [36d555a1-33e4-4c6c-b24e-2fee4fd84f2b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 19:13:33.998775  438001 system_pods.go:61] "kube-apiserver-no-preload-278232" [af7173e5-c4ac-4ece-b8b9-bb81cb6b9bfd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 19:13:33.998784  438001 system_pods.go:61] "kube-controller-manager-no-preload-278232" [2463d97a-5221-40ce-8fd7-08151165d6f7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 19:13:33.998794  438001 system_pods.go:61] "kube-proxy-rcf49" [85d5814a-1ba9-46be-ab11-17bf40c0f029] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0819 19:13:33.998807  438001 system_pods.go:61] "kube-scheduler-no-preload-278232" [3b327704-f70c-4d6f-a774-15427a305472] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 19:13:33.998819  438001 system_pods.go:61] "metrics-server-6867b74b74-vxwrs" [e8b74128-b393-4f0f-90fe-e05f20d54acd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:13:33.998827  438001 system_pods.go:61] "storage-provisioner" [24766475-1a5b-4f1a-9350-3e891b5272cc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0819 19:13:33.998841  438001 system_pods.go:74] duration metric: took 15.918876ms to wait for pod list to return data ...
	I0819 19:13:33.998853  438001 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:13:34.003102  438001 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:13:34.003131  438001 node_conditions.go:123] node cpu capacity is 2
	I0819 19:13:34.003145  438001 node_conditions.go:105] duration metric: took 4.283682ms to run NodePressure ...
	I0819 19:13:34.003163  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:34.300052  438001 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 19:13:34.304483  438001 kubeadm.go:739] kubelet initialised
	I0819 19:13:34.304505  438001 kubeadm.go:740] duration metric: took 4.421894ms waiting for restarted kubelet to initialise ...
	I0819 19:13:34.304513  438001 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:13:34.310575  438001 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-22lbt" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:34.316040  438001 pod_ready.go:98] node "no-preload-278232" hosting pod "coredns-6f6b679f8f-22lbt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.316068  438001 pod_ready.go:82] duration metric: took 5.462078ms for pod "coredns-6f6b679f8f-22lbt" in "kube-system" namespace to be "Ready" ...
	E0819 19:13:34.316080  438001 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-278232" hosting pod "coredns-6f6b679f8f-22lbt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.316088  438001 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:34.320731  438001 pod_ready.go:98] node "no-preload-278232" hosting pod "etcd-no-preload-278232" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.320751  438001 pod_ready.go:82] duration metric: took 4.649545ms for pod "etcd-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	E0819 19:13:34.320758  438001 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-278232" hosting pod "etcd-no-preload-278232" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.320763  438001 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:34.325499  438001 pod_ready.go:98] node "no-preload-278232" hosting pod "kube-apiserver-no-preload-278232" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.325519  438001 pod_ready.go:82] duration metric: took 4.750861ms for pod "kube-apiserver-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	E0819 19:13:34.325526  438001 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-278232" hosting pod "kube-apiserver-no-preload-278232" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.325531  438001 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:34.388221  438001 pod_ready.go:98] node "no-preload-278232" hosting pod "kube-controller-manager-no-preload-278232" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.388248  438001 pod_ready.go:82] duration metric: took 62.708596ms for pod "kube-controller-manager-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	E0819 19:13:34.388259  438001 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-278232" hosting pod "kube-controller-manager-no-preload-278232" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.388265  438001 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-rcf49" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:34.787164  438001 pod_ready.go:98] node "no-preload-278232" hosting pod "kube-proxy-rcf49" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.787193  438001 pod_ready.go:82] duration metric: took 398.919585ms for pod "kube-proxy-rcf49" in "kube-system" namespace to be "Ready" ...
	E0819 19:13:34.787203  438001 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-278232" hosting pod "kube-proxy-rcf49" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.787210  438001 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:35.186336  438001 pod_ready.go:98] node "no-preload-278232" hosting pod "kube-scheduler-no-preload-278232" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:35.186365  438001 pod_ready.go:82] duration metric: took 399.147858ms for pod "kube-scheduler-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	E0819 19:13:35.186377  438001 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-278232" hosting pod "kube-scheduler-no-preload-278232" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:35.186386  438001 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:35.586266  438001 pod_ready.go:98] node "no-preload-278232" hosting pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:35.586292  438001 pod_ready.go:82] duration metric: took 399.895038ms for pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace to be "Ready" ...
	E0819 19:13:35.586301  438001 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-278232" hosting pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:35.586307  438001 pod_ready.go:39] duration metric: took 1.281785432s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:13:35.586326  438001 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 19:13:35.598523  438001 ops.go:34] apiserver oom_adj: -16
	I0819 19:13:35.598545  438001 kubeadm.go:597] duration metric: took 8.20995933s to restartPrimaryControlPlane
	I0819 19:13:35.598554  438001 kubeadm.go:394] duration metric: took 8.259514907s to StartCluster
	I0819 19:13:35.598576  438001 settings.go:142] acquiring lock: {Name:mk396fcf49a1d0e69583cf37ff3c819e37118163 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:13:35.598662  438001 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 19:13:35.600424  438001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/kubeconfig: {Name:mk8e7b4e1bb7da665111d2acd83eb48882c66853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:13:35.600672  438001 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.106 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 19:13:35.600768  438001 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 19:13:35.600850  438001 addons.go:69] Setting storage-provisioner=true in profile "no-preload-278232"
	I0819 19:13:35.600879  438001 addons.go:69] Setting metrics-server=true in profile "no-preload-278232"
	I0819 19:13:35.600924  438001 addons.go:234] Setting addon metrics-server=true in "no-preload-278232"
	W0819 19:13:35.600938  438001 addons.go:243] addon metrics-server should already be in state true
	I0819 19:13:35.600884  438001 addons.go:234] Setting addon storage-provisioner=true in "no-preload-278232"
	W0819 19:13:35.600969  438001 addons.go:243] addon storage-provisioner should already be in state true
	I0819 19:13:35.600966  438001 config.go:182] Loaded profile config "no-preload-278232": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:13:35.600976  438001 host.go:66] Checking if "no-preload-278232" exists ...
	I0819 19:13:35.600988  438001 host.go:66] Checking if "no-preload-278232" exists ...
	I0819 19:13:35.601395  438001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:35.601428  438001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:35.601436  438001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:35.601453  438001 addons.go:69] Setting default-storageclass=true in profile "no-preload-278232"
	I0819 19:13:35.601501  438001 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-278232"
	I0819 19:13:35.601463  438001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:35.601898  438001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:35.601948  438001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:35.602507  438001 out.go:177] * Verifying Kubernetes components...
	I0819 19:13:35.604092  438001 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:13:35.617515  438001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34839
	I0819 19:13:35.617538  438001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36157
	I0819 19:13:35.617521  438001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35771
	I0819 19:13:35.618045  438001 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:35.618101  438001 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:35.618163  438001 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:35.618570  438001 main.go:141] libmachine: Using API Version  1
	I0819 19:13:35.618598  438001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:35.618712  438001 main.go:141] libmachine: Using API Version  1
	I0819 19:13:35.618734  438001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:35.618715  438001 main.go:141] libmachine: Using API Version  1
	I0819 19:13:35.618754  438001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:35.618989  438001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:35.619109  438001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:35.619111  438001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:35.619177  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetState
	I0819 19:13:35.619649  438001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:35.619693  438001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:35.619695  438001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:35.619768  438001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:35.641244  438001 addons.go:234] Setting addon default-storageclass=true in "no-preload-278232"
	W0819 19:13:35.641268  438001 addons.go:243] addon default-storageclass should already be in state true
	I0819 19:13:35.641298  438001 host.go:66] Checking if "no-preload-278232" exists ...
	I0819 19:13:35.641558  438001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:35.641610  438001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:35.659392  438001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39373
	I0819 19:13:35.659999  438001 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:35.660432  438001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38477
	I0819 19:13:35.660432  438001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35477
	I0819 19:13:35.660604  438001 main.go:141] libmachine: Using API Version  1
	I0819 19:13:35.660631  438001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:35.661089  438001 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:35.661149  438001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:35.661169  438001 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:35.661641  438001 main.go:141] libmachine: Using API Version  1
	I0819 19:13:35.661661  438001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:35.661757  438001 main.go:141] libmachine: Using API Version  1
	I0819 19:13:35.661772  438001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:35.661792  438001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:35.661826  438001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:35.662039  438001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:35.662142  438001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:35.662222  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetState
	I0819 19:13:35.662375  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetState
	I0819 19:13:35.664221  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:35.664397  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:35.666459  438001 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 19:13:35.666471  438001 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:13:35.667849  438001 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 19:13:35.667864  438001 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 19:13:35.667882  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:35.667944  438001 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:13:35.667959  438001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 19:13:35.667977  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:35.673516  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:35.673544  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:35.673520  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:35.673578  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:35.673593  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:35.673602  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:35.673521  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:35.673615  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:35.673793  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:35.673937  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:35.673986  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:35.674150  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:35.674324  438001 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa Username:docker}
	I0819 19:13:35.674350  438001 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa Username:docker}
	I0819 19:13:35.683691  438001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39783
	I0819 19:13:35.684219  438001 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:35.684806  438001 main.go:141] libmachine: Using API Version  1
	I0819 19:13:35.684831  438001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:35.685251  438001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:35.685515  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetState
	I0819 19:13:35.687268  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:35.687485  438001 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 19:13:35.687503  438001 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 19:13:35.687524  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:35.690504  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:35.691297  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:35.691333  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:35.691356  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:35.691477  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:35.691659  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:35.691814  438001 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa Username:docker}
	I0819 19:13:35.833054  438001 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:13:35.855442  438001 node_ready.go:35] waiting up to 6m0s for node "no-preload-278232" to be "Ready" ...
	I0819 19:13:35.923521  438001 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 19:13:35.923551  438001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 19:13:35.940005  438001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 19:13:35.965657  438001 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 19:13:35.965686  438001 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 19:13:36.002636  438001 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 19:13:36.002665  438001 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 19:13:36.024764  438001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:13:36.058824  438001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 19:13:36.420421  438001 main.go:141] libmachine: Making call to close driver server
	I0819 19:13:36.420452  438001 main.go:141] libmachine: (no-preload-278232) Calling .Close
	I0819 19:13:36.420785  438001 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:13:36.420804  438001 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:13:36.420844  438001 main.go:141] libmachine: (no-preload-278232) DBG | Closing plugin on server side
	I0819 19:13:36.420904  438001 main.go:141] libmachine: Making call to close driver server
	I0819 19:13:36.420918  438001 main.go:141] libmachine: (no-preload-278232) Calling .Close
	I0819 19:13:36.421185  438001 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:13:36.421210  438001 main.go:141] libmachine: (no-preload-278232) DBG | Closing plugin on server side
	I0819 19:13:36.421224  438001 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:13:36.429463  438001 main.go:141] libmachine: Making call to close driver server
	I0819 19:13:36.429481  438001 main.go:141] libmachine: (no-preload-278232) Calling .Close
	I0819 19:13:36.429811  438001 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:13:36.429830  438001 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:13:37.141893  438001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.117083882s)
	I0819 19:13:37.141987  438001 main.go:141] libmachine: Making call to close driver server
	I0819 19:13:37.141999  438001 main.go:141] libmachine: (no-preload-278232) Calling .Close
	I0819 19:13:37.142472  438001 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:13:37.142495  438001 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:13:37.142506  438001 main.go:141] libmachine: Making call to close driver server
	I0819 19:13:37.142515  438001 main.go:141] libmachine: (no-preload-278232) Calling .Close
	I0819 19:13:37.142788  438001 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:13:37.142808  438001 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:13:37.142814  438001 main.go:141] libmachine: (no-preload-278232) DBG | Closing plugin on server side
	I0819 19:13:37.161659  438001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.10278963s)
	I0819 19:13:37.161723  438001 main.go:141] libmachine: Making call to close driver server
	I0819 19:13:37.161739  438001 main.go:141] libmachine: (no-preload-278232) Calling .Close
	I0819 19:13:37.162067  438001 main.go:141] libmachine: (no-preload-278232) DBG | Closing plugin on server side
	I0819 19:13:37.162099  438001 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:13:37.162125  438001 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:13:37.162142  438001 main.go:141] libmachine: Making call to close driver server
	I0819 19:13:37.162154  438001 main.go:141] libmachine: (no-preload-278232) Calling .Close
	I0819 19:13:37.162404  438001 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:13:37.162420  438001 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:13:37.162432  438001 addons.go:475] Verifying addon metrics-server=true in "no-preload-278232"
	I0819 19:13:37.164423  438001 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0819 19:13:33.694203  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:35.694403  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:34.918988  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:36.919564  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:35.722784  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:36.223168  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:36.723041  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:37.222801  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:37.722855  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:38.223296  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:38.722936  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:39.223326  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:39.722883  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:40.223284  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:37.165767  438001 addons.go:510] duration metric: took 1.565026237s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0819 19:13:37.859454  438001 node_ready.go:53] node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:39.859662  438001 node_ready.go:53] node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:38.193207  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:40.694127  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:39.418572  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:41.918302  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:43.918558  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:40.722612  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:41.222700  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:41.723144  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:42.223369  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:42.723209  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:43.222849  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:43.723518  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:44.223585  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:44.722772  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:45.223078  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:41.859965  438001 node_ready.go:53] node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:43.359120  438001 node_ready.go:49] node "no-preload-278232" has status "Ready":"True"
	I0819 19:13:43.359151  438001 node_ready.go:38] duration metric: took 7.503671074s for node "no-preload-278232" to be "Ready" ...
	I0819 19:13:43.359169  438001 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:13:43.365307  438001 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-22lbt" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:43.369626  438001 pod_ready.go:93] pod "coredns-6f6b679f8f-22lbt" in "kube-system" namespace has status "Ready":"True"
	I0819 19:13:43.369646  438001 pod_ready.go:82] duration metric: took 4.316734ms for pod "coredns-6f6b679f8f-22lbt" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:43.369654  438001 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:45.377672  438001 pod_ready.go:103] pod "etcd-no-preload-278232" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:43.193775  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:45.693494  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:45.919705  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:48.418981  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:45.723287  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:46.223666  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:46.722754  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:47.223414  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:47.723567  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:48.222938  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:48.723011  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:49.223076  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:49.723443  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:50.223627  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:47.875409  438001 pod_ready.go:103] pod "etcd-no-preload-278232" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:48.377127  438001 pod_ready.go:93] pod "etcd-no-preload-278232" in "kube-system" namespace has status "Ready":"True"
	I0819 19:13:48.377155  438001 pod_ready.go:82] duration metric: took 5.007493319s for pod "etcd-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.377169  438001 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.381841  438001 pod_ready.go:93] pod "kube-apiserver-no-preload-278232" in "kube-system" namespace has status "Ready":"True"
	I0819 19:13:48.381864  438001 pod_ready.go:82] duration metric: took 4.686309ms for pod "kube-apiserver-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.381877  438001 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.386382  438001 pod_ready.go:93] pod "kube-controller-manager-no-preload-278232" in "kube-system" namespace has status "Ready":"True"
	I0819 19:13:48.386397  438001 pod_ready.go:82] duration metric: took 4.514361ms for pod "kube-controller-manager-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.386405  438001 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rcf49" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.390940  438001 pod_ready.go:93] pod "kube-proxy-rcf49" in "kube-system" namespace has status "Ready":"True"
	I0819 19:13:48.390955  438001 pod_ready.go:82] duration metric: took 4.544499ms for pod "kube-proxy-rcf49" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.390963  438001 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.395159  438001 pod_ready.go:93] pod "kube-scheduler-no-preload-278232" in "kube-system" namespace has status "Ready":"True"
	I0819 19:13:48.395180  438001 pod_ready.go:82] duration metric: took 4.211012ms for pod "kube-scheduler-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.395197  438001 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:50.402109  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:47.693601  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:50.193183  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:50.918811  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:52.919981  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:50.723259  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:51.222697  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:51.723284  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:52.222757  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:52.723414  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:53.223202  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:53.722721  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:54.223578  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:54.723400  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:55.222730  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:52.901901  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:54.903583  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:52.693231  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:54.693934  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:56.695700  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:55.418965  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:57.918885  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:55.723644  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:56.223212  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:56.722729  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:57.223226  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:57.723045  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:58.222901  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:58.722710  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:59.223149  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:59.723186  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:00.222763  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:00.222844  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:00.271266  438716 cri.go:89] found id: ""
	I0819 19:14:00.271296  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.271305  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:00.271312  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:00.271373  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:00.311870  438716 cri.go:89] found id: ""
	I0819 19:14:00.311900  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.311936  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:00.311946  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:00.312011  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:00.350476  438716 cri.go:89] found id: ""
	I0819 19:14:00.350505  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.350514  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:00.350520  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:00.350586  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:00.387404  438716 cri.go:89] found id: ""
	I0819 19:14:00.387438  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.387447  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:00.387457  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:00.387516  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:00.423493  438716 cri.go:89] found id: ""
	I0819 19:14:00.423521  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.423529  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:00.423535  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:00.423596  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:00.458593  438716 cri.go:89] found id: ""
	I0819 19:14:00.458630  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.458642  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:00.458651  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:00.458722  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:00.495645  438716 cri.go:89] found id: ""
	I0819 19:14:00.495695  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.495709  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:00.495717  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:00.495782  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:00.531464  438716 cri.go:89] found id: ""
	I0819 19:14:00.531498  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.531508  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:00.531529  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:00.531543  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:13:57.401329  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:59.402701  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:59.192781  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:01.194411  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:00.419287  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:02.918450  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:00.584029  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:00.584078  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:00.597870  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:00.597908  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:00.746061  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:00.746085  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:00.746098  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:00.818001  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:00.818042  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
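The probe loop above repeats the same pass every few seconds: pgrep for a kube-apiserver process, one crictl query per control-plane component, and, when every query comes back empty, a sweep of the kubelet, dmesg, describe-nodes, CRI-O and container-status logs. A minimal sketch of one such pass, reusing only the commands the log itself shows (run on the minikube node; component names and the crictl/journalctl invocations are taken verbatim from the log):

	# look for a running apiserver process, then for control-plane containers
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  sudo crictl ps -a --quiet --name="$name"
	done
	# the gather step that follows when every query returns nothing
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo crictl ps -a || sudo docker ps -a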
	I0819 19:14:03.358509  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:03.371262  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:03.371345  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:03.408201  438716 cri.go:89] found id: ""
	I0819 19:14:03.408231  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.408241  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:03.408248  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:03.408306  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:03.445354  438716 cri.go:89] found id: ""
	I0819 19:14:03.445386  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.445396  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:03.445408  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:03.445470  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:03.481144  438716 cri.go:89] found id: ""
	I0819 19:14:03.481178  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.481188  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:03.481195  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:03.481260  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:03.529069  438716 cri.go:89] found id: ""
	I0819 19:14:03.529109  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.529141  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:03.529148  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:03.529216  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:03.590325  438716 cri.go:89] found id: ""
	I0819 19:14:03.590364  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.590377  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:03.590386  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:03.590456  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:03.634924  438716 cri.go:89] found id: ""
	I0819 19:14:03.634969  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.634981  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:03.634990  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:03.635062  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:03.684133  438716 cri.go:89] found id: ""
	I0819 19:14:03.684164  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.684176  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:03.684184  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:03.684253  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:03.722285  438716 cri.go:89] found id: ""
	I0819 19:14:03.722312  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.722321  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:03.722330  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:03.722372  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:03.735937  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:03.735965  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:03.814906  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:03.814931  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:03.814948  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:03.896323  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:03.896363  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:03.943002  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:03.943037  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:01.901154  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:03.902972  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:05.903388  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:03.694686  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:06.193228  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:04.919332  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:07.419221  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:06.496886  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:06.510719  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:06.510790  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:06.544692  438716 cri.go:89] found id: ""
	I0819 19:14:06.544724  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.544737  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:06.544747  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:06.544818  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:06.578935  438716 cri.go:89] found id: ""
	I0819 19:14:06.578962  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.578971  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:06.578979  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:06.579033  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:06.614488  438716 cri.go:89] found id: ""
	I0819 19:14:06.614516  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.614525  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:06.614532  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:06.614583  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:06.648579  438716 cri.go:89] found id: ""
	I0819 19:14:06.648612  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.648623  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:06.648630  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:06.648685  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:06.685168  438716 cri.go:89] found id: ""
	I0819 19:14:06.685198  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.685208  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:06.685217  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:06.685280  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:06.720391  438716 cri.go:89] found id: ""
	I0819 19:14:06.720424  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.720433  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:06.720440  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:06.720491  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:06.758183  438716 cri.go:89] found id: ""
	I0819 19:14:06.758217  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.758228  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:06.758237  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:06.758307  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:06.800182  438716 cri.go:89] found id: ""
	I0819 19:14:06.800215  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.800224  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:06.800234  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:06.800247  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:06.852735  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:06.852777  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:06.867214  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:06.867249  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:06.938942  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:06.938967  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:06.938980  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:07.023950  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:07.023992  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:09.568889  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:09.588481  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:09.588545  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:09.630790  438716 cri.go:89] found id: ""
	I0819 19:14:09.630825  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.630839  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:09.630848  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:09.630926  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:09.673258  438716 cri.go:89] found id: ""
	I0819 19:14:09.673291  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.673302  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:09.673311  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:09.673374  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:09.709500  438716 cri.go:89] found id: ""
	I0819 19:14:09.709530  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.709541  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:09.709549  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:09.709617  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:09.743110  438716 cri.go:89] found id: ""
	I0819 19:14:09.743139  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.743150  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:09.743164  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:09.743238  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:09.776717  438716 cri.go:89] found id: ""
	I0819 19:14:09.776746  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.776754  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:09.776761  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:09.776820  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:09.811381  438716 cri.go:89] found id: ""
	I0819 19:14:09.811409  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.811417  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:09.811423  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:09.811474  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:09.843699  438716 cri.go:89] found id: ""
	I0819 19:14:09.843730  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.843741  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:09.843750  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:09.843822  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:09.882972  438716 cri.go:89] found id: ""
	I0819 19:14:09.883005  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.883018  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:09.883033  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:09.883050  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:09.973077  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:09.973114  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:10.014505  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:10.014556  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:10.069779  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:10.069819  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:10.084337  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:10.084367  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:10.164870  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
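Every "describe nodes" attempt in this stretch fails identically: the on-node kubectl is pointed at localhost:8443 through /var/lib/minikube/kubeconfig, and the connection is refused because no kube-apiserver container has come up (all crictl queries above return empty). To confirm by hand, the same command the log runs can be repeated on the node (binary and kubeconfig paths are the ones the log reports):

	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig
	# while the apiserver is down this prints:
	# The connection to the server localhost:8443 was refused - did you specify the right host or port?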
	I0819 19:14:08.402464  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:10.900684  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:08.193980  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:10.194818  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:09.918852  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:12.419687  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:12.665929  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:12.679881  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:12.679960  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:12.718305  438716 cri.go:89] found id: ""
	I0819 19:14:12.718332  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.718341  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:12.718348  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:12.718398  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:12.759084  438716 cri.go:89] found id: ""
	I0819 19:14:12.759112  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.759127  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:12.759135  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:12.759205  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:12.793193  438716 cri.go:89] found id: ""
	I0819 19:14:12.793228  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.793238  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:12.793245  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:12.793299  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:12.828283  438716 cri.go:89] found id: ""
	I0819 19:14:12.828310  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.828322  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:12.828329  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:12.828379  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:12.861971  438716 cri.go:89] found id: ""
	I0819 19:14:12.862004  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.862016  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:12.862025  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:12.862092  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:12.898173  438716 cri.go:89] found id: ""
	I0819 19:14:12.898203  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.898214  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:12.898223  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:12.898287  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:12.940203  438716 cri.go:89] found id: ""
	I0819 19:14:12.940234  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.940246  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:12.940254  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:12.940309  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:12.978092  438716 cri.go:89] found id: ""
	I0819 19:14:12.978123  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.978134  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:12.978147  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:12.978172  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:12.992082  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:12.992117  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:13.073609  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:13.073636  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:13.073649  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:13.153060  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:13.153105  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:13.196535  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:13.196581  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:12.903116  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:15.401183  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:12.693872  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:14.694252  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:17.193116  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:14.919563  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:17.418946  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:15.750298  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:15.763913  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:15.763996  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:15.804515  438716 cri.go:89] found id: ""
	I0819 19:14:15.804542  438716 logs.go:276] 0 containers: []
	W0819 19:14:15.804551  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:15.804558  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:15.804624  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:15.847077  438716 cri.go:89] found id: ""
	I0819 19:14:15.847112  438716 logs.go:276] 0 containers: []
	W0819 19:14:15.847125  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:15.847133  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:15.847200  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:15.882316  438716 cri.go:89] found id: ""
	I0819 19:14:15.882348  438716 logs.go:276] 0 containers: []
	W0819 19:14:15.882358  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:15.882365  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:15.882417  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:15.919084  438716 cri.go:89] found id: ""
	I0819 19:14:15.919114  438716 logs.go:276] 0 containers: []
	W0819 19:14:15.919125  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:15.919132  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:15.919202  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:15.953139  438716 cri.go:89] found id: ""
	I0819 19:14:15.953175  438716 logs.go:276] 0 containers: []
	W0819 19:14:15.953188  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:15.953209  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:15.953276  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:15.993231  438716 cri.go:89] found id: ""
	I0819 19:14:15.993259  438716 logs.go:276] 0 containers: []
	W0819 19:14:15.993268  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:15.993286  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:15.993337  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:16.030382  438716 cri.go:89] found id: ""
	I0819 19:14:16.030412  438716 logs.go:276] 0 containers: []
	W0819 19:14:16.030422  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:16.030428  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:16.030482  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:16.065834  438716 cri.go:89] found id: ""
	I0819 19:14:16.065861  438716 logs.go:276] 0 containers: []
	W0819 19:14:16.065872  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:16.065885  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:16.065901  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:16.117943  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:16.117983  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:16.132010  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:16.132041  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:16.202398  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:16.202416  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:16.202429  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:16.286609  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:16.286653  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:18.830502  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:18.844022  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:18.844107  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:18.880539  438716 cri.go:89] found id: ""
	I0819 19:14:18.880576  438716 logs.go:276] 0 containers: []
	W0819 19:14:18.880588  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:18.880595  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:18.880657  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:18.918426  438716 cri.go:89] found id: ""
	I0819 19:14:18.918454  438716 logs.go:276] 0 containers: []
	W0819 19:14:18.918463  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:18.918470  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:18.918531  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:18.954534  438716 cri.go:89] found id: ""
	I0819 19:14:18.954566  438716 logs.go:276] 0 containers: []
	W0819 19:14:18.954578  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:18.954587  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:18.954651  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:18.993820  438716 cri.go:89] found id: ""
	I0819 19:14:18.993852  438716 logs.go:276] 0 containers: []
	W0819 19:14:18.993864  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:18.993885  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:18.993967  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:19.026947  438716 cri.go:89] found id: ""
	I0819 19:14:19.026982  438716 logs.go:276] 0 containers: []
	W0819 19:14:19.026995  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:19.027005  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:19.027072  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:19.062097  438716 cri.go:89] found id: ""
	I0819 19:14:19.062130  438716 logs.go:276] 0 containers: []
	W0819 19:14:19.062142  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:19.062150  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:19.062207  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:19.099522  438716 cri.go:89] found id: ""
	I0819 19:14:19.099549  438716 logs.go:276] 0 containers: []
	W0819 19:14:19.099559  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:19.099567  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:19.099630  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:19.134766  438716 cri.go:89] found id: ""
	I0819 19:14:19.134803  438716 logs.go:276] 0 containers: []
	W0819 19:14:19.134815  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:19.134850  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:19.134867  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:19.176428  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:19.176458  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:19.231448  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:19.231484  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:19.245631  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:19.245687  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:19.318679  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:19.318703  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:19.318717  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:17.401916  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:19.402628  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:19.195224  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:21.693528  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:19.918727  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:21.918863  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:23.919050  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:21.898430  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:21.913840  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:21.913911  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:21.955682  438716 cri.go:89] found id: ""
	I0819 19:14:21.955720  438716 logs.go:276] 0 containers: []
	W0819 19:14:21.955732  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:21.955743  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:21.955820  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:21.994798  438716 cri.go:89] found id: ""
	I0819 19:14:21.994836  438716 logs.go:276] 0 containers: []
	W0819 19:14:21.994845  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:21.994852  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:21.994904  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:22.029155  438716 cri.go:89] found id: ""
	I0819 19:14:22.029191  438716 logs.go:276] 0 containers: []
	W0819 19:14:22.029202  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:22.029210  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:22.029281  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:22.072489  438716 cri.go:89] found id: ""
	I0819 19:14:22.072534  438716 logs.go:276] 0 containers: []
	W0819 19:14:22.072546  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:22.072559  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:22.072621  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:22.109160  438716 cri.go:89] found id: ""
	I0819 19:14:22.109192  438716 logs.go:276] 0 containers: []
	W0819 19:14:22.109203  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:22.109211  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:22.109281  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:22.146161  438716 cri.go:89] found id: ""
	I0819 19:14:22.146194  438716 logs.go:276] 0 containers: []
	W0819 19:14:22.146206  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:22.146215  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:22.146276  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:22.183005  438716 cri.go:89] found id: ""
	I0819 19:14:22.183033  438716 logs.go:276] 0 containers: []
	W0819 19:14:22.183046  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:22.183054  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:22.183108  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:22.220745  438716 cri.go:89] found id: ""
	I0819 19:14:22.220772  438716 logs.go:276] 0 containers: []
	W0819 19:14:22.220784  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:22.220798  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:22.220817  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:22.297377  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:22.297403  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:22.297416  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:22.373503  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:22.373542  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:22.414922  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:22.414956  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:22.477902  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:22.477944  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:24.993405  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:25.007305  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:25.007379  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:25.041157  438716 cri.go:89] found id: ""
	I0819 19:14:25.041191  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.041203  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:25.041211  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:25.041278  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:25.078572  438716 cri.go:89] found id: ""
	I0819 19:14:25.078605  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.078617  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:25.078625  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:25.078695  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:25.114571  438716 cri.go:89] found id: ""
	I0819 19:14:25.114603  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.114615  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:25.114624  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:25.114690  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:25.154341  438716 cri.go:89] found id: ""
	I0819 19:14:25.154366  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.154375  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:25.154381  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:25.154434  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:25.192592  438716 cri.go:89] found id: ""
	I0819 19:14:25.192620  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.192631  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:25.192640  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:25.192705  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:25.227813  438716 cri.go:89] found id: ""
	I0819 19:14:25.227847  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.227860  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:25.227869  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:25.227933  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:25.264321  438716 cri.go:89] found id: ""
	I0819 19:14:25.264349  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.264357  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:25.264364  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:25.264427  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:25.298562  438716 cri.go:89] found id: ""
	I0819 19:14:25.298596  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.298608  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:25.298621  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:25.298638  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:25.352659  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:25.352695  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:25.366638  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:25.366665  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:25.432964  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:25.432992  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:25.433010  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:25.511487  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:25.511549  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:21.902660  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:24.401454  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:26.402255  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:24.193406  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:26.194758  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:25.919090  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:28.420031  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
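The interleaved pod_ready lines appear to come from other minikube processes in the same parallel run (438001, 438245 and 438295), each polling a metrics-server pod that never reports Ready. One way to inspect such a pod directly, sketched with a placeholder for the affected profile (the --context value is a placeholder; the pod name is taken from the log):

	kubectl --context <profile> -n kube-system get pod metrics-server-6867b74b74-kxcwh
	kubectl --context <profile> -n kube-system describe pod metrics-server-6867b74b74-kxcwh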
	I0819 19:14:28.057003  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:28.070849  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:28.070914  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:28.107817  438716 cri.go:89] found id: ""
	I0819 19:14:28.107852  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.107865  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:28.107875  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:28.107948  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:28.141816  438716 cri.go:89] found id: ""
	I0819 19:14:28.141862  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.141874  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:28.141887  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:28.141958  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:28.179854  438716 cri.go:89] found id: ""
	I0819 19:14:28.179885  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.179893  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:28.179905  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:28.179972  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:28.217335  438716 cri.go:89] found id: ""
	I0819 19:14:28.217364  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.217372  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:28.217380  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:28.217438  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:28.254161  438716 cri.go:89] found id: ""
	I0819 19:14:28.254193  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.254204  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:28.254213  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:28.254276  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:28.288658  438716 cri.go:89] found id: ""
	I0819 19:14:28.288682  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.288691  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:28.288698  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:28.288749  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:28.321957  438716 cri.go:89] found id: ""
	I0819 19:14:28.321987  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.321996  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:28.322004  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:28.322057  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:28.355032  438716 cri.go:89] found id: ""
	I0819 19:14:28.355068  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.355080  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:28.355094  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:28.355111  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:28.406220  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:28.406253  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:28.420877  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:28.420907  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:28.502576  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:28.502598  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:28.502614  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:28.582717  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:28.582769  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:28.904716  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:31.401098  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:28.195001  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:30.693605  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:30.917957  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:32.918239  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:31.121960  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:31.135502  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:31.135568  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:31.170423  438716 cri.go:89] found id: ""
	I0819 19:14:31.170451  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.170461  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:31.170467  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:31.170532  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:31.207328  438716 cri.go:89] found id: ""
	I0819 19:14:31.207356  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.207364  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:31.207370  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:31.207430  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:31.245655  438716 cri.go:89] found id: ""
	I0819 19:14:31.245687  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.245698  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:31.245707  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:31.245773  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:31.282174  438716 cri.go:89] found id: ""
	I0819 19:14:31.282208  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.282221  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:31.282230  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:31.282303  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:31.316779  438716 cri.go:89] found id: ""
	I0819 19:14:31.316810  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.316818  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:31.316826  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:31.316879  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:31.356849  438716 cri.go:89] found id: ""
	I0819 19:14:31.356884  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.356894  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:31.356900  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:31.356963  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:31.395102  438716 cri.go:89] found id: ""
	I0819 19:14:31.395135  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.395143  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:31.395150  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:31.395205  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:31.433018  438716 cri.go:89] found id: ""
	I0819 19:14:31.433045  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.433076  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:31.433091  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:31.433108  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:31.446294  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:31.446319  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:31.518158  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:31.518180  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:31.518196  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:31.600568  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:31.600611  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:31.642356  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:31.642386  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:34.195665  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:34.210300  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:34.210370  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:34.248715  438716 cri.go:89] found id: ""
	I0819 19:14:34.248753  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.248767  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:34.248775  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:34.248849  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:34.285305  438716 cri.go:89] found id: ""
	I0819 19:14:34.285334  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.285347  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:34.285355  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:34.285438  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:34.326114  438716 cri.go:89] found id: ""
	I0819 19:14:34.326148  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.326160  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:34.326168  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:34.326235  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:34.360587  438716 cri.go:89] found id: ""
	I0819 19:14:34.360616  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.360628  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:34.360638  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:34.360715  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:34.397452  438716 cri.go:89] found id: ""
	I0819 19:14:34.397483  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.397491  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:34.397498  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:34.397556  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:34.433651  438716 cri.go:89] found id: ""
	I0819 19:14:34.433683  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.433694  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:34.433702  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:34.433771  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:34.468758  438716 cri.go:89] found id: ""
	I0819 19:14:34.468787  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.468796  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:34.468802  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:34.468856  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:34.505787  438716 cri.go:89] found id: ""
	I0819 19:14:34.505816  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.505828  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:34.505842  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:34.505859  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:34.519430  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:34.519463  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:34.592785  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:34.592810  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:34.592827  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:34.671215  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:34.671254  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:34.711248  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:34.711277  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:33.403429  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:35.901124  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:33.194319  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:35.694280  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:34.918372  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:37.418982  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:37.265131  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:37.279035  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:37.279127  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:37.325556  438716 cri.go:89] found id: ""
	I0819 19:14:37.325589  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.325601  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:37.325610  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:37.325676  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:37.360514  438716 cri.go:89] found id: ""
	I0819 19:14:37.360541  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.360553  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:37.360561  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:37.360616  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:37.394428  438716 cri.go:89] found id: ""
	I0819 19:14:37.394456  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.394465  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:37.394472  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:37.394531  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:37.430221  438716 cri.go:89] found id: ""
	I0819 19:14:37.430249  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.430257  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:37.430264  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:37.430324  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:37.466598  438716 cri.go:89] found id: ""
	I0819 19:14:37.466630  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.466641  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:37.466649  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:37.466719  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:37.510455  438716 cri.go:89] found id: ""
	I0819 19:14:37.510484  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.510492  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:37.510499  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:37.510563  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:37.546122  438716 cri.go:89] found id: ""
	I0819 19:14:37.546157  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.546169  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:37.546178  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:37.546247  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:37.579425  438716 cri.go:89] found id: ""
	I0819 19:14:37.579452  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.579463  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:37.579475  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:37.579491  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:37.592673  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:37.592704  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:37.674026  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:37.674048  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:37.674065  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:37.752206  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:37.752244  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:37.791281  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:37.791321  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:40.345520  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:40.358771  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:40.358835  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:40.394515  438716 cri.go:89] found id: ""
	I0819 19:14:40.394549  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.394565  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:40.394575  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:40.394637  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:40.430971  438716 cri.go:89] found id: ""
	I0819 19:14:40.431007  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.431018  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:40.431027  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:40.431094  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:40.471417  438716 cri.go:89] found id: ""
	I0819 19:14:40.471443  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.471452  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:40.471458  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:40.471511  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:40.508641  438716 cri.go:89] found id: ""
	I0819 19:14:40.508670  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.508678  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:40.508684  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:40.508749  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:37.903083  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:40.402562  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:37.695031  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:40.193724  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:39.921480  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:42.420201  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:40.542418  438716 cri.go:89] found id: ""
	I0819 19:14:40.542456  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.542465  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:40.542472  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:40.542533  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:40.577367  438716 cri.go:89] found id: ""
	I0819 19:14:40.577399  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.577408  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:40.577414  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:40.577476  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:40.611111  438716 cri.go:89] found id: ""
	I0819 19:14:40.611138  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.611147  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:40.611155  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:40.611222  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:40.650769  438716 cri.go:89] found id: ""
	I0819 19:14:40.650797  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.650805  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:40.650814  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:40.650827  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:40.688085  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:40.688111  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:40.740187  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:40.740225  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:40.754774  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:40.754803  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:40.828689  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:40.828712  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:40.828728  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:43.419171  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:43.432127  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:43.432201  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:43.468751  438716 cri.go:89] found id: ""
	I0819 19:14:43.468778  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.468787  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:43.468803  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:43.468870  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:43.503290  438716 cri.go:89] found id: ""
	I0819 19:14:43.503319  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.503328  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:43.503334  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:43.503390  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:43.536382  438716 cri.go:89] found id: ""
	I0819 19:14:43.536416  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.536435  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:43.536443  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:43.536494  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:43.571570  438716 cri.go:89] found id: ""
	I0819 19:14:43.571602  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.571611  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:43.571617  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:43.571682  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:43.610421  438716 cri.go:89] found id: ""
	I0819 19:14:43.610455  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.610465  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:43.610473  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:43.610524  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:43.647173  438716 cri.go:89] found id: ""
	I0819 19:14:43.647200  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.647209  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:43.647215  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:43.647266  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:43.684493  438716 cri.go:89] found id: ""
	I0819 19:14:43.684525  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.684535  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:43.684541  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:43.684609  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:43.718781  438716 cri.go:89] found id: ""
	I0819 19:14:43.718811  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.718822  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:43.718834  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:43.718858  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:43.732546  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:43.732578  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:43.819640  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:43.819665  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:43.819700  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:43.900246  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:43.900286  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:43.941751  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:43.941783  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:42.901387  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:44.901876  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:42.693950  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:45.193132  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:44.918631  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:47.417977  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:46.498232  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:46.511167  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:46.511237  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:46.545493  438716 cri.go:89] found id: ""
	I0819 19:14:46.545528  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.545541  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:46.545549  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:46.545607  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:46.580599  438716 cri.go:89] found id: ""
	I0819 19:14:46.580626  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.580634  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:46.580640  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:46.580760  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:46.614515  438716 cri.go:89] found id: ""
	I0819 19:14:46.614551  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.614561  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:46.614570  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:46.614637  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:46.647767  438716 cri.go:89] found id: ""
	I0819 19:14:46.647803  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.647816  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:46.647825  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:46.647893  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:46.681660  438716 cri.go:89] found id: ""
	I0819 19:14:46.681695  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.681707  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:46.681717  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:46.681788  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:46.718828  438716 cri.go:89] found id: ""
	I0819 19:14:46.718858  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.718868  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:46.718875  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:46.718929  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:46.760524  438716 cri.go:89] found id: ""
	I0819 19:14:46.760553  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.760561  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:46.760569  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:46.760634  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:46.799014  438716 cri.go:89] found id: ""
	I0819 19:14:46.799042  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.799054  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:46.799067  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:46.799135  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:46.850769  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:46.850812  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:46.865647  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:46.865698  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:46.942197  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:46.942228  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:46.942244  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:47.019295  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:47.019337  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:49.562713  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:49.575406  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:49.575484  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:49.610067  438716 cri.go:89] found id: ""
	I0819 19:14:49.610105  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.610115  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:49.610121  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:49.610182  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:49.646164  438716 cri.go:89] found id: ""
	I0819 19:14:49.646205  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.646230  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:49.646238  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:49.646317  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:49.680268  438716 cri.go:89] found id: ""
	I0819 19:14:49.680303  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.680314  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:49.680322  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:49.680387  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:49.714952  438716 cri.go:89] found id: ""
	I0819 19:14:49.714981  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.714992  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:49.715001  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:49.715067  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:49.749483  438716 cri.go:89] found id: ""
	I0819 19:14:49.749516  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.749528  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:49.749537  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:49.749616  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:49.794506  438716 cri.go:89] found id: ""
	I0819 19:14:49.794538  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.794550  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:49.794558  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:49.794628  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:49.847284  438716 cri.go:89] found id: ""
	I0819 19:14:49.847313  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.847324  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:49.847334  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:49.847398  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:49.903800  438716 cri.go:89] found id: ""
	I0819 19:14:49.903829  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.903839  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:49.903850  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:49.903867  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:49.972836  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:49.972866  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:49.972885  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:50.049939  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:50.049976  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:50.086514  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:50.086550  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:50.140681  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:50.140718  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:46.903667  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:49.402220  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:51.402281  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:47.693723  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:49.694755  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:52.193220  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:49.919931  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:52.419880  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:52.656573  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:52.670043  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:52.670124  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:52.704514  438716 cri.go:89] found id: ""
	I0819 19:14:52.704541  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.704551  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:52.704558  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:52.704621  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:52.738329  438716 cri.go:89] found id: ""
	I0819 19:14:52.738357  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.738365  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:52.738371  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:52.738423  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:52.774886  438716 cri.go:89] found id: ""
	I0819 19:14:52.774917  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.774926  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:52.774933  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:52.774986  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:52.810262  438716 cri.go:89] found id: ""
	I0819 19:14:52.810288  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.810296  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:52.810303  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:52.810363  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:52.848429  438716 cri.go:89] found id: ""
	I0819 19:14:52.848455  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.848463  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:52.848474  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:52.848539  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:52.886135  438716 cri.go:89] found id: ""
	I0819 19:14:52.886163  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.886179  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:52.886185  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:52.886241  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:52.923288  438716 cri.go:89] found id: ""
	I0819 19:14:52.923314  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.923325  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:52.923333  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:52.923397  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:52.957273  438716 cri.go:89] found id: ""
	I0819 19:14:52.957303  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.957315  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:52.957328  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:52.957345  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:52.970687  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:52.970714  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:53.045081  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:53.045108  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:53.045125  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:53.122233  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:53.122279  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:53.161525  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:53.161554  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:53.901584  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:55.902739  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:54.194220  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:56.197070  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:54.917358  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:56.918562  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:58.919041  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:55.714177  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:55.733726  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:55.733809  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:55.781435  438716 cri.go:89] found id: ""
	I0819 19:14:55.781472  438716 logs.go:276] 0 containers: []
	W0819 19:14:55.781485  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:55.781493  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:55.781560  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:55.846316  438716 cri.go:89] found id: ""
	I0819 19:14:55.846351  438716 logs.go:276] 0 containers: []
	W0819 19:14:55.846362  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:55.846370  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:55.846439  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:55.881587  438716 cri.go:89] found id: ""
	I0819 19:14:55.881623  438716 logs.go:276] 0 containers: []
	W0819 19:14:55.881635  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:55.881644  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:55.881719  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:55.919332  438716 cri.go:89] found id: ""
	I0819 19:14:55.919374  438716 logs.go:276] 0 containers: []
	W0819 19:14:55.919382  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:55.919389  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:55.919441  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:55.954704  438716 cri.go:89] found id: ""
	I0819 19:14:55.954739  438716 logs.go:276] 0 containers: []
	W0819 19:14:55.954752  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:55.954761  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:55.954836  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:55.989289  438716 cri.go:89] found id: ""
	I0819 19:14:55.989321  438716 logs.go:276] 0 containers: []
	W0819 19:14:55.989332  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:55.989340  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:55.989406  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:56.025771  438716 cri.go:89] found id: ""
	I0819 19:14:56.025800  438716 logs.go:276] 0 containers: []
	W0819 19:14:56.025809  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:56.025816  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:56.025883  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:56.065631  438716 cri.go:89] found id: ""
	I0819 19:14:56.065673  438716 logs.go:276] 0 containers: []
	W0819 19:14:56.065686  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:56.065699  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:56.065722  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:56.119482  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:56.119523  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:56.133885  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:56.133915  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:56.207012  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:56.207033  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:56.207045  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:56.288158  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:56.288195  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:58.829677  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:58.844085  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:58.844158  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:58.880900  438716 cri.go:89] found id: ""
	I0819 19:14:58.880934  438716 logs.go:276] 0 containers: []
	W0819 19:14:58.880945  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:58.880951  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:58.881016  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:58.918833  438716 cri.go:89] found id: ""
	I0819 19:14:58.918862  438716 logs.go:276] 0 containers: []
	W0819 19:14:58.918872  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:58.918881  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:58.918939  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:58.956577  438716 cri.go:89] found id: ""
	I0819 19:14:58.956612  438716 logs.go:276] 0 containers: []
	W0819 19:14:58.956623  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:58.956634  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:58.956705  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:58.993884  438716 cri.go:89] found id: ""
	I0819 19:14:58.993914  438716 logs.go:276] 0 containers: []
	W0819 19:14:58.993923  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:58.993930  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:58.993988  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:59.031366  438716 cri.go:89] found id: ""
	I0819 19:14:59.031389  438716 logs.go:276] 0 containers: []
	W0819 19:14:59.031398  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:59.031405  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:59.031464  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:59.072014  438716 cri.go:89] found id: ""
	I0819 19:14:59.072047  438716 logs.go:276] 0 containers: []
	W0819 19:14:59.072058  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:59.072065  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:59.072129  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:59.108713  438716 cri.go:89] found id: ""
	I0819 19:14:59.108744  438716 logs.go:276] 0 containers: []
	W0819 19:14:59.108756  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:59.108765  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:59.108866  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:59.147599  438716 cri.go:89] found id: ""
	I0819 19:14:59.147634  438716 logs.go:276] 0 containers: []
	W0819 19:14:59.147647  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:59.147659  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:59.147695  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:59.224745  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:59.224781  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:59.264586  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:59.264616  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:59.317065  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:59.317104  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:59.331230  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:59.331264  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:59.398370  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:58.401471  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:00.402623  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:58.694096  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:01.193262  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:01.418063  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:03.418302  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:01.899123  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:01.912743  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:01.912824  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:01.949717  438716 cri.go:89] found id: ""
	I0819 19:15:01.949748  438716 logs.go:276] 0 containers: []
	W0819 19:15:01.949756  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:01.949763  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:01.949819  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:01.992776  438716 cri.go:89] found id: ""
	I0819 19:15:01.992802  438716 logs.go:276] 0 containers: []
	W0819 19:15:01.992812  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:01.992819  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:01.992884  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:02.030551  438716 cri.go:89] found id: ""
	I0819 19:15:02.030579  438716 logs.go:276] 0 containers: []
	W0819 19:15:02.030592  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:02.030600  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:02.030672  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:02.069927  438716 cri.go:89] found id: ""
	I0819 19:15:02.069955  438716 logs.go:276] 0 containers: []
	W0819 19:15:02.069964  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:02.069971  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:02.070031  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:02.106584  438716 cri.go:89] found id: ""
	I0819 19:15:02.106609  438716 logs.go:276] 0 containers: []
	W0819 19:15:02.106619  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:02.106629  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:02.106695  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:02.145007  438716 cri.go:89] found id: ""
	I0819 19:15:02.145035  438716 logs.go:276] 0 containers: []
	W0819 19:15:02.145044  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:02.145051  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:02.145113  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:02.180693  438716 cri.go:89] found id: ""
	I0819 19:15:02.180730  438716 logs.go:276] 0 containers: []
	W0819 19:15:02.180741  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:02.180748  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:02.180800  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:02.215563  438716 cri.go:89] found id: ""
	I0819 19:15:02.215597  438716 logs.go:276] 0 containers: []
	W0819 19:15:02.215609  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:02.215623  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:02.215641  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:02.285658  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:02.285692  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:02.285711  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:02.363620  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:02.363660  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:02.414240  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:02.414274  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:02.467336  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:02.467380  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:04.981935  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:04.995537  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:04.995611  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:05.032700  438716 cri.go:89] found id: ""
	I0819 19:15:05.032735  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.032748  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:05.032756  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:05.032827  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:05.069132  438716 cri.go:89] found id: ""
	I0819 19:15:05.069162  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.069173  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:05.069181  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:05.069247  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:05.105320  438716 cri.go:89] found id: ""
	I0819 19:15:05.105346  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.105355  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:05.105361  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:05.105421  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:05.142311  438716 cri.go:89] found id: ""
	I0819 19:15:05.142343  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.142354  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:05.142362  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:05.142412  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:05.177398  438716 cri.go:89] found id: ""
	I0819 19:15:05.177426  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.177437  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:05.177450  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:05.177506  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:05.212749  438716 cri.go:89] found id: ""
	I0819 19:15:05.212780  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.212789  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:05.212796  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:05.212854  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:05.246325  438716 cri.go:89] found id: ""
	I0819 19:15:05.246356  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.246364  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:05.246371  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:05.246420  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:05.287429  438716 cri.go:89] found id: ""
	I0819 19:15:05.287456  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.287466  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:05.287476  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:05.287489  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:05.338742  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:05.338787  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:05.352948  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:05.352978  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:05.421478  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:05.421502  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:05.421529  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:05.497772  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:05.497809  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:02.902202  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:05.403518  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:03.193491  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:05.194340  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:05.419361  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:07.918522  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:08.040403  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:08.053761  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:08.053827  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:08.087047  438716 cri.go:89] found id: ""
	I0819 19:15:08.087073  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.087082  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:08.087089  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:08.087140  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:08.122012  438716 cri.go:89] found id: ""
	I0819 19:15:08.122048  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.122059  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:08.122068  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:08.122134  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:08.155319  438716 cri.go:89] found id: ""
	I0819 19:15:08.155349  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.155360  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:08.155368  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:08.155447  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:08.196003  438716 cri.go:89] found id: ""
	I0819 19:15:08.196027  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.196035  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:08.196041  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:08.196091  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:08.230798  438716 cri.go:89] found id: ""
	I0819 19:15:08.230826  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.230836  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:08.230845  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:08.230910  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:08.267522  438716 cri.go:89] found id: ""
	I0819 19:15:08.267554  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.267562  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:08.267569  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:08.267621  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:08.304775  438716 cri.go:89] found id: ""
	I0819 19:15:08.304801  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.304809  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:08.304815  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:08.304866  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:08.344694  438716 cri.go:89] found id: ""
	I0819 19:15:08.344720  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.344734  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:08.344744  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:08.344757  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:08.383581  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:08.383619  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:08.433868  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:08.433905  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:08.447627  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:08.447657  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:08.518846  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:08.518869  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:08.518887  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:07.901746  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:09.902647  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:07.693351  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:10.193893  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:12.194400  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:09.919436  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:12.418215  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:11.104449  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:11.118149  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:11.118228  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:11.157917  438716 cri.go:89] found id: ""
	I0819 19:15:11.157951  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.157963  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:11.157971  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:11.158040  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:11.196685  438716 cri.go:89] found id: ""
	I0819 19:15:11.196711  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.196721  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:11.196729  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:11.196788  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:11.231089  438716 cri.go:89] found id: ""
	I0819 19:15:11.231124  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.231135  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:11.231144  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:11.231223  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:11.267001  438716 cri.go:89] found id: ""
	I0819 19:15:11.267032  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.267041  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:11.267048  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:11.267113  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:11.302178  438716 cri.go:89] found id: ""
	I0819 19:15:11.302210  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.302223  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:11.302232  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:11.302292  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:11.336335  438716 cri.go:89] found id: ""
	I0819 19:15:11.336368  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.336442  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:11.336458  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:11.336525  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:11.370891  438716 cri.go:89] found id: ""
	I0819 19:15:11.370926  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.370937  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:11.370945  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:11.371007  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:11.407439  438716 cri.go:89] found id: ""
	I0819 19:15:11.407466  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.407473  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:11.407482  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:11.407497  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:11.458692  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:11.458735  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:11.473104  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:11.473133  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:11.542004  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:11.542031  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:11.542050  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:11.619972  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:11.620014  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:14.159220  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:14.173135  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:14.173204  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:14.210347  438716 cri.go:89] found id: ""
	I0819 19:15:14.210377  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.210389  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:14.210398  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:14.210468  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:14.247143  438716 cri.go:89] found id: ""
	I0819 19:15:14.247169  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.247180  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:14.247187  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:14.247260  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:14.284949  438716 cri.go:89] found id: ""
	I0819 19:15:14.284981  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.284995  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:14.285003  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:14.285071  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:14.326801  438716 cri.go:89] found id: ""
	I0819 19:15:14.326826  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.326834  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:14.326842  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:14.326903  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:14.362730  438716 cri.go:89] found id: ""
	I0819 19:15:14.362764  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.362775  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:14.362783  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:14.362852  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:14.403406  438716 cri.go:89] found id: ""
	I0819 19:15:14.403437  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.403448  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:14.403456  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:14.403514  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:14.440641  438716 cri.go:89] found id: ""
	I0819 19:15:14.440670  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.440678  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:14.440685  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:14.440737  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:14.479477  438716 cri.go:89] found id: ""
	I0819 19:15:14.479511  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.479521  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:14.479530  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:14.479544  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:14.530573  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:14.530620  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:14.545329  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:14.545368  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:14.619632  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:14.619652  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:14.619680  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:14.694923  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:14.694956  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:12.401350  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:14.402845  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:14.693534  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:16.693737  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:14.420872  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:16.918227  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:18.919244  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:17.237830  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:17.250579  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:17.250645  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:17.284706  438716 cri.go:89] found id: ""
	I0819 19:15:17.284738  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.284750  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:17.284759  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:17.284832  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:17.320313  438716 cri.go:89] found id: ""
	I0819 19:15:17.320342  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.320350  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:17.320356  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:17.320419  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:17.355974  438716 cri.go:89] found id: ""
	I0819 19:15:17.356008  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.356018  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:17.356027  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:17.356093  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:17.390759  438716 cri.go:89] found id: ""
	I0819 19:15:17.390786  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.390795  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:17.390803  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:17.390861  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:17.431951  438716 cri.go:89] found id: ""
	I0819 19:15:17.431982  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.431993  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:17.432001  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:17.432068  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:17.467183  438716 cri.go:89] found id: ""
	I0819 19:15:17.467215  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.467227  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:17.467236  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:17.467306  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:17.502678  438716 cri.go:89] found id: ""
	I0819 19:15:17.502709  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.502721  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:17.502730  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:17.502801  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:17.537597  438716 cri.go:89] found id: ""
	I0819 19:15:17.537629  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.537643  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:17.537656  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:17.537672  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:17.620076  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:17.620117  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:17.659979  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:17.660009  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:17.710963  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:17.711006  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:17.725556  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:17.725590  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:17.796176  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:20.297246  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:20.311395  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:20.311476  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:20.352279  438716 cri.go:89] found id: ""
	I0819 19:15:20.352317  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.352328  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:20.352338  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:20.352401  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:20.390335  438716 cri.go:89] found id: ""
	I0819 19:15:20.390368  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.390377  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:20.390384  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:20.390450  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:20.430264  438716 cri.go:89] found id: ""
	I0819 19:15:20.430300  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.430312  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:20.430320  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:20.430386  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:20.469670  438716 cri.go:89] found id: ""
	I0819 19:15:20.469703  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.469715  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:20.469723  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:20.469790  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:20.503233  438716 cri.go:89] found id: ""
	I0819 19:15:20.503263  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.503274  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:20.503283  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:20.503371  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:16.902246  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:19.402407  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:18.693921  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:21.193124  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:21.418463  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:23.418730  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:20.538180  438716 cri.go:89] found id: ""
	I0819 19:15:20.538211  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.538223  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:20.538231  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:20.538302  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:20.573301  438716 cri.go:89] found id: ""
	I0819 19:15:20.573329  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.573337  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:20.573352  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:20.573411  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:20.606962  438716 cri.go:89] found id: ""
	I0819 19:15:20.606995  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.607007  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:20.607019  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:20.607035  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:20.658392  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:20.658428  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:20.672063  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:20.672092  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:20.747987  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:20.748010  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:20.748035  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:20.829367  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:20.829415  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:23.378885  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:23.393711  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:23.393778  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:23.430629  438716 cri.go:89] found id: ""
	I0819 19:15:23.430655  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.430665  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:23.430675  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:23.430727  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:23.467509  438716 cri.go:89] found id: ""
	I0819 19:15:23.467541  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.467552  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:23.467560  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:23.467634  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:23.505313  438716 cri.go:89] found id: ""
	I0819 19:15:23.505351  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.505359  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:23.505366  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:23.505416  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:23.543393  438716 cri.go:89] found id: ""
	I0819 19:15:23.543428  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.543441  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:23.543450  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:23.543514  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:23.578265  438716 cri.go:89] found id: ""
	I0819 19:15:23.578293  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.578301  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:23.578308  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:23.578376  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:23.613951  438716 cri.go:89] found id: ""
	I0819 19:15:23.613981  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.613989  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:23.613996  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:23.614061  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:23.647387  438716 cri.go:89] found id: ""
	I0819 19:15:23.647418  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.647426  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:23.647433  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:23.647501  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:23.682482  438716 cri.go:89] found id: ""
	I0819 19:15:23.682510  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.682519  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:23.682530  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:23.682547  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:23.696601  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:23.696629  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:23.766762  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:23.766788  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:23.766804  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:23.850947  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:23.850988  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:23.891113  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:23.891146  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:21.902926  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:24.401874  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:23.193192  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:25.193347  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:25.919555  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:28.419920  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:26.444086  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:26.457774  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:26.457844  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:26.494525  438716 cri.go:89] found id: ""
	I0819 19:15:26.494552  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.494560  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:26.494567  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:26.494618  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:26.535317  438716 cri.go:89] found id: ""
	I0819 19:15:26.535348  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.535359  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:26.535368  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:26.535437  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:26.570853  438716 cri.go:89] found id: ""
	I0819 19:15:26.570886  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.570896  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:26.570920  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:26.570987  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:26.610739  438716 cri.go:89] found id: ""
	I0819 19:15:26.610773  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.610785  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:26.610794  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:26.610885  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:26.651274  438716 cri.go:89] found id: ""
	I0819 19:15:26.651303  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.651311  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:26.651318  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:26.651367  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:26.689963  438716 cri.go:89] found id: ""
	I0819 19:15:26.689993  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.690005  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:26.690013  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:26.690083  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:26.729433  438716 cri.go:89] found id: ""
	I0819 19:15:26.729465  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.729475  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:26.729483  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:26.729548  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:26.768386  438716 cri.go:89] found id: ""
	I0819 19:15:26.768418  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.768427  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:26.768436  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:26.768449  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:26.821526  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:26.821564  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:26.835714  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:26.835763  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:26.907981  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:26.908007  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:26.908023  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:26.991969  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:26.992008  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:29.529743  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:29.544812  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:29.544883  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:29.581455  438716 cri.go:89] found id: ""
	I0819 19:15:29.581486  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.581496  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:29.581503  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:29.581559  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:29.634542  438716 cri.go:89] found id: ""
	I0819 19:15:29.634576  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.634587  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:29.634596  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:29.634663  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:29.670388  438716 cri.go:89] found id: ""
	I0819 19:15:29.670422  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.670439  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:29.670449  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:29.670511  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:29.712267  438716 cri.go:89] found id: ""
	I0819 19:15:29.712293  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.712304  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:29.712313  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:29.712376  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:29.752392  438716 cri.go:89] found id: ""
	I0819 19:15:29.752423  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.752432  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:29.752438  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:29.752500  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:29.791734  438716 cri.go:89] found id: ""
	I0819 19:15:29.791763  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.791772  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:29.791778  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:29.791830  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:29.832882  438716 cri.go:89] found id: ""
	I0819 19:15:29.832910  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.832921  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:29.832929  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:29.832986  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:29.872035  438716 cri.go:89] found id: ""
	I0819 19:15:29.872068  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.872076  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:29.872086  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:29.872098  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:29.926551  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:29.926588  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:29.940500  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:29.940537  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:30.010327  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:30.010348  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:30.010368  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:30.090864  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:30.090910  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:26.902881  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:29.401449  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:27.692753  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:29.693161  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:32.193256  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:30.421066  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:32.918642  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:32.636291  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:32.649264  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:32.649334  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:32.683746  438716 cri.go:89] found id: ""
	I0819 19:15:32.683774  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.683785  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:32.683794  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:32.683867  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:32.723805  438716 cri.go:89] found id: ""
	I0819 19:15:32.723838  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.723850  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:32.723858  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:32.723917  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:32.758119  438716 cri.go:89] found id: ""
	I0819 19:15:32.758148  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.758157  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:32.758164  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:32.758215  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:32.792726  438716 cri.go:89] found id: ""
	I0819 19:15:32.792754  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.792768  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:32.792775  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:32.792823  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:32.829180  438716 cri.go:89] found id: ""
	I0819 19:15:32.829208  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.829217  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:32.829224  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:32.829274  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:32.869045  438716 cri.go:89] found id: ""
	I0819 19:15:32.869081  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.869093  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:32.869102  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:32.869172  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:32.904780  438716 cri.go:89] found id: ""
	I0819 19:15:32.904803  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.904811  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:32.904818  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:32.904870  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:32.940846  438716 cri.go:89] found id: ""
	I0819 19:15:32.940876  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.940886  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:32.940900  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:32.940924  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:33.008569  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:33.008592  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:33.008606  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:33.092605  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:33.092657  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:33.133016  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:33.133045  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:33.188335  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:33.188376  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:31.901719  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:34.401060  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:36.401983  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:34.193690  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:36.694042  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:34.918948  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:37.418186  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:35.704043  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:35.717647  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:35.717708  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:35.752337  438716 cri.go:89] found id: ""
	I0819 19:15:35.752364  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.752372  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:35.752378  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:35.752431  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:35.787233  438716 cri.go:89] found id: ""
	I0819 19:15:35.787261  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.787269  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:35.787275  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:35.787334  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:35.819641  438716 cri.go:89] found id: ""
	I0819 19:15:35.819667  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.819697  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:35.819705  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:35.819775  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:35.856133  438716 cri.go:89] found id: ""
	I0819 19:15:35.856160  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.856169  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:35.856176  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:35.856240  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:35.889390  438716 cri.go:89] found id: ""
	I0819 19:15:35.889422  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.889432  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:35.889438  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:35.889501  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:35.927477  438716 cri.go:89] found id: ""
	I0819 19:15:35.927519  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.927531  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:35.927539  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:35.927600  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:35.961787  438716 cri.go:89] found id: ""
	I0819 19:15:35.961825  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.961837  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:35.961845  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:35.961912  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:35.998350  438716 cri.go:89] found id: ""
	I0819 19:15:35.998384  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.998396  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:35.998407  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:35.998419  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:36.054352  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:36.054394  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:36.078278  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:36.078311  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:36.166388  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:36.166416  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:36.166433  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:36.247222  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:36.247269  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:38.786510  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:38.800306  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:38.800364  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:38.834555  438716 cri.go:89] found id: ""
	I0819 19:15:38.834583  438716 logs.go:276] 0 containers: []
	W0819 19:15:38.834591  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:38.834598  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:38.834648  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:38.869078  438716 cri.go:89] found id: ""
	I0819 19:15:38.869105  438716 logs.go:276] 0 containers: []
	W0819 19:15:38.869114  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:38.869120  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:38.869174  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:38.903702  438716 cri.go:89] found id: ""
	I0819 19:15:38.903728  438716 logs.go:276] 0 containers: []
	W0819 19:15:38.903736  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:38.903743  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:38.903795  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:38.938326  438716 cri.go:89] found id: ""
	I0819 19:15:38.938352  438716 logs.go:276] 0 containers: []
	W0819 19:15:38.938360  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:38.938367  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:38.938422  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:38.976032  438716 cri.go:89] found id: ""
	I0819 19:15:38.976063  438716 logs.go:276] 0 containers: []
	W0819 19:15:38.976075  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:38.976084  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:38.976149  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:39.009957  438716 cri.go:89] found id: ""
	I0819 19:15:39.009991  438716 logs.go:276] 0 containers: []
	W0819 19:15:39.010002  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:39.010011  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:39.010077  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:39.046381  438716 cri.go:89] found id: ""
	I0819 19:15:39.046408  438716 logs.go:276] 0 containers: []
	W0819 19:15:39.046416  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:39.046422  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:39.046474  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:39.083022  438716 cri.go:89] found id: ""
	I0819 19:15:39.083050  438716 logs.go:276] 0 containers: []
	W0819 19:15:39.083058  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:39.083067  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:39.083079  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:39.160731  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:39.160768  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:39.204846  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:39.204879  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:39.259248  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:39.259287  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:39.273764  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:39.273796  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:39.344477  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
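	(Note, not part of the captured log.) The block above is one iteration of minikube's control-plane wait loop: crictl reports no kube-apiserver, etcd, or other control-plane container, so the fallback "describe nodes" call against localhost:8443 is refused. A minimal sketch of the same checks run by hand on the node, assuming shell access (e.g. minikube ssh); every command below is copied verbatim from the log entries above:

	    sudo crictl ps -a --quiet --name=kube-apiserver        # empty output here: the apiserver container was never created
	    sudo journalctl -u kubelet -n 400                      # kubelet-side view of why the static pods are not coming up
	    sudo journalctl -u crio -n 400                         # CRI-O runtime logs for the same window
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig            # refused while nothing is listening on 8443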
	I0819 19:15:38.402275  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:40.901494  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:39.194367  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:41.692933  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:39.419291  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:41.919708  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:43.919984  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:41.845258  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:41.861691  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:41.861754  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:41.908235  438716 cri.go:89] found id: ""
	I0819 19:15:41.908269  438716 logs.go:276] 0 containers: []
	W0819 19:15:41.908281  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:41.908289  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:41.908357  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:41.965631  438716 cri.go:89] found id: ""
	I0819 19:15:41.965657  438716 logs.go:276] 0 containers: []
	W0819 19:15:41.965667  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:41.965673  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:41.965732  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:42.004540  438716 cri.go:89] found id: ""
	I0819 19:15:42.004569  438716 logs.go:276] 0 containers: []
	W0819 19:15:42.004578  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:42.004585  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:42.004650  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:42.042189  438716 cri.go:89] found id: ""
	I0819 19:15:42.042215  438716 logs.go:276] 0 containers: []
	W0819 19:15:42.042224  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:42.042231  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:42.042299  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:42.079313  438716 cri.go:89] found id: ""
	I0819 19:15:42.079349  438716 logs.go:276] 0 containers: []
	W0819 19:15:42.079361  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:42.079370  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:42.079450  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:42.116130  438716 cri.go:89] found id: ""
	I0819 19:15:42.116164  438716 logs.go:276] 0 containers: []
	W0819 19:15:42.116176  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:42.116184  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:42.116253  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:42.154886  438716 cri.go:89] found id: ""
	I0819 19:15:42.154919  438716 logs.go:276] 0 containers: []
	W0819 19:15:42.154928  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:42.154935  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:42.154987  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:42.191204  438716 cri.go:89] found id: ""
	I0819 19:15:42.191237  438716 logs.go:276] 0 containers: []
	W0819 19:15:42.191248  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:42.191258  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:42.191275  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:42.244395  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:42.244434  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:42.258029  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:42.258066  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:42.323461  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:42.323481  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:42.323498  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:42.401932  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:42.401969  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:44.943615  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:44.958243  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:44.958315  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:44.995181  438716 cri.go:89] found id: ""
	I0819 19:15:44.995217  438716 logs.go:276] 0 containers: []
	W0819 19:15:44.995236  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:44.995244  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:44.995309  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:45.030705  438716 cri.go:89] found id: ""
	I0819 19:15:45.030743  438716 logs.go:276] 0 containers: []
	W0819 19:15:45.030752  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:45.030759  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:45.030814  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:45.068186  438716 cri.go:89] found id: ""
	I0819 19:15:45.068215  438716 logs.go:276] 0 containers: []
	W0819 19:15:45.068224  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:45.068231  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:45.068314  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:45.105415  438716 cri.go:89] found id: ""
	I0819 19:15:45.105443  438716 logs.go:276] 0 containers: []
	W0819 19:15:45.105452  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:45.105458  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:45.105517  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:45.143628  438716 cri.go:89] found id: ""
	I0819 19:15:45.143662  438716 logs.go:276] 0 containers: []
	W0819 19:15:45.143694  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:45.143704  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:45.143771  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:45.184896  438716 cri.go:89] found id: ""
	I0819 19:15:45.184922  438716 logs.go:276] 0 containers: []
	W0819 19:15:45.184930  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:45.184937  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:45.185000  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:45.222599  438716 cri.go:89] found id: ""
	I0819 19:15:45.222631  438716 logs.go:276] 0 containers: []
	W0819 19:15:45.222639  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:45.222645  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:45.222700  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:45.260310  438716 cri.go:89] found id: ""
	I0819 19:15:45.260341  438716 logs.go:276] 0 containers: []
	W0819 19:15:45.260352  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:45.260361  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:45.260379  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:45.273687  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:45.273718  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:45.351367  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:45.351390  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:45.351407  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:45.428751  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:45.428787  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:45.468830  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:45.468869  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:42.902576  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:45.402812  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:43.693205  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:46.192804  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:46.419903  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:48.918620  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:48.023654  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:48.037206  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:48.037294  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:48.071647  438716 cri.go:89] found id: ""
	I0819 19:15:48.071686  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.071695  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:48.071704  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:48.071765  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:48.106542  438716 cri.go:89] found id: ""
	I0819 19:15:48.106575  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.106586  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:48.106596  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:48.106662  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:48.151917  438716 cri.go:89] found id: ""
	I0819 19:15:48.151949  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.151959  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:48.151966  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:48.152022  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:48.190095  438716 cri.go:89] found id: ""
	I0819 19:15:48.190125  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.190137  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:48.190146  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:48.190211  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:48.227193  438716 cri.go:89] found id: ""
	I0819 19:15:48.227228  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.227240  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:48.227248  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:48.227317  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:48.261353  438716 cri.go:89] found id: ""
	I0819 19:15:48.261386  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.261396  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:48.261403  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:48.261455  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:48.295749  438716 cri.go:89] found id: ""
	I0819 19:15:48.295782  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.295794  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:48.295803  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:48.295874  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:48.338350  438716 cri.go:89] found id: ""
	I0819 19:15:48.338383  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.338394  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:48.338404  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:48.338420  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:48.420705  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:48.420749  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:48.464114  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:48.464153  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:48.519461  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:48.519505  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:48.534324  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:48.534357  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:48.603580  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:47.900813  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:49.902363  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:48.194425  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:50.693598  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:51.419909  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:53.918494  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:51.104343  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:51.117552  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:51.117629  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:51.150630  438716 cri.go:89] found id: ""
	I0819 19:15:51.150665  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.150677  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:51.150691  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:51.150765  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:51.184316  438716 cri.go:89] found id: ""
	I0819 19:15:51.184346  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.184356  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:51.184362  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:51.184410  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:51.221252  438716 cri.go:89] found id: ""
	I0819 19:15:51.221277  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.221286  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:51.221292  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:51.221349  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:51.255727  438716 cri.go:89] found id: ""
	I0819 19:15:51.255755  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.255763  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:51.255769  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:51.255823  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:51.290615  438716 cri.go:89] found id: ""
	I0819 19:15:51.290651  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.290660  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:51.290667  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:51.290721  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:51.326895  438716 cri.go:89] found id: ""
	I0819 19:15:51.326922  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.326930  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:51.326937  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:51.326987  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:51.365516  438716 cri.go:89] found id: ""
	I0819 19:15:51.365547  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.365558  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:51.365566  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:51.365632  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:51.399002  438716 cri.go:89] found id: ""
	I0819 19:15:51.399030  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.399038  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:51.399048  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:51.399059  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:51.453481  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:51.453524  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:51.467246  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:51.467277  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:51.548547  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:51.548578  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:51.548595  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:51.635627  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:51.635670  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:54.175003  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:54.190462  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:54.190537  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:54.232140  438716 cri.go:89] found id: ""
	I0819 19:15:54.232168  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.232178  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:54.232186  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:54.232254  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:54.267700  438716 cri.go:89] found id: ""
	I0819 19:15:54.267732  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.267742  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:54.267748  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:54.267807  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:54.306272  438716 cri.go:89] found id: ""
	I0819 19:15:54.306300  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.306308  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:54.306315  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:54.306368  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:54.341503  438716 cri.go:89] found id: ""
	I0819 19:15:54.341536  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.341549  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:54.341556  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:54.341609  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:54.375535  438716 cri.go:89] found id: ""
	I0819 19:15:54.375570  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.375582  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:54.375591  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:54.375661  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:54.409611  438716 cri.go:89] found id: ""
	I0819 19:15:54.409641  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.409653  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:54.409662  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:54.409731  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:54.444318  438716 cri.go:89] found id: ""
	I0819 19:15:54.444346  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.444358  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:54.444366  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:54.444425  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:54.480746  438716 cri.go:89] found id: ""
	I0819 19:15:54.480777  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.480789  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:54.480802  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:54.480817  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:54.534209  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:54.534245  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:54.549557  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:54.549598  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:54.625086  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:54.625111  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:54.625136  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:54.705549  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:54.705589  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:52.401150  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:54.402049  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:56.402545  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:52.693826  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:54.694875  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:57.193741  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:56.418166  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:58.418955  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:57.257440  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:57.276724  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:57.276812  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:57.319032  438716 cri.go:89] found id: ""
	I0819 19:15:57.319062  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.319073  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:57.319081  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:57.319163  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:57.357093  438716 cri.go:89] found id: ""
	I0819 19:15:57.357129  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.357140  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:57.357152  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:57.357222  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:57.393978  438716 cri.go:89] found id: ""
	I0819 19:15:57.394013  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.394025  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:57.394033  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:57.394102  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:57.428731  438716 cri.go:89] found id: ""
	I0819 19:15:57.428760  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.428768  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:57.428775  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:57.428824  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:57.467772  438716 cri.go:89] found id: ""
	I0819 19:15:57.467810  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.467822  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:57.467832  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:57.467904  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:57.502398  438716 cri.go:89] found id: ""
	I0819 19:15:57.502434  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.502444  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:57.502450  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:57.502503  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:57.536729  438716 cri.go:89] found id: ""
	I0819 19:15:57.536760  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.536771  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:57.536779  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:57.536845  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:57.574738  438716 cri.go:89] found id: ""
	I0819 19:15:57.574762  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.574770  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:57.574780  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:57.574793  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:57.630063  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:57.630113  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:57.643083  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:57.643111  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:57.725081  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:57.725104  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:57.725118  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:57.805065  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:57.805105  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:00.344557  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:00.357940  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:00.358005  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:00.399319  438716 cri.go:89] found id: ""
	I0819 19:16:00.399355  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.399368  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:00.399377  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:00.399446  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:00.444223  438716 cri.go:89] found id: ""
	I0819 19:16:00.444254  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.444264  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:00.444271  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:00.444323  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:00.479903  438716 cri.go:89] found id: ""
	I0819 19:16:00.479932  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.479942  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:00.479948  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:00.480003  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:00.515923  438716 cri.go:89] found id: ""
	I0819 19:16:00.515954  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.515966  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:00.515974  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:00.516043  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:58.901349  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:00.902114  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:59.194660  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:01.693174  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:00.419210  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:02.918814  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:00.551319  438716 cri.go:89] found id: ""
	I0819 19:16:00.551348  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.551360  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:00.551370  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:00.551434  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:00.587847  438716 cri.go:89] found id: ""
	I0819 19:16:00.587882  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.587892  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:00.587901  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:00.587976  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:00.624769  438716 cri.go:89] found id: ""
	I0819 19:16:00.624800  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.624812  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:00.624820  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:00.624894  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:00.659300  438716 cri.go:89] found id: ""
	I0819 19:16:00.659330  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.659342  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:00.659355  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:00.659371  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:00.739073  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:00.739113  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:00.779087  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:00.779116  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:00.831864  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:00.831914  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:00.845832  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:00.845863  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:00.920622  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:03.420751  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:03.434599  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:03.434664  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:03.469288  438716 cri.go:89] found id: ""
	I0819 19:16:03.469326  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.469349  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:03.469372  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:03.469445  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:03.507885  438716 cri.go:89] found id: ""
	I0819 19:16:03.507911  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.507927  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:03.507934  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:03.507987  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:03.543805  438716 cri.go:89] found id: ""
	I0819 19:16:03.543837  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.543847  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:03.543854  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:03.543928  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:03.584060  438716 cri.go:89] found id: ""
	I0819 19:16:03.584093  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.584105  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:03.584114  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:03.584202  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:03.619724  438716 cri.go:89] found id: ""
	I0819 19:16:03.619758  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.619769  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:03.619776  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:03.619854  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:03.657180  438716 cri.go:89] found id: ""
	I0819 19:16:03.657213  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.657225  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:03.657234  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:03.657303  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:03.695099  438716 cri.go:89] found id: ""
	I0819 19:16:03.695125  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.695134  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:03.695139  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:03.695193  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:03.730263  438716 cri.go:89] found id: ""
	I0819 19:16:03.730291  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.730302  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:03.730314  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:03.730331  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:03.780776  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:03.780816  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:03.795381  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:03.795419  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:03.869995  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:03.870016  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:03.870029  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:03.949654  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:03.949691  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:03.402500  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:05.902412  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:03.694220  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:06.193280  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:04.919284  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:07.418061  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:06.493589  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:06.506758  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:06.506834  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:06.545325  438716 cri.go:89] found id: ""
	I0819 19:16:06.545357  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.545370  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:06.545378  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:06.545443  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:06.581708  438716 cri.go:89] found id: ""
	I0819 19:16:06.581741  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.581753  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:06.581761  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:06.581828  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:06.626543  438716 cri.go:89] found id: ""
	I0819 19:16:06.626588  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.626600  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:06.626609  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:06.626676  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:06.662466  438716 cri.go:89] found id: ""
	I0819 19:16:06.662499  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.662509  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:06.662518  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:06.662585  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:06.701584  438716 cri.go:89] found id: ""
	I0819 19:16:06.701619  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.701628  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:06.701635  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:06.701688  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:06.736245  438716 cri.go:89] found id: ""
	I0819 19:16:06.736280  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.736292  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:06.736300  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:06.736392  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:06.774411  438716 cri.go:89] found id: ""
	I0819 19:16:06.774439  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.774447  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:06.774454  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:06.774510  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:06.809560  438716 cri.go:89] found id: ""
	I0819 19:16:06.809597  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.809609  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:06.809624  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:06.809648  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:06.884841  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:06.884862  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:06.884878  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:06.971467  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:06.971507  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:07.010737  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:07.010767  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:07.063807  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:07.063846  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:09.578451  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:09.591643  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:09.591737  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:09.625607  438716 cri.go:89] found id: ""
	I0819 19:16:09.625639  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.625650  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:09.625659  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:09.625727  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:09.669145  438716 cri.go:89] found id: ""
	I0819 19:16:09.669177  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.669185  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:09.669191  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:09.669254  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:09.707035  438716 cri.go:89] found id: ""
	I0819 19:16:09.707064  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.707073  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:09.707080  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:09.707142  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:09.742089  438716 cri.go:89] found id: ""
	I0819 19:16:09.742116  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.742125  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:09.742132  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:09.742193  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:09.782736  438716 cri.go:89] found id: ""
	I0819 19:16:09.782774  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.782785  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:09.782794  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:09.782860  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:09.818003  438716 cri.go:89] found id: ""
	I0819 19:16:09.818031  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.818040  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:09.818047  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:09.818110  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:09.852716  438716 cri.go:89] found id: ""
	I0819 19:16:09.852748  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.852757  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:09.852764  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:09.852828  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:09.887176  438716 cri.go:89] found id: ""
	I0819 19:16:09.887206  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.887218  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:09.887230  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:09.887247  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:09.901547  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:09.901573  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:09.969153  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:09.969190  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:09.969205  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:10.053777  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:10.053820  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:10.100888  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:10.100916  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:08.401650  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:10.402279  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:08.194305  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:10.693097  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:09.418856  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:11.918836  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:12.655112  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:12.667824  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:12.667897  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:12.702337  438716 cri.go:89] found id: ""
	I0819 19:16:12.702364  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.702373  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:12.702379  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:12.702432  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:12.736628  438716 cri.go:89] found id: ""
	I0819 19:16:12.736655  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.736663  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:12.736669  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:12.736720  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:12.773598  438716 cri.go:89] found id: ""
	I0819 19:16:12.773628  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.773636  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:12.773643  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:12.773695  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:12.806584  438716 cri.go:89] found id: ""
	I0819 19:16:12.806620  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.806632  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:12.806640  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:12.806723  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:12.840535  438716 cri.go:89] found id: ""
	I0819 19:16:12.840561  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.840569  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:12.840575  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:12.840639  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:12.877680  438716 cri.go:89] found id: ""
	I0819 19:16:12.877712  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.877721  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:12.877728  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:12.877779  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:12.912226  438716 cri.go:89] found id: ""
	I0819 19:16:12.912253  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.912264  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:12.912272  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:12.912342  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:12.953463  438716 cri.go:89] found id: ""
	I0819 19:16:12.953493  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.953504  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:12.953524  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:12.953542  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:13.007648  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:13.007691  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:13.022452  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:13.022494  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:13.092411  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:13.092439  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:13.092455  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:13.168711  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:13.168750  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:12.903478  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:15.402551  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:12.693162  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:14.698051  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:17.193988  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:14.417821  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:16.418541  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:18.918478  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:15.711501  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:15.724841  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:15.724921  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:15.760120  438716 cri.go:89] found id: ""
	I0819 19:16:15.760149  438716 logs.go:276] 0 containers: []
	W0819 19:16:15.760158  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:15.760166  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:15.760234  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:15.794959  438716 cri.go:89] found id: ""
	I0819 19:16:15.794988  438716 logs.go:276] 0 containers: []
	W0819 19:16:15.794996  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:15.795002  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:15.795054  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:15.842776  438716 cri.go:89] found id: ""
	I0819 19:16:15.842804  438716 logs.go:276] 0 containers: []
	W0819 19:16:15.842814  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:15.842820  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:15.842874  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:15.882134  438716 cri.go:89] found id: ""
	I0819 19:16:15.882167  438716 logs.go:276] 0 containers: []
	W0819 19:16:15.882178  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:15.882187  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:15.882251  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:15.919296  438716 cri.go:89] found id: ""
	I0819 19:16:15.919325  438716 logs.go:276] 0 containers: []
	W0819 19:16:15.919336  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:15.919345  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:15.919409  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:15.956401  438716 cri.go:89] found id: ""
	I0819 19:16:15.956429  438716 logs.go:276] 0 containers: []
	W0819 19:16:15.956437  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:15.956444  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:15.956507  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:15.994271  438716 cri.go:89] found id: ""
	I0819 19:16:15.994304  438716 logs.go:276] 0 containers: []
	W0819 19:16:15.994314  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:15.994320  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:15.994378  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:16.033685  438716 cri.go:89] found id: ""
	I0819 19:16:16.033714  438716 logs.go:276] 0 containers: []
	W0819 19:16:16.033724  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:16.033736  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:16.033754  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:16.083929  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:16.083964  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:16.107309  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:16.107342  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:16.193657  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:16.193681  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:16.193697  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:16.276974  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:16.277016  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:18.818532  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:18.831586  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:18.831655  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:18.866663  438716 cri.go:89] found id: ""
	I0819 19:16:18.866689  438716 logs.go:276] 0 containers: []
	W0819 19:16:18.866700  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:18.866709  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:18.866769  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:18.900711  438716 cri.go:89] found id: ""
	I0819 19:16:18.900746  438716 logs.go:276] 0 containers: []
	W0819 19:16:18.900757  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:18.900765  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:18.900849  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:18.935156  438716 cri.go:89] found id: ""
	I0819 19:16:18.935179  438716 logs.go:276] 0 containers: []
	W0819 19:16:18.935186  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:18.935193  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:18.935246  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:18.973853  438716 cri.go:89] found id: ""
	I0819 19:16:18.973889  438716 logs.go:276] 0 containers: []
	W0819 19:16:18.973902  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:18.973911  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:18.973978  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:19.014212  438716 cri.go:89] found id: ""
	I0819 19:16:19.014241  438716 logs.go:276] 0 containers: []
	W0819 19:16:19.014250  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:19.014255  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:19.014317  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:19.056089  438716 cri.go:89] found id: ""
	I0819 19:16:19.056125  438716 logs.go:276] 0 containers: []
	W0819 19:16:19.056137  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:19.056146  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:19.056211  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:19.091372  438716 cri.go:89] found id: ""
	I0819 19:16:19.091399  438716 logs.go:276] 0 containers: []
	W0819 19:16:19.091411  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:19.091420  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:19.091478  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:19.129737  438716 cri.go:89] found id: ""
	I0819 19:16:19.129767  438716 logs.go:276] 0 containers: []
	W0819 19:16:19.129777  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:19.129787  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:19.129800  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:19.207325  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:19.207360  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:19.247780  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:19.247816  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:19.302496  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:19.302543  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:19.317706  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:19.317739  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:19.395029  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:17.901762  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:19.901818  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:19.195079  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:21.693863  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:21.418534  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:23.420217  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:21.895538  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:21.910595  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:21.910658  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:21.948363  438716 cri.go:89] found id: ""
	I0819 19:16:21.948398  438716 logs.go:276] 0 containers: []
	W0819 19:16:21.948410  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:21.948419  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:21.948492  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:21.983391  438716 cri.go:89] found id: ""
	I0819 19:16:21.983428  438716 logs.go:276] 0 containers: []
	W0819 19:16:21.983440  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:21.983449  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:21.983520  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:22.022383  438716 cri.go:89] found id: ""
	I0819 19:16:22.022415  438716 logs.go:276] 0 containers: []
	W0819 19:16:22.022427  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:22.022436  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:22.022493  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:22.060676  438716 cri.go:89] found id: ""
	I0819 19:16:22.060707  438716 logs.go:276] 0 containers: []
	W0819 19:16:22.060716  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:22.060725  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:22.060778  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:22.095188  438716 cri.go:89] found id: ""
	I0819 19:16:22.095218  438716 logs.go:276] 0 containers: []
	W0819 19:16:22.095227  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:22.095234  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:22.095300  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:22.131164  438716 cri.go:89] found id: ""
	I0819 19:16:22.131192  438716 logs.go:276] 0 containers: []
	W0819 19:16:22.131200  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:22.131209  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:22.131275  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:22.166539  438716 cri.go:89] found id: ""
	I0819 19:16:22.166566  438716 logs.go:276] 0 containers: []
	W0819 19:16:22.166573  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:22.166580  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:22.166643  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:22.205604  438716 cri.go:89] found id: ""
	I0819 19:16:22.205631  438716 logs.go:276] 0 containers: []
	W0819 19:16:22.205640  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:22.205649  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:22.205662  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:22.265650  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:22.265689  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:22.280401  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:22.280443  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:22.356818  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:22.356851  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:22.356872  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:22.437678  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:22.437719  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:24.979655  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:24.993462  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:24.993526  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:25.029955  438716 cri.go:89] found id: ""
	I0819 19:16:25.029983  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.029992  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:25.029999  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:25.030049  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:25.068478  438716 cri.go:89] found id: ""
	I0819 19:16:25.068507  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.068518  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:25.068527  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:25.068594  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:25.105209  438716 cri.go:89] found id: ""
	I0819 19:16:25.105238  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.105247  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:25.105256  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:25.105327  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:25.143166  438716 cri.go:89] found id: ""
	I0819 19:16:25.143203  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.143218  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:25.143225  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:25.143279  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:25.177993  438716 cri.go:89] found id: ""
	I0819 19:16:25.178023  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.178035  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:25.178044  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:25.178129  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:25.216473  438716 cri.go:89] found id: ""
	I0819 19:16:25.216501  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.216523  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:25.216540  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:25.216603  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:25.251454  438716 cri.go:89] found id: ""
	I0819 19:16:25.251486  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.251495  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:25.251501  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:25.251555  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:25.287145  438716 cri.go:89] found id: ""
	I0819 19:16:25.287179  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.287188  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:25.287198  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:25.287210  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:25.371571  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:25.371619  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:25.418247  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:25.418277  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:25.472209  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:25.472248  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:25.486286  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:25.486315  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 19:16:21.902887  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:23.904358  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:26.403026  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:24.193797  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:26.194535  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:25.919371  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:28.418267  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	W0819 19:16:25.554470  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:28.055382  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:28.068750  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:28.068827  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:28.101856  438716 cri.go:89] found id: ""
	I0819 19:16:28.101891  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.101903  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:28.101912  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:28.101977  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:28.136402  438716 cri.go:89] found id: ""
	I0819 19:16:28.136437  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.136449  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:28.136460  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:28.136528  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:28.171766  438716 cri.go:89] found id: ""
	I0819 19:16:28.171795  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.171803  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:28.171809  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:28.171864  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:28.206228  438716 cri.go:89] found id: ""
	I0819 19:16:28.206256  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.206264  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:28.206272  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:28.206337  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:28.248877  438716 cri.go:89] found id: ""
	I0819 19:16:28.248912  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.248923  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:28.248931  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:28.249002  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:28.290160  438716 cri.go:89] found id: ""
	I0819 19:16:28.290201  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.290212  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:28.290221  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:28.290287  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:28.340413  438716 cri.go:89] found id: ""
	I0819 19:16:28.340445  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.340454  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:28.340461  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:28.340513  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:28.385486  438716 cri.go:89] found id: ""
	I0819 19:16:28.385513  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.385521  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:28.385532  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:28.385544  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:28.441987  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:28.442029  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:28.456509  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:28.456538  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:28.527941  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:28.527976  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:28.527993  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:28.612696  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:28.612738  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:28.901312  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:30.901640  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:28.693578  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:30.693686  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:30.418811  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:32.919696  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:31.154773  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:31.168718  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:31.168789  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:31.205365  438716 cri.go:89] found id: ""
	I0819 19:16:31.205399  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.205411  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:31.205419  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:31.205496  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:31.238829  438716 cri.go:89] found id: ""
	I0819 19:16:31.238871  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.238879  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:31.238886  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:31.238936  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:31.273229  438716 cri.go:89] found id: ""
	I0819 19:16:31.273259  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.273304  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:31.273313  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:31.273377  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:31.309559  438716 cri.go:89] found id: ""
	I0819 19:16:31.309601  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.309613  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:31.309622  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:31.309689  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:31.344939  438716 cri.go:89] found id: ""
	I0819 19:16:31.344971  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.344981  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:31.344987  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:31.345043  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:31.382423  438716 cri.go:89] found id: ""
	I0819 19:16:31.382455  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.382468  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:31.382474  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:31.382525  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:31.420148  438716 cri.go:89] found id: ""
	I0819 19:16:31.420174  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.420184  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:31.420192  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:31.420262  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:31.455691  438716 cri.go:89] found id: ""
	I0819 19:16:31.455720  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.455730  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:31.455740  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:31.455753  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:31.509501  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:31.509549  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:31.523650  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:31.523693  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:31.591535  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:31.591557  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:31.591574  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:31.674038  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:31.674077  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:34.216506  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:34.232782  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:34.232875  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:34.286103  438716 cri.go:89] found id: ""
	I0819 19:16:34.286136  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.286147  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:34.286156  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:34.286221  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:34.324193  438716 cri.go:89] found id: ""
	I0819 19:16:34.324220  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.324229  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:34.324235  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:34.324292  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:34.382777  438716 cri.go:89] found id: ""
	I0819 19:16:34.382804  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.382814  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:34.382822  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:34.382887  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:34.420714  438716 cri.go:89] found id: ""
	I0819 19:16:34.420743  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.420753  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:34.420771  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:34.420840  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:34.455338  438716 cri.go:89] found id: ""
	I0819 19:16:34.455369  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.455381  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:34.455391  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:34.455467  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:34.489528  438716 cri.go:89] found id: ""
	I0819 19:16:34.489566  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.489575  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:34.489581  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:34.489634  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:34.523830  438716 cri.go:89] found id: ""
	I0819 19:16:34.523857  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.523866  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:34.523873  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:34.523940  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:34.559023  438716 cri.go:89] found id: ""
	I0819 19:16:34.559052  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.559063  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:34.559077  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:34.559092  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:34.639116  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:34.639159  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:34.675990  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:34.676017  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:34.730900  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:34.730935  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:34.744938  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:34.744964  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:34.816267  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:32.902138  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:35.401865  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:32.696537  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:35.192648  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:35.687633  438245 pod_ready.go:82] duration metric: took 4m0.000667446s for pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace to be "Ready" ...
	E0819 19:16:35.687688  438245 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0819 19:16:35.687715  438245 pod_ready.go:39] duration metric: took 4m13.552784118s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:16:35.687770  438245 kubeadm.go:597] duration metric: took 4m20.936149722s to restartPrimaryControlPlane
	W0819 19:16:35.687875  438245 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 19:16:35.687929  438245 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 19:16:35.419327  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:37.420007  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:37.317314  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:37.331915  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:37.331982  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:37.370233  438716 cri.go:89] found id: ""
	I0819 19:16:37.370261  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.370269  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:37.370276  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:37.370343  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:37.409042  438716 cri.go:89] found id: ""
	I0819 19:16:37.409071  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.409082  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:37.409090  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:37.409161  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:37.445903  438716 cri.go:89] found id: ""
	I0819 19:16:37.445932  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.445941  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:37.445948  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:37.445999  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:37.484275  438716 cri.go:89] found id: ""
	I0819 19:16:37.484318  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.484328  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:37.484334  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:37.484393  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:37.528131  438716 cri.go:89] found id: ""
	I0819 19:16:37.528161  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.528174  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:37.528180  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:37.528243  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:37.563374  438716 cri.go:89] found id: ""
	I0819 19:16:37.563406  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.563414  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:37.563421  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:37.563473  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:37.597234  438716 cri.go:89] found id: ""
	I0819 19:16:37.597260  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.597267  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:37.597274  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:37.597329  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:37.634809  438716 cri.go:89] found id: ""
	I0819 19:16:37.634845  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.634854  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:37.634864  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:37.634879  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:37.704354  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:37.704380  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:37.704396  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:37.788606  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:37.788646  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:37.830486  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:37.830513  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:37.890642  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:37.890681  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:40.405473  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:40.420019  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:40.420094  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:40.458558  438716 cri.go:89] found id: ""
	I0819 19:16:40.458586  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.458598  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:40.458606  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:40.458671  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:40.500353  438716 cri.go:89] found id: ""
	I0819 19:16:40.500379  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.500388  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:40.500394  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:40.500445  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:37.901881  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:39.902097  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:39.918877  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:41.919112  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:43.920092  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:40.534281  438716 cri.go:89] found id: ""
	I0819 19:16:40.534307  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.534316  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:40.534322  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:40.534379  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:40.569537  438716 cri.go:89] found id: ""
	I0819 19:16:40.569568  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.569578  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:40.569587  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:40.569654  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:40.603066  438716 cri.go:89] found id: ""
	I0819 19:16:40.603097  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.603110  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:40.603118  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:40.603171  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:40.637598  438716 cri.go:89] found id: ""
	I0819 19:16:40.637628  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.637637  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:40.637643  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:40.637704  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:40.673583  438716 cri.go:89] found id: ""
	I0819 19:16:40.673616  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.673629  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:40.673637  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:40.673692  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:40.708324  438716 cri.go:89] found id: ""
	I0819 19:16:40.708354  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.708363  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:40.708373  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:40.708387  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:40.789743  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:40.789782  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:40.830849  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:40.830884  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:40.882662  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:40.882700  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:40.896843  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:40.896869  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:40.969491  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:43.470579  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:43.483791  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:43.483876  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:43.523764  438716 cri.go:89] found id: ""
	I0819 19:16:43.523797  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.523809  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:43.523817  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:43.523882  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:43.557925  438716 cri.go:89] found id: ""
	I0819 19:16:43.557953  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.557960  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:43.557966  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:43.558017  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:43.591324  438716 cri.go:89] found id: ""
	I0819 19:16:43.591355  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.591364  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:43.591370  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:43.591421  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:43.625798  438716 cri.go:89] found id: ""
	I0819 19:16:43.625826  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.625834  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:43.625840  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:43.625898  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:43.659787  438716 cri.go:89] found id: ""
	I0819 19:16:43.659815  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.659823  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:43.659830  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:43.659882  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:43.692982  438716 cri.go:89] found id: ""
	I0819 19:16:43.693008  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.693017  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:43.693024  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:43.693075  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:43.726059  438716 cri.go:89] found id: ""
	I0819 19:16:43.726092  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.726104  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:43.726113  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:43.726187  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:43.760906  438716 cri.go:89] found id: ""
	I0819 19:16:43.760947  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.760958  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:43.760971  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:43.760994  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:43.812249  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:43.812285  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:43.826538  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:43.826566  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:43.894904  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:43.894926  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:43.894941  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:43.975746  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:43.975796  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:41.902398  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:43.902728  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:46.401834  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:46.419345  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:48.918688  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:46.515329  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:46.529088  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:46.529170  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:46.564525  438716 cri.go:89] found id: ""
	I0819 19:16:46.564557  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.564570  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:46.564578  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:46.564647  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:46.598457  438716 cri.go:89] found id: ""
	I0819 19:16:46.598485  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.598494  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:46.598499  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:46.598549  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:46.631767  438716 cri.go:89] found id: ""
	I0819 19:16:46.631798  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.631807  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:46.631814  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:46.631867  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:46.664978  438716 cri.go:89] found id: ""
	I0819 19:16:46.665013  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.665026  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:46.665034  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:46.665094  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:46.701024  438716 cri.go:89] found id: ""
	I0819 19:16:46.701052  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.701061  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:46.701067  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:46.701132  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:46.735834  438716 cri.go:89] found id: ""
	I0819 19:16:46.735874  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.735886  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:46.735894  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:46.735978  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:46.773392  438716 cri.go:89] found id: ""
	I0819 19:16:46.773426  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.773437  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:46.773445  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:46.773498  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:46.819800  438716 cri.go:89] found id: ""
	I0819 19:16:46.819829  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.819841  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:46.819869  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:46.819889  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:46.860633  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:46.860669  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:46.911895  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:46.911936  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:46.927388  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:46.927422  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:46.998601  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:46.998628  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:46.998645  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:49.585303  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:49.598962  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:49.599032  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:49.631891  438716 cri.go:89] found id: ""
	I0819 19:16:49.631920  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.631931  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:49.631940  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:49.631998  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:49.671731  438716 cri.go:89] found id: ""
	I0819 19:16:49.671761  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.671777  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:49.671786  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:49.671846  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:49.707517  438716 cri.go:89] found id: ""
	I0819 19:16:49.707556  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.707568  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:49.707578  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:49.707651  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:49.744255  438716 cri.go:89] found id: ""
	I0819 19:16:49.744289  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.744299  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:49.744305  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:49.744357  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:49.779224  438716 cri.go:89] found id: ""
	I0819 19:16:49.779252  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.779259  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:49.779266  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:49.779322  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:49.815641  438716 cri.go:89] found id: ""
	I0819 19:16:49.815689  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.815701  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:49.815711  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:49.815769  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:49.851861  438716 cri.go:89] found id: ""
	I0819 19:16:49.851894  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.851906  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:49.851915  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:49.851984  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:49.888140  438716 cri.go:89] found id: ""
	I0819 19:16:49.888173  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.888186  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:49.888199  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:49.888215  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:49.940389  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:49.940430  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:49.954519  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:49.954553  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:50.028462  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:50.028486  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:50.028502  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:50.108319  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:50.108362  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:48.901902  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:50.902702  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:50.919079  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:52.919271  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:52.647146  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:52.660468  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:52.660558  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:52.697665  438716 cri.go:89] found id: ""
	I0819 19:16:52.697703  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.697719  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:52.697727  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:52.697786  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:52.739169  438716 cri.go:89] found id: ""
	I0819 19:16:52.739203  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.739214  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:52.739222  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:52.739289  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:52.776580  438716 cri.go:89] found id: ""
	I0819 19:16:52.776610  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.776619  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:52.776630  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:52.776683  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:52.813443  438716 cri.go:89] found id: ""
	I0819 19:16:52.813475  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.813488  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:52.813497  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:52.813557  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:52.848035  438716 cri.go:89] found id: ""
	I0819 19:16:52.848064  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.848075  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:52.848082  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:52.848150  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:52.881814  438716 cri.go:89] found id: ""
	I0819 19:16:52.881841  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.881858  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:52.881867  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:52.881930  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:52.922179  438716 cri.go:89] found id: ""
	I0819 19:16:52.922202  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.922210  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:52.922216  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:52.922277  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:52.958110  438716 cri.go:89] found id: ""
	I0819 19:16:52.958136  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.958144  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:52.958153  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:52.958167  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:53.008553  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:53.008592  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:53.022826  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:53.022860  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:53.094940  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:53.094967  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:53.094982  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:53.173877  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:53.173920  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:53.403382  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:55.905504  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:55.419297  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:55.419331  438295 pod_ready.go:82] duration metric: took 4m0.007107243s for pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace to be "Ready" ...
	E0819 19:16:55.419345  438295 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0819 19:16:55.419355  438295 pod_ready.go:39] duration metric: took 4m4.316528467s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:16:55.419408  438295 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:16:55.419449  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:55.419499  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:55.466648  438295 cri.go:89] found id: "d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a"
	I0819 19:16:55.466679  438295 cri.go:89] found id: ""
	I0819 19:16:55.466690  438295 logs.go:276] 1 containers: [d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a]
	I0819 19:16:55.466758  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:16:55.471085  438295 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:55.471164  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:55.509883  438295 cri.go:89] found id: "a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672"
	I0819 19:16:55.509910  438295 cri.go:89] found id: ""
	I0819 19:16:55.509921  438295 logs.go:276] 1 containers: [a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672]
	I0819 19:16:55.509984  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:16:55.516866  438295 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:55.516954  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:55.560957  438295 cri.go:89] found id: "a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f"
	I0819 19:16:55.560988  438295 cri.go:89] found id: ""
	I0819 19:16:55.560999  438295 logs.go:276] 1 containers: [a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f]
	I0819 19:16:55.561065  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:16:55.565592  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:55.565662  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:55.610872  438295 cri.go:89] found id: "c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22"
	I0819 19:16:55.610905  438295 cri.go:89] found id: ""
	I0819 19:16:55.610914  438295 logs.go:276] 1 containers: [c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22]
	I0819 19:16:55.610976  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:16:55.615411  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:55.615486  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:55.652759  438295 cri.go:89] found id: "3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338"
	I0819 19:16:55.652792  438295 cri.go:89] found id: ""
	I0819 19:16:55.652807  438295 logs.go:276] 1 containers: [3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338]
	I0819 19:16:55.652873  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:16:55.657124  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:55.657190  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:55.699063  438295 cri.go:89] found id: "6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071"
	I0819 19:16:55.699085  438295 cri.go:89] found id: ""
	I0819 19:16:55.699093  438295 logs.go:276] 1 containers: [6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071]
	I0819 19:16:55.699145  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:16:55.703224  438295 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:55.703292  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:55.753166  438295 cri.go:89] found id: ""
	I0819 19:16:55.753198  438295 logs.go:276] 0 containers: []
	W0819 19:16:55.753210  438295 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:55.753218  438295 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 19:16:55.753286  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 19:16:55.803518  438295 cri.go:89] found id: "902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8"
	I0819 19:16:55.803551  438295 cri.go:89] found id: "44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6"
	I0819 19:16:55.803558  438295 cri.go:89] found id: ""
	I0819 19:16:55.803568  438295 logs.go:276] 2 containers: [902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8 44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6]
	I0819 19:16:55.803637  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:16:55.808063  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:16:55.812708  438295 logs.go:123] Gathering logs for container status ...
	I0819 19:16:55.812737  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:55.861697  438295 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:55.861736  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 19:16:55.911203  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.671901     936 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:16:55.911420  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672098     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	W0819 19:16:55.911603  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.672624     936 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:16:55.911834  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672667     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	I0819 19:16:55.949585  438295 logs.go:123] Gathering logs for kube-scheduler [c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22] ...
	I0819 19:16:55.949663  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22"
	I0819 19:16:55.995063  438295 logs.go:123] Gathering logs for kube-controller-manager [6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071] ...
	I0819 19:16:55.995100  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071"
	I0819 19:16:56.062320  438295 logs.go:123] Gathering logs for storage-provisioner [902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8] ...
	I0819 19:16:56.062376  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8"
	I0819 19:16:56.100112  438295 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:56.100152  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:56.589439  438295 logs.go:123] Gathering logs for kube-proxy [3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338] ...
	I0819 19:16:56.589486  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338"
	I0819 19:16:56.632096  438295 logs.go:123] Gathering logs for storage-provisioner [44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6] ...
	I0819 19:16:56.632132  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6"
	I0819 19:16:56.670952  438295 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:56.670984  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:56.685246  438295 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:56.685279  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 19:16:56.826418  438295 logs.go:123] Gathering logs for kube-apiserver [d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a] ...
	I0819 19:16:56.826456  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a"
	I0819 19:16:56.876901  438295 logs.go:123] Gathering logs for etcd [a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672] ...
	I0819 19:16:56.876944  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672"
	I0819 19:16:56.920390  438295 logs.go:123] Gathering logs for coredns [a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f] ...
	I0819 19:16:56.920423  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f"
	I0819 19:16:56.961691  438295 out.go:358] Setting ErrFile to fd 2...
	I0819 19:16:56.961718  438295 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 19:16:56.961793  438295 out.go:270] X Problems detected in kubelet:
	W0819 19:16:56.961805  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.671901     936 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:16:56.961824  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672098     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	W0819 19:16:56.961839  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.672624     936 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:16:56.961853  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672667     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	I0819 19:16:56.961884  438295 out.go:358] Setting ErrFile to fd 2...
	I0819 19:16:56.961893  438295 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:16:55.716096  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:55.734732  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:55.734817  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:55.780484  438716 cri.go:89] found id: ""
	I0819 19:16:55.780514  438716 logs.go:276] 0 containers: []
	W0819 19:16:55.780525  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:55.780534  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:55.780607  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:55.821755  438716 cri.go:89] found id: ""
	I0819 19:16:55.821778  438716 logs.go:276] 0 containers: []
	W0819 19:16:55.821786  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:55.821792  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:55.821855  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:55.861032  438716 cri.go:89] found id: ""
	I0819 19:16:55.861066  438716 logs.go:276] 0 containers: []
	W0819 19:16:55.861077  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:55.861086  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:55.861159  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:55.909978  438716 cri.go:89] found id: ""
	I0819 19:16:55.910004  438716 logs.go:276] 0 containers: []
	W0819 19:16:55.910015  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:55.910024  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:55.910087  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:55.956603  438716 cri.go:89] found id: ""
	I0819 19:16:55.956634  438716 logs.go:276] 0 containers: []
	W0819 19:16:55.956645  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:55.956653  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:55.956722  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:55.999176  438716 cri.go:89] found id: ""
	I0819 19:16:55.999203  438716 logs.go:276] 0 containers: []
	W0819 19:16:55.999216  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:55.999225  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:55.999286  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:56.035141  438716 cri.go:89] found id: ""
	I0819 19:16:56.035172  438716 logs.go:276] 0 containers: []
	W0819 19:16:56.035183  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:56.035192  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:56.035255  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:56.076152  438716 cri.go:89] found id: ""
	I0819 19:16:56.076185  438716 logs.go:276] 0 containers: []
	W0819 19:16:56.076197  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:56.076209  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:56.076226  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:56.136624  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:56.136671  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:56.151867  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:56.151902  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:56.231650  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:56.231696  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:56.231713  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:56.307203  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:56.307247  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
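The "sudo crictl ps -a --quiet --name=<component>" probes repeated above print container IDs only, so an empty result is exactly what produces the "No container was found matching ..." warnings. A minimal standalone sketch of the same probe loop (assuming crictl and sudo access on the node; this is an illustration, not minikube's actual cri.go implementation):

    // cri_probe.go - approximate the component probes seen in the log above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager"}
        for _, name := range components {
            // --quiet prints container IDs only; an empty result means no container
            // (running or exited) matches that name.
            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
            if err != nil {
                fmt.Printf("%s: crictl failed: %v\n", name, err)
                continue
            }
            ids := strings.Fields(string(out))
            fmt.Printf("%s: %d container(s) %v\n", name, len(ids), ids)
        }
    }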
	I0819 19:16:58.848295  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:58.861984  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:58.862172  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:58.900089  438716 cri.go:89] found id: ""
	I0819 19:16:58.900114  438716 logs.go:276] 0 containers: []
	W0819 19:16:58.900124  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:58.900132  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:58.900203  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:58.932528  438716 cri.go:89] found id: ""
	I0819 19:16:58.932551  438716 logs.go:276] 0 containers: []
	W0819 19:16:58.932559  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:58.932565  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:58.932618  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:58.967255  438716 cri.go:89] found id: ""
	I0819 19:16:58.967283  438716 logs.go:276] 0 containers: []
	W0819 19:16:58.967291  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:58.967298  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:58.967349  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:59.000887  438716 cri.go:89] found id: ""
	I0819 19:16:59.000923  438716 logs.go:276] 0 containers: []
	W0819 19:16:59.000934  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:59.000942  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:59.001009  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:59.041386  438716 cri.go:89] found id: ""
	I0819 19:16:59.041417  438716 logs.go:276] 0 containers: []
	W0819 19:16:59.041428  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:59.041436  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:59.041499  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:59.080036  438716 cri.go:89] found id: ""
	I0819 19:16:59.080078  438716 logs.go:276] 0 containers: []
	W0819 19:16:59.080090  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:59.080099  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:59.080168  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:59.113946  438716 cri.go:89] found id: ""
	I0819 19:16:59.113982  438716 logs.go:276] 0 containers: []
	W0819 19:16:59.113995  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:59.114004  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:59.114066  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:59.155413  438716 cri.go:89] found id: ""
	I0819 19:16:59.155437  438716 logs.go:276] 0 containers: []
	W0819 19:16:59.155446  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:59.155456  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:59.155477  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:59.223795  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:59.223815  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:59.223828  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:59.304516  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:59.304554  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:59.344975  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:59.345005  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:59.397751  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:59.397789  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:58.402453  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:00.901494  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:02.043611  438245 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.355651212s)
	I0819 19:17:02.043735  438245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:17:02.066981  438245 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 19:17:02.083179  438245 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:17:02.100807  438245 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:17:02.100829  438245 kubeadm.go:157] found existing configuration files:
	
	I0819 19:17:02.100877  438245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0819 19:17:02.116462  438245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:17:02.116534  438245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:17:02.127313  438245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0819 19:17:02.147096  438245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:17:02.147170  438245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:17:02.159262  438245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0819 19:17:02.168825  438245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:17:02.168918  438245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:17:02.179354  438245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0819 19:17:02.188982  438245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:17:02.189051  438245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
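The grep/rm sequence above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint (here https://control-plane.minikube.internal:8444) and removes any file that does not reference it, so that the following kubeadm init regenerates them. A rough sketch of that logic (assumption: run as root on the node; minikube's real version lives in kubeadm.go):

    // stale_kubeconfig_cleanup.go - sketch of the endpoint check and removal above.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:8444" // port taken from this profile's log
        files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
        for _, f := range files {
            path := "/etc/kubernetes/" + f
            data, err := os.ReadFile(path)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // Missing file or wrong endpoint: drop it so kubeadm init rewrites it.
                _ = os.Remove(path)
                fmt.Printf("removed (or absent): %s\n", path)
                continue
            }
            fmt.Printf("kept: %s\n", path)
        }
    }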
	I0819 19:17:02.199291  438245 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 19:17:01.914433  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:17:01.927468  438716 kubeadm.go:597] duration metric: took 4m3.453401239s to restartPrimaryControlPlane
	W0819 19:17:01.927564  438716 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 19:17:01.927600  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 19:17:02.647971  438716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:17:02.665946  438716 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 19:17:02.676665  438716 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:17:02.686818  438716 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:17:02.686840  438716 kubeadm.go:157] found existing configuration files:
	
	I0819 19:17:02.686885  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 19:17:02.697160  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:17:02.697228  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:17:02.707774  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 19:17:02.717251  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:17:02.717310  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:17:02.727481  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 19:17:02.738085  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:17:02.738141  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:17:02.749286  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 19:17:02.759965  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:17:02.760025  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 19:17:02.770753  438716 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 19:17:02.835857  438716 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 19:17:02.835940  438716 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 19:17:02.983775  438716 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 19:17:02.983974  438716 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 19:17:02.984149  438716 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 19:17:03.173404  438716 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 19:17:03.175412  438716 out.go:235]   - Generating certificates and keys ...
	I0819 19:17:03.175520  438716 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 19:17:03.175659  438716 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 19:17:03.175805  438716 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 19:17:03.175913  438716 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 19:17:03.176021  438716 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 19:17:03.176125  438716 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 19:17:03.176626  438716 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 19:17:03.177624  438716 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 19:17:03.178399  438716 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 19:17:03.179325  438716 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 19:17:03.179599  438716 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 19:17:03.179702  438716 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 19:17:03.416467  438716 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 19:17:03.505378  438716 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 19:17:03.588959  438716 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 19:17:03.680602  438716 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 19:17:03.697717  438716 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 19:17:03.700436  438716 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 19:17:03.700579  438716 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 19:17:03.858804  438716 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 19:17:03.861395  438716 out.go:235]   - Booting up control plane ...
	I0819 19:17:03.861520  438716 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 19:17:03.877387  438716 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 19:17:03.878611  438716 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 19:17:03.882842  438716 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 19:17:03.887436  438716 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 19:17:02.902839  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:05.402376  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:02.248409  438245 kubeadm.go:310] W0819 19:17:02.217617    2563 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 19:17:02.250447  438245 kubeadm.go:310] W0819 19:17:02.219827    2563 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 19:17:02.377127  438245 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 19:17:06.962848  438295 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:17:06.984774  438295 api_server.go:72] duration metric: took 4m23.117653428s to wait for apiserver process to appear ...
	I0819 19:17:06.984811  438295 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:17:06.984865  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:17:06.984939  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:17:07.025158  438295 cri.go:89] found id: "d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a"
	I0819 19:17:07.025201  438295 cri.go:89] found id: ""
	I0819 19:17:07.025213  438295 logs.go:276] 1 containers: [d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a]
	I0819 19:17:07.025287  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:07.032365  438295 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:17:07.032446  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:17:07.073368  438295 cri.go:89] found id: "a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672"
	I0819 19:17:07.073394  438295 cri.go:89] found id: ""
	I0819 19:17:07.073403  438295 logs.go:276] 1 containers: [a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672]
	I0819 19:17:07.073463  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:07.078781  438295 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:17:07.078891  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:17:07.123263  438295 cri.go:89] found id: "a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f"
	I0819 19:17:07.123293  438295 cri.go:89] found id: ""
	I0819 19:17:07.123303  438295 logs.go:276] 1 containers: [a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f]
	I0819 19:17:07.123365  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:07.128485  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:17:07.128579  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:17:07.167105  438295 cri.go:89] found id: "c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22"
	I0819 19:17:07.167137  438295 cri.go:89] found id: ""
	I0819 19:17:07.167148  438295 logs.go:276] 1 containers: [c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22]
	I0819 19:17:07.167215  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:07.171571  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:17:07.171641  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:17:07.215524  438295 cri.go:89] found id: "3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338"
	I0819 19:17:07.215547  438295 cri.go:89] found id: ""
	I0819 19:17:07.215555  438295 logs.go:276] 1 containers: [3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338]
	I0819 19:17:07.215621  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:07.221604  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:17:07.221676  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:17:07.263106  438295 cri.go:89] found id: "6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071"
	I0819 19:17:07.263140  438295 cri.go:89] found id: ""
	I0819 19:17:07.263149  438295 logs.go:276] 1 containers: [6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071]
	I0819 19:17:07.263209  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:07.267703  438295 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:17:07.267770  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:17:07.316006  438295 cri.go:89] found id: ""
	I0819 19:17:07.316042  438295 logs.go:276] 0 containers: []
	W0819 19:17:07.316054  438295 logs.go:278] No container was found matching "kindnet"
	I0819 19:17:07.316062  438295 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 19:17:07.316132  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 19:17:07.361100  438295 cri.go:89] found id: "902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8"
	I0819 19:17:07.361123  438295 cri.go:89] found id: "44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6"
	I0819 19:17:07.361126  438295 cri.go:89] found id: ""
	I0819 19:17:07.361133  438295 logs.go:276] 2 containers: [902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8 44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6]
	I0819 19:17:07.361190  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:07.366949  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:07.372724  438295 logs.go:123] Gathering logs for kubelet ...
	I0819 19:17:07.372748  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 19:17:07.413540  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.671901     936 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:17:07.413722  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672098     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	W0819 19:17:07.413858  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.672624     936 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:17:07.414017  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672667     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	I0819 19:17:07.452061  438295 logs.go:123] Gathering logs for coredns [a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f] ...
	I0819 19:17:07.452104  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f"
	I0819 19:17:07.490598  438295 logs.go:123] Gathering logs for kube-scheduler [c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22] ...
	I0819 19:17:07.490636  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22"
	I0819 19:17:07.530454  438295 logs.go:123] Gathering logs for kube-proxy [3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338] ...
	I0819 19:17:07.530486  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338"
	I0819 19:17:07.581488  438295 logs.go:123] Gathering logs for storage-provisioner [902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8] ...
	I0819 19:17:07.581528  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8"
	I0819 19:17:07.621752  438295 logs.go:123] Gathering logs for storage-provisioner [44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6] ...
	I0819 19:17:07.621787  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6"
	I0819 19:17:07.661330  438295 logs.go:123] Gathering logs for container status ...
	I0819 19:17:07.661365  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:17:07.709227  438295 logs.go:123] Gathering logs for dmesg ...
	I0819 19:17:07.709261  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:17:07.724634  438295 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:17:07.724670  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 19:17:07.850212  438295 logs.go:123] Gathering logs for kube-apiserver [d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a] ...
	I0819 19:17:07.850247  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a"
	I0819 19:17:07.894464  438295 logs.go:123] Gathering logs for etcd [a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672] ...
	I0819 19:17:07.894507  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672"
	I0819 19:17:07.943807  438295 logs.go:123] Gathering logs for kube-controller-manager [6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071] ...
	I0819 19:17:07.943841  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071"
	I0819 19:17:08.007428  438295 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:17:08.007463  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:17:08.487397  438295 out.go:358] Setting ErrFile to fd 2...
	I0819 19:17:08.487435  438295 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 19:17:08.487518  438295 out.go:270] X Problems detected in kubelet:
	W0819 19:17:08.487534  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.671901     936 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:17:08.487546  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672098     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	W0819 19:17:08.487560  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.672624     936 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:17:08.487574  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672667     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	I0819 19:17:08.487584  438295 out.go:358] Setting ErrFile to fd 2...
	I0819 19:17:08.487598  438295 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:17:10.237580  438245 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 19:17:10.237675  438245 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 19:17:10.237792  438245 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 19:17:10.237934  438245 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 19:17:10.238088  438245 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 19:17:10.238194  438245 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 19:17:10.239873  438245 out.go:235]   - Generating certificates and keys ...
	I0819 19:17:10.239957  438245 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 19:17:10.240051  438245 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 19:17:10.240187  438245 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 19:17:10.240294  438245 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 19:17:10.240410  438245 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 19:17:10.240495  438245 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 19:17:10.240598  438245 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 19:17:10.240680  438245 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 19:17:10.240747  438245 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 19:17:10.240843  438245 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 19:17:10.240886  438245 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 19:17:10.240958  438245 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 19:17:10.241024  438245 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 19:17:10.241094  438245 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 19:17:10.241159  438245 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 19:17:10.241248  438245 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 19:17:10.241328  438245 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 19:17:10.241431  438245 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 19:17:10.241535  438245 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 19:17:10.243764  438245 out.go:235]   - Booting up control plane ...
	I0819 19:17:10.243859  438245 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 19:17:10.243934  438245 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 19:17:10.243994  438245 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 19:17:10.244131  438245 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 19:17:10.244263  438245 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 19:17:10.244301  438245 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 19:17:10.244458  438245 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 19:17:10.244611  438245 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 19:17:10.244685  438245 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.412341ms
	I0819 19:17:10.244770  438245 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 19:17:10.244850  438245 kubeadm.go:310] [api-check] The API server is healthy after 5.002047877s
	I0819 19:17:10.244953  438245 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 19:17:10.245093  438245 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 19:17:10.245199  438245 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 19:17:10.245400  438245 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-982795 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 19:17:10.245465  438245 kubeadm.go:310] [bootstrap-token] Using token: trsfx5.kx2phd1605yhia2w
	I0819 19:17:10.247722  438245 out.go:235]   - Configuring RBAC rules ...
	I0819 19:17:10.247861  438245 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 19:17:10.247955  438245 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 19:17:10.248144  438245 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 19:17:10.248264  438245 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 19:17:10.248379  438245 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 19:17:10.248468  438245 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 19:17:10.248567  438245 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 19:17:10.248612  438245 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 19:17:10.248654  438245 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 19:17:10.248660  438245 kubeadm.go:310] 
	I0819 19:17:10.248708  438245 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 19:17:10.248713  438245 kubeadm.go:310] 
	I0819 19:17:10.248779  438245 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 19:17:10.248786  438245 kubeadm.go:310] 
	I0819 19:17:10.248806  438245 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 19:17:10.248866  438245 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 19:17:10.248910  438245 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 19:17:10.248916  438245 kubeadm.go:310] 
	I0819 19:17:10.248966  438245 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 19:17:10.248972  438245 kubeadm.go:310] 
	I0819 19:17:10.249014  438245 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 19:17:10.249024  438245 kubeadm.go:310] 
	I0819 19:17:10.249069  438245 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 19:17:10.249136  438245 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 19:17:10.249209  438245 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 19:17:10.249221  438245 kubeadm.go:310] 
	I0819 19:17:10.249319  438245 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 19:17:10.249386  438245 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 19:17:10.249392  438245 kubeadm.go:310] 
	I0819 19:17:10.249464  438245 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token trsfx5.kx2phd1605yhia2w \
	I0819 19:17:10.249553  438245 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3fcbd90565c5acbc36a47b2db682cb22dce9b172c9bf3af21e506ebb67608039 \
	I0819 19:17:10.249575  438245 kubeadm.go:310] 	--control-plane 
	I0819 19:17:10.249581  438245 kubeadm.go:310] 
	I0819 19:17:10.249658  438245 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 19:17:10.249664  438245 kubeadm.go:310] 
	I0819 19:17:10.249734  438245 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token trsfx5.kx2phd1605yhia2w \
	I0819 19:17:10.249833  438245 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3fcbd90565c5acbc36a47b2db682cb22dce9b172c9bf3af21e506ebb67608039 
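The join commands printed above embed a discovery-token-ca-cert-hash, which is the SHA-256 digest of the cluster CA certificate's DER-encoded SubjectPublicKeyInfo. It can be recomputed to verify the value; a small sketch, with the path assumed from the "/var/lib/minikube/certs" certificateDir named earlier in the log:

    // ca_cert_hash.go - recompute the discovery-token-ca-cert-hash shown above.
    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // certificateDir from the log; adjust the path if your layout differs.
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm's discovery hash is SHA-256 over the DER-encoded SubjectPublicKeyInfo.
        fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
    }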
	I0819 19:17:10.249849  438245 cni.go:84] Creating CNI manager for ""
	I0819 19:17:10.249857  438245 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:17:10.252133  438245 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 19:17:07.403590  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:09.901861  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:10.253419  438245 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 19:17:10.264266  438245 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
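The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration mentioned at "Configuring bridge CNI". The exact bytes are not shown in the log; a conventional bridge + host-local conflist of roughly that size, assuming the default 10.244.0.0/16 pod CIDR, is sketched below:

    // write_bridge_conflist.go - illustrative only; the bytes minikube writes may differ.
    package main

    import "os"

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
        },
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }`

    func main() {
        // Writing requires root; shown here only to make the config shape concrete.
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
            panic(err)
        }
    }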
	I0819 19:17:10.289509  438245 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 19:17:10.289661  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-982795 minikube.k8s.io/updated_at=2024_08_19T19_17_10_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411 minikube.k8s.io/name=default-k8s-diff-port-982795 minikube.k8s.io/primary=true
	I0819 19:17:10.289663  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:10.322738  438245 ops.go:34] apiserver oom_adj: -16
	I0819 19:17:10.519946  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:11.020736  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:11.520925  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:12.020276  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:12.520277  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:13.020787  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:13.520048  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:14.020893  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:14.520869  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:14.642214  438245 kubeadm.go:1113] duration metric: took 4.352638211s to wait for elevateKubeSystemPrivileges
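The repeated "kubectl get sa default" runs above poll until the "default" ServiceAccount exists, a common readiness signal that the service-account controller in kube-controller-manager has come up. A rough standalone version of that wait (assumes kubectl is configured for the cluster; the 2-minute bound is an assumption, the log shows completion in about 4.4s):

    // wait_default_sa.go - poll for the "default" ServiceAccount, as the log does.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            if err := exec.Command("kubectl", "-n", "default", "get", "serviceaccount", "default").Run(); err == nil {
                fmt.Println("default ServiceAccount exists")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for default ServiceAccount")
    }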
	I0819 19:17:14.642251  438245 kubeadm.go:394] duration metric: took 4m59.943476935s to StartCluster
	I0819 19:17:14.642295  438245 settings.go:142] acquiring lock: {Name:mk396fcf49a1d0e69583cf37ff3c819e37118163 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:17:14.642382  438245 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 19:17:14.644103  438245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/kubeconfig: {Name:mk8e7b4e1bb7da665111d2acd83eb48882c66853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:17:14.644408  438245 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.48 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 19:17:14.644550  438245 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 19:17:14.644641  438245 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-982795"
	I0819 19:17:14.644665  438245 config.go:182] Loaded profile config "default-k8s-diff-port-982795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:17:14.644687  438245 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-982795"
	W0819 19:17:14.644701  438245 addons.go:243] addon storage-provisioner should already be in state true
	I0819 19:17:14.644712  438245 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-982795"
	I0819 19:17:14.644735  438245 host.go:66] Checking if "default-k8s-diff-port-982795" exists ...
	I0819 19:17:14.644757  438245 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-982795"
	W0819 19:17:14.644770  438245 addons.go:243] addon metrics-server should already be in state true
	I0819 19:17:14.644678  438245 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-982795"
	I0819 19:17:14.644852  438245 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-982795"
	I0819 19:17:14.644797  438245 host.go:66] Checking if "default-k8s-diff-port-982795" exists ...
	I0819 19:17:14.645125  438245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:17:14.645176  438245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:17:14.645272  438245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:17:14.645291  438245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:17:14.645355  438245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:17:14.645401  438245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:17:14.646083  438245 out.go:177] * Verifying Kubernetes components...
	I0819 19:17:14.647579  438245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:17:14.662756  438245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42581
	I0819 19:17:14.663407  438245 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:17:14.664088  438245 main.go:141] libmachine: Using API Version  1
	I0819 19:17:14.664117  438245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:17:14.664528  438245 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:17:14.665189  438245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:17:14.665222  438245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:17:14.665665  438245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43637
	I0819 19:17:14.665842  438245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44021
	I0819 19:17:14.666204  438245 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:17:14.666321  438245 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:17:14.666761  438245 main.go:141] libmachine: Using API Version  1
	I0819 19:17:14.666783  438245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:17:14.666955  438245 main.go:141] libmachine: Using API Version  1
	I0819 19:17:14.666979  438245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:17:14.667173  438245 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:17:14.667363  438245 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:17:14.667592  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetState
	I0819 19:17:14.667786  438245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:17:14.667818  438245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:17:14.671231  438245 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-982795"
	W0819 19:17:14.671249  438245 addons.go:243] addon default-storageclass should already be in state true
	I0819 19:17:14.671273  438245 host.go:66] Checking if "default-k8s-diff-port-982795" exists ...
	I0819 19:17:14.671507  438245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:17:14.671533  438245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:17:14.682996  438245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36593
	I0819 19:17:14.683560  438245 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:17:14.684268  438245 main.go:141] libmachine: Using API Version  1
	I0819 19:17:14.684292  438245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:17:14.684686  438245 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:17:14.684899  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetState
	I0819 19:17:14.686943  438245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44459
	I0819 19:17:14.687384  438245 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:17:14.687309  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:17:14.687874  438245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46587
	I0819 19:17:14.687965  438245 main.go:141] libmachine: Using API Version  1
	I0819 19:17:14.687980  438245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:17:14.688367  438245 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:17:14.688420  438245 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:17:14.688623  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetState
	I0819 19:17:14.689039  438245 main.go:141] libmachine: Using API Version  1
	I0819 19:17:14.689362  438245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:17:14.689690  438245 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:17:14.690179  438245 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:17:14.690626  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:17:14.690789  438245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:17:14.690823  438245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:17:14.690938  438245 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:17:14.690958  438245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 19:17:14.690979  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:17:14.692114  438245 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 19:17:11.902284  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:13.903205  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:16.402298  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:14.693147  438245 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 19:17:14.693163  438245 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 19:17:14.693182  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:17:14.694601  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:17:14.695302  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:17:14.695333  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:17:14.695541  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:17:14.695760  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:17:14.696133  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:17:14.696303  438245 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa Username:docker}
	I0819 19:17:14.696554  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:17:14.696979  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:17:14.697003  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:17:14.697110  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:17:14.697274  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:17:14.697445  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:17:14.697578  438245 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa Username:docker}
	I0819 19:17:14.708592  438245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38807
	I0819 19:17:14.709140  438245 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:17:14.709716  438245 main.go:141] libmachine: Using API Version  1
	I0819 19:17:14.709737  438245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:17:14.710049  438245 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:17:14.710269  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetState
	I0819 19:17:14.711887  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:17:14.712147  438245 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 19:17:14.712162  438245 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 19:17:14.712179  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:17:14.715593  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:17:14.716040  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:17:14.716062  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:17:14.716384  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:17:14.716561  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:17:14.716710  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:17:14.716938  438245 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa Username:docker}
	I0819 19:17:14.874857  438245 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:17:14.903798  438245 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-982795" to be "Ready" ...
	I0819 19:17:14.919842  438245 node_ready.go:49] node "default-k8s-diff-port-982795" has status "Ready":"True"
	I0819 19:17:14.919866  438245 node_ready.go:38] duration metric: took 16.039402ms for node "default-k8s-diff-port-982795" to be "Ready" ...
	I0819 19:17:14.919877  438245 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:17:14.932785  438245 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-845gx" in "kube-system" namespace to be "Ready" ...
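	The readiness polling above can be reproduced by hand with kubectl; a minimal sketch, assuming the kubeconfig context name matches the profile name reported later in this log ("default-k8s-diff-port-982795"):

	    kubectl --context default-k8s-diff-port-982795 -n kube-system \
	      wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m0s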
	I0819 19:17:15.019664  438245 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 19:17:15.019718  438245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 19:17:15.030317  438245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 19:17:15.056177  438245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:17:15.074202  438245 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 19:17:15.074235  438245 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 19:17:15.127037  438245 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 19:17:15.127071  438245 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 19:17:15.217951  438245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 19:17:15.351034  438245 main.go:141] libmachine: Making call to close driver server
	I0819 19:17:15.351067  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .Close
	I0819 19:17:15.351398  438245 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:17:15.351417  438245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:17:15.351429  438245 main.go:141] libmachine: Making call to close driver server
	I0819 19:17:15.351441  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .Close
	I0819 19:17:15.351678  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | Closing plugin on server side
	I0819 19:17:15.351728  438245 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:17:15.351750  438245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:17:15.357999  438245 main.go:141] libmachine: Making call to close driver server
	I0819 19:17:15.358023  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .Close
	I0819 19:17:15.358291  438245 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:17:15.358316  438245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:17:16.196638  438245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.140417152s)
	I0819 19:17:16.196694  438245 main.go:141] libmachine: Making call to close driver server
	I0819 19:17:16.196707  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .Close
	I0819 19:17:16.197022  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | Closing plugin on server side
	I0819 19:17:16.197112  438245 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:17:16.197137  438245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:17:16.197157  438245 main.go:141] libmachine: Making call to close driver server
	I0819 19:17:16.197167  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .Close
	I0819 19:17:16.197449  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | Closing plugin on server side
	I0819 19:17:16.197493  438245 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:17:16.197505  438245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:17:16.638069  438245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.42006496s)
	I0819 19:17:16.638141  438245 main.go:141] libmachine: Making call to close driver server
	I0819 19:17:16.638159  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .Close
	I0819 19:17:16.638488  438245 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:17:16.638518  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | Closing plugin on server side
	I0819 19:17:16.638529  438245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:17:16.638564  438245 main.go:141] libmachine: Making call to close driver server
	I0819 19:17:16.638574  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .Close
	I0819 19:17:16.638861  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | Closing plugin on server side
	I0819 19:17:16.638896  438245 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:17:16.638904  438245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:17:16.638915  438245 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-982795"
	I0819 19:17:16.641476  438245 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0819 19:17:16.642733  438245 addons.go:510] duration metric: took 1.998196502s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
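	The addon set reported above can also be enabled explicitly through the minikube CLI; a minimal sketch, assuming the profile name taken from this log:

	    minikube -p default-k8s-diff-port-982795 addons enable default-storageclass
	    minikube -p default-k8s-diff-port-982795 addons enable storage-provisioner
	    minikube -p default-k8s-diff-port-982795 addons enable metrics-server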
	I0819 19:17:16.954631  438245 pod_ready.go:103] pod "coredns-6f6b679f8f-845gx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:18.489333  438295 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0819 19:17:18.494609  438295 api_server.go:279] https://192.168.72.96:8443/healthz returned 200:
	ok
	I0819 19:17:18.495587  438295 api_server.go:141] control plane version: v1.31.0
	I0819 19:17:18.495613  438295 api_server.go:131] duration metric: took 11.510793296s to wait for apiserver health ...
	I0819 19:17:18.495624  438295 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:17:18.495656  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:17:18.495735  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:17:18.540446  438295 cri.go:89] found id: "d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a"
	I0819 19:17:18.540477  438295 cri.go:89] found id: ""
	I0819 19:17:18.540487  438295 logs.go:276] 1 containers: [d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a]
	I0819 19:17:18.540555  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:18.551443  438295 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:17:18.551527  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:17:18.592388  438295 cri.go:89] found id: "a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672"
	I0819 19:17:18.592416  438295 cri.go:89] found id: ""
	I0819 19:17:18.592427  438295 logs.go:276] 1 containers: [a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672]
	I0819 19:17:18.592495  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:18.597534  438295 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:17:18.597615  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:17:18.637782  438295 cri.go:89] found id: "a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f"
	I0819 19:17:18.637804  438295 cri.go:89] found id: ""
	I0819 19:17:18.637812  438295 logs.go:276] 1 containers: [a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f]
	I0819 19:17:18.637861  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:18.642557  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:17:18.642618  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:17:18.679573  438295 cri.go:89] found id: "c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22"
	I0819 19:17:18.679597  438295 cri.go:89] found id: ""
	I0819 19:17:18.679605  438295 logs.go:276] 1 containers: [c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22]
	I0819 19:17:18.679657  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:18.684160  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:17:18.684230  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:17:18.726848  438295 cri.go:89] found id: "3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338"
	I0819 19:17:18.726881  438295 cri.go:89] found id: ""
	I0819 19:17:18.726889  438295 logs.go:276] 1 containers: [3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338]
	I0819 19:17:18.726943  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:18.731422  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:17:18.731484  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:17:18.773623  438295 cri.go:89] found id: "6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071"
	I0819 19:17:18.773649  438295 cri.go:89] found id: ""
	I0819 19:17:18.773658  438295 logs.go:276] 1 containers: [6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071]
	I0819 19:17:18.773709  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:18.779609  438295 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:17:18.779687  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:17:18.822876  438295 cri.go:89] found id: ""
	I0819 19:17:18.822911  438295 logs.go:276] 0 containers: []
	W0819 19:17:18.822922  438295 logs.go:278] No container was found matching "kindnet"
	I0819 19:17:18.822931  438295 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 19:17:18.822998  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 19:17:18.868653  438295 cri.go:89] found id: "902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8"
	I0819 19:17:18.868685  438295 cri.go:89] found id: "44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6"
	I0819 19:17:18.868691  438295 cri.go:89] found id: ""
	I0819 19:17:18.868701  438295 logs.go:276] 2 containers: [902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8 44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6]
	I0819 19:17:18.868776  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:18.873136  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:18.877397  438295 logs.go:123] Gathering logs for kube-proxy [3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338] ...
	I0819 19:17:18.877425  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338"
	I0819 19:17:18.918085  438295 logs.go:123] Gathering logs for kube-controller-manager [6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071] ...
	I0819 19:17:18.918118  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071"
	I0819 19:17:18.973344  438295 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:17:18.973378  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:17:18.901539  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:20.902550  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:19.440295  438245 pod_ready.go:103] pod "coredns-6f6b679f8f-845gx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:21.939652  438245 pod_ready.go:103] pod "coredns-6f6b679f8f-845gx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:19.443625  438295 logs.go:123] Gathering logs for container status ...
	I0819 19:17:19.443689  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:17:19.492650  438295 logs.go:123] Gathering logs for dmesg ...
	I0819 19:17:19.492696  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:17:19.507957  438295 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:17:19.507996  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 19:17:19.617295  438295 logs.go:123] Gathering logs for coredns [a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f] ...
	I0819 19:17:19.617341  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f"
	I0819 19:17:19.669869  438295 logs.go:123] Gathering logs for kube-scheduler [c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22] ...
	I0819 19:17:19.669930  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22"
	I0819 19:17:19.706649  438295 logs.go:123] Gathering logs for storage-provisioner [44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6] ...
	I0819 19:17:19.706681  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6"
	I0819 19:17:19.746742  438295 logs.go:123] Gathering logs for kubelet ...
	I0819 19:17:19.746780  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 19:17:19.796224  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.671901     936 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:17:19.796442  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672098     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	W0819 19:17:19.796622  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.672624     936 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:17:19.796845  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672667     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	I0819 19:17:19.836283  438295 logs.go:123] Gathering logs for kube-apiserver [d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a] ...
	I0819 19:17:19.836328  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a"
	I0819 19:17:19.889829  438295 logs.go:123] Gathering logs for etcd [a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672] ...
	I0819 19:17:19.889875  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672"
	I0819 19:17:19.938361  438295 logs.go:123] Gathering logs for storage-provisioner [902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8] ...
	I0819 19:17:19.938397  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8"
	I0819 19:17:19.978525  438295 out.go:358] Setting ErrFile to fd 2...
	I0819 19:17:19.978557  438295 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 19:17:19.978628  438295 out.go:270] X Problems detected in kubelet:
	W0819 19:17:19.978642  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.671901     936 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:17:19.978656  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672098     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	W0819 19:17:19.978669  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.672624     936 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:17:19.978680  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672667     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	I0819 19:17:19.978690  438295 out.go:358] Setting ErrFile to fd 2...
	I0819 19:17:19.978699  438295 out.go:392] TERM=,COLORTERM=, which probably does not support color
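	The diagnostic pass above gathers component logs with journalctl and crictl inside the guest; a minimal sketch of collecting the same data by hand over minikube ssh, assuming the "embed-certs-024748" profile this process is driving and a container id taken from the crictl listing:

	    minikube -p embed-certs-024748 ssh -- sudo journalctl -u kubelet -n 400
	    minikube -p embed-certs-024748 ssh -- sudo journalctl -u crio -n 400
	    minikube -p embed-certs-024748 ssh -- sudo crictl ps -a
	    minikube -p embed-certs-024748 ssh -- sudo crictl logs --tail 400 <container-id>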
	I0819 19:17:23.941399  438245 pod_ready.go:93] pod "coredns-6f6b679f8f-845gx" in "kube-system" namespace has status "Ready":"True"
	I0819 19:17:23.941426  438245 pod_ready.go:82] duration metric: took 9.00859927s for pod "coredns-6f6b679f8f-845gx" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.941438  438245 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-tlxtt" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.946827  438245 pod_ready.go:93] pod "coredns-6f6b679f8f-tlxtt" in "kube-system" namespace has status "Ready":"True"
	I0819 19:17:23.946848  438245 pod_ready.go:82] duration metric: took 5.40058ms for pod "coredns-6f6b679f8f-tlxtt" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.946859  438245 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.956158  438245 pod_ready.go:93] pod "etcd-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"True"
	I0819 19:17:23.956181  438245 pod_ready.go:82] duration metric: took 9.312871ms for pod "etcd-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.956193  438245 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.962573  438245 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"True"
	I0819 19:17:23.962595  438245 pod_ready.go:82] duration metric: took 6.3934ms for pod "kube-apiserver-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.962607  438245 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.968186  438245 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"True"
	I0819 19:17:23.968206  438245 pod_ready.go:82] duration metric: took 5.591464ms for pod "kube-controller-manager-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.968214  438245 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2v4hk" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:24.337409  438245 pod_ready.go:93] pod "kube-proxy-2v4hk" in "kube-system" namespace has status "Ready":"True"
	I0819 19:17:24.337443  438245 pod_ready.go:82] duration metric: took 369.220318ms for pod "kube-proxy-2v4hk" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:24.337460  438245 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:24.737326  438245 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"True"
	I0819 19:17:24.737362  438245 pod_ready.go:82] duration metric: took 399.891804ms for pod "kube-scheduler-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:24.737375  438245 pod_ready.go:39] duration metric: took 9.817484404s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:17:24.737396  438245 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:17:24.737467  438245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:17:24.753681  438245 api_server.go:72] duration metric: took 10.109231411s to wait for apiserver process to appear ...
	I0819 19:17:24.753711  438245 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:17:24.753734  438245 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8444/healthz ...
	I0819 19:17:24.757976  438245 api_server.go:279] https://192.168.61.48:8444/healthz returned 200:
	ok
	I0819 19:17:24.758875  438245 api_server.go:141] control plane version: v1.31.0
	I0819 19:17:24.758899  438245 api_server.go:131] duration metric: took 5.179486ms to wait for apiserver health ...
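	The healthz probe above is a plain HTTPS GET against the apiserver endpoint shown in the log; a minimal sketch (certificate verification skipped only for illustration):

	    curl -k https://192.168.61.48:8444/healthz
	    # expected body on success: ok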
	I0819 19:17:24.758908  438245 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:17:24.944008  438245 system_pods.go:59] 9 kube-system pods found
	I0819 19:17:24.944053  438245 system_pods.go:61] "coredns-6f6b679f8f-845gx" [95155dd2-d46c-4445-b735-26eae16aaff9] Running
	I0819 19:17:24.944058  438245 system_pods.go:61] "coredns-6f6b679f8f-tlxtt" [150ac4be-bef1-4f0a-ab16-f085284686cb] Running
	I0819 19:17:24.944062  438245 system_pods.go:61] "etcd-default-k8s-diff-port-982795" [eb29f445-6242-4b60-a8d5-7c684df17926] Running
	I0819 19:17:24.944066  438245 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-982795" [2add6270-bf14-43e7-834b-3e629f46efa3] Running
	I0819 19:17:24.944070  438245 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-982795" [6b636d4b-0efa-4cef-b0d4-d4539ddc5c90] Running
	I0819 19:17:24.944073  438245 system_pods.go:61] "kube-proxy-2v4hk" [042d5d54-6557-4d8e-8f4e-2d56e95882ce] Running
	I0819 19:17:24.944076  438245 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-982795" [6eff3815-26b3-4e95-a754-2dc65fd29126] Running
	I0819 19:17:24.944082  438245 system_pods.go:61] "metrics-server-6867b74b74-2dp5r" [04e0ce68-d9a2-426a-a0e9-47f6f7867efd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:17:24.944086  438245 system_pods.go:61] "storage-provisioner" [23fcea86-977e-4eb1-9e5a-23d6bdfb09c0] Running
	I0819 19:17:24.944094  438245 system_pods.go:74] duration metric: took 185.180015ms to wait for pod list to return data ...
	I0819 19:17:24.944104  438245 default_sa.go:34] waiting for default service account to be created ...
	I0819 19:17:25.137108  438245 default_sa.go:45] found service account: "default"
	I0819 19:17:25.137147  438245 default_sa.go:55] duration metric: took 193.033434ms for default service account to be created ...
	I0819 19:17:25.137160  438245 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 19:17:25.340115  438245 system_pods.go:86] 9 kube-system pods found
	I0819 19:17:25.340146  438245 system_pods.go:89] "coredns-6f6b679f8f-845gx" [95155dd2-d46c-4445-b735-26eae16aaff9] Running
	I0819 19:17:25.340155  438245 system_pods.go:89] "coredns-6f6b679f8f-tlxtt" [150ac4be-bef1-4f0a-ab16-f085284686cb] Running
	I0819 19:17:25.340161  438245 system_pods.go:89] "etcd-default-k8s-diff-port-982795" [eb29f445-6242-4b60-a8d5-7c684df17926] Running
	I0819 19:17:25.340167  438245 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-982795" [2add6270-bf14-43e7-834b-3e629f46efa3] Running
	I0819 19:17:25.340173  438245 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-982795" [6b636d4b-0efa-4cef-b0d4-d4539ddc5c90] Running
	I0819 19:17:25.340177  438245 system_pods.go:89] "kube-proxy-2v4hk" [042d5d54-6557-4d8e-8f4e-2d56e95882ce] Running
	I0819 19:17:25.340182  438245 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-982795" [6eff3815-26b3-4e95-a754-2dc65fd29126] Running
	I0819 19:17:25.340192  438245 system_pods.go:89] "metrics-server-6867b74b74-2dp5r" [04e0ce68-d9a2-426a-a0e9-47f6f7867efd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:17:25.340198  438245 system_pods.go:89] "storage-provisioner" [23fcea86-977e-4eb1-9e5a-23d6bdfb09c0] Running
	I0819 19:17:25.340211  438245 system_pods.go:126] duration metric: took 203.044324ms to wait for k8s-apps to be running ...
	I0819 19:17:25.340224  438245 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 19:17:25.340278  438245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:17:25.355190  438245 system_svc.go:56] duration metric: took 14.954269ms WaitForService to wait for kubelet
	I0819 19:17:25.355223  438245 kubeadm.go:582] duration metric: took 10.710777567s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 19:17:25.355252  438245 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:17:25.537425  438245 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:17:25.537459  438245 node_conditions.go:123] node cpu capacity is 2
	I0819 19:17:25.537472  438245 node_conditions.go:105] duration metric: took 182.213218ms to run NodePressure ...
	I0819 19:17:25.537491  438245 start.go:241] waiting for startup goroutines ...
	I0819 19:17:25.537501  438245 start.go:246] waiting for cluster config update ...
	I0819 19:17:25.537516  438245 start.go:255] writing updated cluster config ...
	I0819 19:17:25.537851  438245 ssh_runner.go:195] Run: rm -f paused
	I0819 19:17:25.589212  438245 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 19:17:25.591352  438245 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-982795" cluster and "default" namespace by default
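	The kubeconfig wiring reported by the final message can be checked directly; a minimal sketch, assuming the context name matches the cluster name in the message:

	    kubectl config current-context            # default-k8s-diff-port-982795
	    kubectl --context default-k8s-diff-port-982795 get nodes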
	I0819 19:17:22.902846  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:25.401911  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:29.988042  438295 system_pods.go:59] 8 kube-system pods found
	I0819 19:17:29.988074  438295 system_pods.go:61] "coredns-6f6b679f8f-7ww4z" [bbde00d4-6027-4d8d-b51e-bd68915da166] Running
	I0819 19:17:29.988080  438295 system_pods.go:61] "etcd-embed-certs-024748" [846ff0f0-5399-43fd-8e7b-1f64997cd291] Running
	I0819 19:17:29.988084  438295 system_pods.go:61] "kube-apiserver-embed-certs-024748" [3ff558d6-e82e-47a0-bb81-15244bee6470] Running
	I0819 19:17:29.988088  438295 system_pods.go:61] "kube-controller-manager-embed-certs-024748" [993b82ba-e8e7-4896-a06b-87c4f08d5985] Running
	I0819 19:17:29.988092  438295 system_pods.go:61] "kube-proxy-bmmbh" [1f77f152-f5f4-40f6-9632-1eaa36b9ea31] Running
	I0819 19:17:29.988095  438295 system_pods.go:61] "kube-scheduler-embed-certs-024748" [34684d4c-2479-45c5-883b-158cf9f974f5] Running
	I0819 19:17:29.988100  438295 system_pods.go:61] "metrics-server-6867b74b74-kxcwh" [15f86629-d916-4fdc-9ecf-9cb1b6c83f85] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:17:29.988104  438295 system_pods.go:61] "storage-provisioner" [7acb6ce1-21b6-4cdd-a5cb-76d694fc0a38] Running
	I0819 19:17:29.988113  438295 system_pods.go:74] duration metric: took 11.492481541s to wait for pod list to return data ...
	I0819 19:17:29.988120  438295 default_sa.go:34] waiting for default service account to be created ...
	I0819 19:17:29.991728  438295 default_sa.go:45] found service account: "default"
	I0819 19:17:29.991755  438295 default_sa.go:55] duration metric: took 3.62838ms for default service account to be created ...
	I0819 19:17:29.991764  438295 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 19:17:29.997212  438295 system_pods.go:86] 8 kube-system pods found
	I0819 19:17:29.997237  438295 system_pods.go:89] "coredns-6f6b679f8f-7ww4z" [bbde00d4-6027-4d8d-b51e-bd68915da166] Running
	I0819 19:17:29.997243  438295 system_pods.go:89] "etcd-embed-certs-024748" [846ff0f0-5399-43fd-8e7b-1f64997cd291] Running
	I0819 19:17:29.997247  438295 system_pods.go:89] "kube-apiserver-embed-certs-024748" [3ff558d6-e82e-47a0-bb81-15244bee6470] Running
	I0819 19:17:29.997252  438295 system_pods.go:89] "kube-controller-manager-embed-certs-024748" [993b82ba-e8e7-4896-a06b-87c4f08d5985] Running
	I0819 19:17:29.997256  438295 system_pods.go:89] "kube-proxy-bmmbh" [1f77f152-f5f4-40f6-9632-1eaa36b9ea31] Running
	I0819 19:17:29.997260  438295 system_pods.go:89] "kube-scheduler-embed-certs-024748" [34684d4c-2479-45c5-883b-158cf9f974f5] Running
	I0819 19:17:29.997267  438295 system_pods.go:89] "metrics-server-6867b74b74-kxcwh" [15f86629-d916-4fdc-9ecf-9cb1b6c83f85] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:17:29.997270  438295 system_pods.go:89] "storage-provisioner" [7acb6ce1-21b6-4cdd-a5cb-76d694fc0a38] Running
	I0819 19:17:29.997277  438295 system_pods.go:126] duration metric: took 5.507363ms to wait for k8s-apps to be running ...
	I0819 19:17:29.997283  438295 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 19:17:29.997329  438295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:17:30.015349  438295 system_svc.go:56] duration metric: took 18.05422ms WaitForService to wait for kubelet
	I0819 19:17:30.015385  438295 kubeadm.go:582] duration metric: took 4m46.148274918s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 19:17:30.015408  438295 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:17:30.019744  438295 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:17:30.019767  438295 node_conditions.go:123] node cpu capacity is 2
	I0819 19:17:30.019779  438295 node_conditions.go:105] duration metric: took 4.364435ms to run NodePressure ...
	I0819 19:17:30.019791  438295 start.go:241] waiting for startup goroutines ...
	I0819 19:17:30.019798  438295 start.go:246] waiting for cluster config update ...
	I0819 19:17:30.019809  438295 start.go:255] writing updated cluster config ...
	I0819 19:17:30.020080  438295 ssh_runner.go:195] Run: rm -f paused
	I0819 19:17:30.071945  438295 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 19:17:30.073912  438295 out.go:177] * Done! kubectl is now configured to use "embed-certs-024748" cluster and "default" namespace by default
	I0819 19:17:27.901471  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:29.901560  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:32.401214  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:34.402184  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:36.901979  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:38.902132  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:41.401103  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:43.889122  438716 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 19:17:43.889226  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:17:43.889441  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:17:43.402531  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:45.402739  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:48.889647  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:17:48.889896  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
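	kubeadm's kubelet-check is the HTTP call quoted in the message above; a minimal sketch of probing it and inspecting the kubelet state, assuming the commands are run inside the affected guest:

	    curl -sSL http://localhost:10248/healthz
	    sudo systemctl status kubelet
	    sudo journalctl -u kubelet -n 100 --no-pager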
	I0819 19:17:47.902033  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:48.402784  438001 pod_ready.go:82] duration metric: took 4m0.007573449s for pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace to be "Ready" ...
	E0819 19:17:48.402807  438001 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0819 19:17:48.402814  438001 pod_ready.go:39] duration metric: took 4m5.043625176s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:17:48.402837  438001 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:17:48.402866  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:17:48.402916  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:17:48.465049  438001 cri.go:89] found id: "cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094"
	I0819 19:17:48.465072  438001 cri.go:89] found id: ""
	I0819 19:17:48.465081  438001 logs.go:276] 1 containers: [cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094]
	I0819 19:17:48.465157  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:48.469640  438001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:17:48.469708  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:17:48.506800  438001 cri.go:89] found id: "27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a"
	I0819 19:17:48.506825  438001 cri.go:89] found id: ""
	I0819 19:17:48.506836  438001 logs.go:276] 1 containers: [27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a]
	I0819 19:17:48.506900  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:48.511810  438001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:17:48.511899  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:17:48.558215  438001 cri.go:89] found id: "6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa"
	I0819 19:17:48.558240  438001 cri.go:89] found id: ""
	I0819 19:17:48.558250  438001 logs.go:276] 1 containers: [6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa]
	I0819 19:17:48.558308  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:48.562785  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:17:48.562844  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:17:48.602715  438001 cri.go:89] found id: "123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f"
	I0819 19:17:48.602738  438001 cri.go:89] found id: ""
	I0819 19:17:48.602748  438001 logs.go:276] 1 containers: [123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f]
	I0819 19:17:48.602815  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:48.607456  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:17:48.607512  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:17:48.648285  438001 cri.go:89] found id: "236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a"
	I0819 19:17:48.648314  438001 cri.go:89] found id: ""
	I0819 19:17:48.648324  438001 logs.go:276] 1 containers: [236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a]
	I0819 19:17:48.648374  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:48.653772  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:17:48.653830  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:17:48.697336  438001 cri.go:89] found id: "390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346"
	I0819 19:17:48.697365  438001 cri.go:89] found id: ""
	I0819 19:17:48.697376  438001 logs.go:276] 1 containers: [390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346]
	I0819 19:17:48.697438  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:48.701661  438001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:17:48.701726  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:17:48.737952  438001 cri.go:89] found id: ""
	I0819 19:17:48.737990  438001 logs.go:276] 0 containers: []
	W0819 19:17:48.738002  438001 logs.go:278] No container was found matching "kindnet"
	I0819 19:17:48.738010  438001 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 19:17:48.738076  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 19:17:48.780047  438001 cri.go:89] found id: "fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd"
	I0819 19:17:48.780076  438001 cri.go:89] found id: "482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6"
	I0819 19:17:48.780082  438001 cri.go:89] found id: ""
	I0819 19:17:48.780092  438001 logs.go:276] 2 containers: [fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd 482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6]
	I0819 19:17:48.780168  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:48.784558  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:48.788803  438001 logs.go:123] Gathering logs for kube-apiserver [cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094] ...
	I0819 19:17:48.788826  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094"
	I0819 19:17:48.843469  438001 logs.go:123] Gathering logs for kube-scheduler [123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f] ...
	I0819 19:17:48.843501  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f"
	I0819 19:17:48.884461  438001 logs.go:123] Gathering logs for kube-proxy [236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a] ...
	I0819 19:17:48.884495  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a"
	I0819 19:17:48.927064  438001 logs.go:123] Gathering logs for storage-provisioner [fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd] ...
	I0819 19:17:48.927093  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd"
	I0819 19:17:48.963812  438001 logs.go:123] Gathering logs for container status ...
	I0819 19:17:48.963845  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:17:49.017381  438001 logs.go:123] Gathering logs for kubelet ...
	I0819 19:17:49.017420  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:17:49.093572  438001 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:17:49.093614  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 19:17:49.236680  438001 logs.go:123] Gathering logs for coredns [6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa] ...
	I0819 19:17:49.236721  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa"
	I0819 19:17:49.274636  438001 logs.go:123] Gathering logs for kube-controller-manager [390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346] ...
	I0819 19:17:49.274677  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346"
	I0819 19:17:49.326208  438001 logs.go:123] Gathering logs for storage-provisioner [482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6] ...
	I0819 19:17:49.326242  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6"
	I0819 19:17:49.363589  438001 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:17:49.363628  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:17:49.841705  438001 logs.go:123] Gathering logs for dmesg ...
	I0819 19:17:49.841757  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:17:49.858466  438001 logs.go:123] Gathering logs for etcd [27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a] ...
	I0819 19:17:49.858504  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a"
	I0819 19:17:52.406197  438001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:17:52.422951  438001 api_server.go:72] duration metric: took 4m16.822246565s to wait for apiserver process to appear ...
	I0819 19:17:52.422981  438001 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:17:52.423019  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:17:52.423075  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:17:52.464305  438001 cri.go:89] found id: "cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094"
	I0819 19:17:52.464327  438001 cri.go:89] found id: ""
	I0819 19:17:52.464335  438001 logs.go:276] 1 containers: [cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094]
	I0819 19:17:52.464387  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:52.468824  438001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:17:52.468904  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:17:52.508907  438001 cri.go:89] found id: "27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a"
	I0819 19:17:52.508929  438001 cri.go:89] found id: ""
	I0819 19:17:52.508937  438001 logs.go:276] 1 containers: [27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a]
	I0819 19:17:52.508998  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:52.513206  438001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:17:52.513281  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:17:52.553908  438001 cri.go:89] found id: "6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa"
	I0819 19:17:52.553940  438001 cri.go:89] found id: ""
	I0819 19:17:52.553948  438001 logs.go:276] 1 containers: [6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa]
	I0819 19:17:52.554007  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:52.558420  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:17:52.558487  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:17:52.598450  438001 cri.go:89] found id: "123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f"
	I0819 19:17:52.598480  438001 cri.go:89] found id: ""
	I0819 19:17:52.598491  438001 logs.go:276] 1 containers: [123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f]
	I0819 19:17:52.598564  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:52.603421  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:17:52.603485  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:17:52.639017  438001 cri.go:89] found id: "236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a"
	I0819 19:17:52.639049  438001 cri.go:89] found id: ""
	I0819 19:17:52.639060  438001 logs.go:276] 1 containers: [236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a]
	I0819 19:17:52.639129  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:52.645313  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:17:52.645392  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:17:52.687266  438001 cri.go:89] found id: "390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346"
	I0819 19:17:52.687296  438001 cri.go:89] found id: ""
	I0819 19:17:52.687305  438001 logs.go:276] 1 containers: [390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346]
	I0819 19:17:52.687369  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:52.691770  438001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:17:52.691830  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:17:52.734067  438001 cri.go:89] found id: ""
	I0819 19:17:52.734098  438001 logs.go:276] 0 containers: []
	W0819 19:17:52.734107  438001 logs.go:278] No container was found matching "kindnet"
	I0819 19:17:52.734113  438001 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 19:17:52.734171  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 19:17:52.781039  438001 cri.go:89] found id: "fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd"
	I0819 19:17:52.781062  438001 cri.go:89] found id: "482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6"
	I0819 19:17:52.781066  438001 cri.go:89] found id: ""
	I0819 19:17:52.781074  438001 logs.go:276] 2 containers: [fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd 482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6]
	I0819 19:17:52.781135  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:52.785730  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:52.789946  438001 logs.go:123] Gathering logs for kube-scheduler [123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f] ...
	I0819 19:17:52.789978  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f"
	I0819 19:17:52.830509  438001 logs.go:123] Gathering logs for kube-controller-manager [390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346] ...
	I0819 19:17:52.830541  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346"
	I0819 19:17:52.892964  438001 logs.go:123] Gathering logs for container status ...
	I0819 19:17:52.893017  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:17:52.947999  438001 logs.go:123] Gathering logs for kubelet ...
	I0819 19:17:52.948028  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:17:53.019377  438001 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:17:53.019423  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 19:17:53.134032  438001 logs.go:123] Gathering logs for kube-apiserver [cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094] ...
	I0819 19:17:53.134069  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094"
	I0819 19:17:53.186159  438001 logs.go:123] Gathering logs for etcd [27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a] ...
	I0819 19:17:53.186193  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a"
	I0819 19:17:53.236918  438001 logs.go:123] Gathering logs for storage-provisioner [482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6] ...
	I0819 19:17:53.236949  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6"
	I0819 19:17:53.275211  438001 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:17:53.275242  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:17:53.710352  438001 logs.go:123] Gathering logs for dmesg ...
	I0819 19:17:53.710396  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:17:53.726691  438001 logs.go:123] Gathering logs for coredns [6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa] ...
	I0819 19:17:53.726731  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa"
	I0819 19:17:53.768322  438001 logs.go:123] Gathering logs for kube-proxy [236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a] ...
	I0819 19:17:53.768361  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a"
	I0819 19:17:53.808546  438001 logs.go:123] Gathering logs for storage-provisioner [fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd] ...
	I0819 19:17:53.808577  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd"
	I0819 19:17:56.362339  438001 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I0819 19:17:56.366636  438001 api_server.go:279] https://192.168.39.106:8443/healthz returned 200:
	ok
	I0819 19:17:56.367838  438001 api_server.go:141] control plane version: v1.31.0
	I0819 19:17:56.367867  438001 api_server.go:131] duration metric: took 3.944877317s to wait for apiserver health ...
	I0819 19:17:56.367891  438001 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:17:56.367925  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:17:56.367991  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:17:56.412151  438001 cri.go:89] found id: "cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094"
	I0819 19:17:56.412179  438001 cri.go:89] found id: ""
	I0819 19:17:56.412187  438001 logs.go:276] 1 containers: [cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094]
	I0819 19:17:56.412247  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:56.416620  438001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:17:56.416795  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:17:56.456888  438001 cri.go:89] found id: "27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a"
	I0819 19:17:56.456918  438001 cri.go:89] found id: ""
	I0819 19:17:56.456927  438001 logs.go:276] 1 containers: [27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a]
	I0819 19:17:56.456984  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:56.461563  438001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:17:56.461667  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:17:56.506990  438001 cri.go:89] found id: "6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa"
	I0819 19:17:56.507018  438001 cri.go:89] found id: ""
	I0819 19:17:56.507028  438001 logs.go:276] 1 containers: [6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa]
	I0819 19:17:56.507099  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:56.511547  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:17:56.511616  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:17:56.551734  438001 cri.go:89] found id: "123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f"
	I0819 19:17:56.551761  438001 cri.go:89] found id: ""
	I0819 19:17:56.551772  438001 logs.go:276] 1 containers: [123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f]
	I0819 19:17:56.551837  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:56.556963  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:17:56.557039  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:17:56.601862  438001 cri.go:89] found id: "236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a"
	I0819 19:17:56.601892  438001 cri.go:89] found id: ""
	I0819 19:17:56.601902  438001 logs.go:276] 1 containers: [236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a]
	I0819 19:17:56.601971  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:56.606618  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:17:56.606706  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:17:56.649476  438001 cri.go:89] found id: "390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346"
	I0819 19:17:56.649501  438001 cri.go:89] found id: ""
	I0819 19:17:56.649510  438001 logs.go:276] 1 containers: [390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346]
	I0819 19:17:56.649561  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:56.654009  438001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:17:56.654071  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:17:56.707479  438001 cri.go:89] found id: ""
	I0819 19:17:56.707506  438001 logs.go:276] 0 containers: []
	W0819 19:17:56.707518  438001 logs.go:278] No container was found matching "kindnet"
	I0819 19:17:56.707527  438001 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 19:17:56.707585  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 19:17:56.749937  438001 cri.go:89] found id: "fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd"
	I0819 19:17:56.749961  438001 cri.go:89] found id: "482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6"
	I0819 19:17:56.749966  438001 cri.go:89] found id: ""
	I0819 19:17:56.749973  438001 logs.go:276] 2 containers: [fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd 482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6]
	I0819 19:17:56.750026  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:56.754791  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:56.758672  438001 logs.go:123] Gathering logs for etcd [27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a] ...
	I0819 19:17:56.758700  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a"
	I0819 19:17:56.811420  438001 logs.go:123] Gathering logs for kube-controller-manager [390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346] ...
	I0819 19:17:56.811461  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346"
	I0819 19:17:56.871550  438001 logs.go:123] Gathering logs for storage-provisioner [482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6] ...
	I0819 19:17:56.871588  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6"
	I0819 19:17:56.918183  438001 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:17:56.918224  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:17:57.297614  438001 logs.go:123] Gathering logs for container status ...
	I0819 19:17:57.297653  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:17:57.339092  438001 logs.go:123] Gathering logs for dmesg ...
	I0819 19:17:57.339127  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:17:57.355787  438001 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:17:57.355820  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 19:17:57.486287  438001 logs.go:123] Gathering logs for kube-apiserver [cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094] ...
	I0819 19:17:57.486328  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094"
	I0819 19:17:57.535864  438001 logs.go:123] Gathering logs for coredns [6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa] ...
	I0819 19:17:57.535903  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa"
	I0819 19:17:57.577211  438001 logs.go:123] Gathering logs for kube-scheduler [123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f] ...
	I0819 19:17:57.577248  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f"
	I0819 19:17:57.615928  438001 logs.go:123] Gathering logs for kube-proxy [236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a] ...
	I0819 19:17:57.615962  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a"
	I0819 19:17:57.655413  438001 logs.go:123] Gathering logs for storage-provisioner [fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd] ...
	I0819 19:17:57.655445  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd"
	I0819 19:17:57.704470  438001 logs.go:123] Gathering logs for kubelet ...
	I0819 19:17:57.704502  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:18:00.281191  438001 system_pods.go:59] 8 kube-system pods found
	I0819 19:18:00.281223  438001 system_pods.go:61] "coredns-6f6b679f8f-22lbt" [c8a5cabd-41d4-41cb-91c1-2db1f3471db3] Running
	I0819 19:18:00.281228  438001 system_pods.go:61] "etcd-no-preload-278232" [36d555a1-33e4-4c6c-b24e-2fee4fd84f2b] Running
	I0819 19:18:00.281232  438001 system_pods.go:61] "kube-apiserver-no-preload-278232" [af7173e5-c4ac-4ece-b8b9-bb81cb6b9bfd] Running
	I0819 19:18:00.281235  438001 system_pods.go:61] "kube-controller-manager-no-preload-278232" [2463d97a-5221-40ce-8fd7-08151165d6f7] Running
	I0819 19:18:00.281238  438001 system_pods.go:61] "kube-proxy-rcf49" [85d5814a-1ba9-46be-ab11-17bf40c0f029] Running
	I0819 19:18:00.281241  438001 system_pods.go:61] "kube-scheduler-no-preload-278232" [3b327704-f70c-4d6f-a774-15427a305472] Running
	I0819 19:18:00.281247  438001 system_pods.go:61] "metrics-server-6867b74b74-vxwrs" [e8b74128-b393-4f0f-90fe-e05f20d54acd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:18:00.281252  438001 system_pods.go:61] "storage-provisioner" [24766475-1a5b-4f1a-9350-3e891b5272cc] Running
	I0819 19:18:00.281260  438001 system_pods.go:74] duration metric: took 3.913361626s to wait for pod list to return data ...
	I0819 19:18:00.281267  438001 default_sa.go:34] waiting for default service account to be created ...
	I0819 19:18:00.283873  438001 default_sa.go:45] found service account: "default"
	I0819 19:18:00.283898  438001 default_sa.go:55] duration metric: took 2.625775ms for default service account to be created ...
	I0819 19:18:00.283907  438001 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 19:18:00.288985  438001 system_pods.go:86] 8 kube-system pods found
	I0819 19:18:00.289012  438001 system_pods.go:89] "coredns-6f6b679f8f-22lbt" [c8a5cabd-41d4-41cb-91c1-2db1f3471db3] Running
	I0819 19:18:00.289018  438001 system_pods.go:89] "etcd-no-preload-278232" [36d555a1-33e4-4c6c-b24e-2fee4fd84f2b] Running
	I0819 19:18:00.289022  438001 system_pods.go:89] "kube-apiserver-no-preload-278232" [af7173e5-c4ac-4ece-b8b9-bb81cb6b9bfd] Running
	I0819 19:18:00.289028  438001 system_pods.go:89] "kube-controller-manager-no-preload-278232" [2463d97a-5221-40ce-8fd7-08151165d6f7] Running
	I0819 19:18:00.289033  438001 system_pods.go:89] "kube-proxy-rcf49" [85d5814a-1ba9-46be-ab11-17bf40c0f029] Running
	I0819 19:18:00.289038  438001 system_pods.go:89] "kube-scheduler-no-preload-278232" [3b327704-f70c-4d6f-a774-15427a305472] Running
	I0819 19:18:00.289047  438001 system_pods.go:89] "metrics-server-6867b74b74-vxwrs" [e8b74128-b393-4f0f-90fe-e05f20d54acd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:18:00.289056  438001 system_pods.go:89] "storage-provisioner" [24766475-1a5b-4f1a-9350-3e891b5272cc] Running
	I0819 19:18:00.289067  438001 system_pods.go:126] duration metric: took 5.154385ms to wait for k8s-apps to be running ...
	I0819 19:18:00.289081  438001 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 19:18:00.289132  438001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:18:00.307128  438001 system_svc.go:56] duration metric: took 18.036826ms WaitForService to wait for kubelet
	I0819 19:18:00.307160  438001 kubeadm.go:582] duration metric: took 4m24.706461383s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 19:18:00.307183  438001 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:18:00.309818  438001 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:18:00.309866  438001 node_conditions.go:123] node cpu capacity is 2
	I0819 19:18:00.309879  438001 node_conditions.go:105] duration metric: took 2.691554ms to run NodePressure ...
	I0819 19:18:00.309892  438001 start.go:241] waiting for startup goroutines ...
	I0819 19:18:00.309901  438001 start.go:246] waiting for cluster config update ...
	I0819 19:18:00.309918  438001 start.go:255] writing updated cluster config ...
	I0819 19:18:00.310268  438001 ssh_runner.go:195] Run: rm -f paused
	I0819 19:18:00.366211  438001 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 19:18:00.368280  438001 out.go:177] * Done! kubectl is now configured to use "no-preload-278232" cluster and "default" namespace by default
	I0819 19:17:58.890611  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:17:58.890832  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:18:18.891960  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:18:18.892243  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:18:58.894609  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:18:58.894854  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:18:58.894869  438716 kubeadm.go:310] 
	I0819 19:18:58.894912  438716 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 19:18:58.894967  438716 kubeadm.go:310] 		timed out waiting for the condition
	I0819 19:18:58.894981  438716 kubeadm.go:310] 
	I0819 19:18:58.895024  438716 kubeadm.go:310] 	This error is likely caused by:
	I0819 19:18:58.895072  438716 kubeadm.go:310] 		- The kubelet is not running
	I0819 19:18:58.895344  438716 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 19:18:58.895388  438716 kubeadm.go:310] 
	I0819 19:18:58.895518  438716 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 19:18:58.895613  438716 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 19:18:58.895668  438716 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 19:18:58.895695  438716 kubeadm.go:310] 
	I0819 19:18:58.895839  438716 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 19:18:58.895959  438716 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 19:18:58.895972  438716 kubeadm.go:310] 
	I0819 19:18:58.896072  438716 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 19:18:58.896154  438716 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 19:18:58.896220  438716 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 19:18:58.896284  438716 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 19:18:58.896314  438716 kubeadm.go:310] 
	I0819 19:18:58.896819  438716 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 19:18:58.896946  438716 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 19:18:58.897028  438716 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0819 19:18:58.897193  438716 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0819 19:18:58.897249  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 19:18:59.361073  438716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:18:59.375791  438716 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:18:59.387650  438716 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:18:59.387697  438716 kubeadm.go:157] found existing configuration files:
	
	I0819 19:18:59.387756  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 19:18:59.397345  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:18:59.397409  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:18:59.408060  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 19:18:59.417658  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:18:59.417731  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:18:59.427765  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 19:18:59.437636  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:18:59.437712  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:18:59.447506  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 19:18:59.457100  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:18:59.457165  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 19:18:59.467185  438716 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 19:18:59.540706  438716 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 19:18:59.541005  438716 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 19:18:59.694109  438716 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 19:18:59.694238  438716 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 19:18:59.694350  438716 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 19:18:59.874268  438716 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 19:18:59.876259  438716 out.go:235]   - Generating certificates and keys ...
	I0819 19:18:59.876362  438716 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 19:18:59.876441  438716 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 19:18:59.876569  438716 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 19:18:59.876654  438716 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 19:18:59.876751  438716 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 19:18:59.876824  438716 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 19:18:59.876900  438716 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 19:18:59.877076  438716 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 19:18:59.877571  438716 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 19:18:59.877997  438716 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 19:18:59.878139  438716 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 19:18:59.878241  438716 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 19:19:00.153380  438716 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 19:19:00.359863  438716 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 19:19:00.470797  438716 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 19:19:00.590041  438716 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 19:19:00.614332  438716 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 19:19:00.615415  438716 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 19:19:00.615473  438716 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 19:19:00.756167  438716 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 19:19:00.757737  438716 out.go:235]   - Booting up control plane ...
	I0819 19:19:00.757873  438716 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 19:19:00.761484  438716 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 19:19:00.762431  438716 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 19:19:00.763241  438716 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 19:19:00.766155  438716 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 19:19:40.770166  438716 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 19:19:40.770378  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:19:40.770543  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:19:45.771352  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:19:45.771587  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:19:55.772027  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:19:55.772243  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:20:15.773008  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:20:15.773238  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:20:55.771311  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:20:55.771517  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:20:55.771530  438716 kubeadm.go:310] 
	I0819 19:20:55.771578  438716 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 19:20:55.771750  438716 kubeadm.go:310] 		timed out waiting for the condition
	I0819 19:20:55.771784  438716 kubeadm.go:310] 
	I0819 19:20:55.771845  438716 kubeadm.go:310] 	This error is likely caused by:
	I0819 19:20:55.771891  438716 kubeadm.go:310] 		- The kubelet is not running
	I0819 19:20:55.772014  438716 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 19:20:55.772027  438716 kubeadm.go:310] 
	I0819 19:20:55.772125  438716 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 19:20:55.772162  438716 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 19:20:55.772188  438716 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 19:20:55.772196  438716 kubeadm.go:310] 
	I0819 19:20:55.772272  438716 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 19:20:55.772336  438716 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 19:20:55.772343  438716 kubeadm.go:310] 
	I0819 19:20:55.772439  438716 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 19:20:55.772520  438716 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 19:20:55.772581  438716 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 19:20:55.772637  438716 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 19:20:55.772645  438716 kubeadm.go:310] 
	I0819 19:20:55.773758  438716 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 19:20:55.773880  438716 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 19:20:55.773971  438716 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0819 19:20:55.774067  438716 kubeadm.go:394] duration metric: took 7m57.361589371s to StartCluster
	I0819 19:20:55.774157  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:20:55.774243  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:20:55.818428  438716 cri.go:89] found id: ""
	I0819 19:20:55.818460  438716 logs.go:276] 0 containers: []
	W0819 19:20:55.818468  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:20:55.818475  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:20:55.818535  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:20:55.857714  438716 cri.go:89] found id: ""
	I0819 19:20:55.857747  438716 logs.go:276] 0 containers: []
	W0819 19:20:55.857758  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:20:55.857766  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:20:55.857841  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:20:55.891917  438716 cri.go:89] found id: ""
	I0819 19:20:55.891948  438716 logs.go:276] 0 containers: []
	W0819 19:20:55.891967  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:20:55.891976  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:20:55.892046  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:20:55.930608  438716 cri.go:89] found id: ""
	I0819 19:20:55.930643  438716 logs.go:276] 0 containers: []
	W0819 19:20:55.930656  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:20:55.930665  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:20:55.930734  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:20:55.966563  438716 cri.go:89] found id: ""
	I0819 19:20:55.966591  438716 logs.go:276] 0 containers: []
	W0819 19:20:55.966600  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:20:55.966607  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:20:55.966670  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:20:56.010392  438716 cri.go:89] found id: ""
	I0819 19:20:56.010421  438716 logs.go:276] 0 containers: []
	W0819 19:20:56.010430  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:20:56.010436  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:20:56.010491  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:20:56.066940  438716 cri.go:89] found id: ""
	I0819 19:20:56.066973  438716 logs.go:276] 0 containers: []
	W0819 19:20:56.066985  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:20:56.066994  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:20:56.067062  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:20:56.118852  438716 cri.go:89] found id: ""
	I0819 19:20:56.118881  438716 logs.go:276] 0 containers: []
	W0819 19:20:56.118894  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:20:56.118909  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:20:56.118925  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:20:56.158224  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:20:56.158263  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:20:56.211882  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:20:56.211925  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:20:56.228082  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:20:56.228124  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:20:56.307857  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:20:56.307880  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:20:56.307893  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0819 19:20:56.414797  438716 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0819 19:20:56.414885  438716 out.go:270] * 
	W0819 19:20:56.415020  438716 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 19:20:56.415039  438716 out.go:270] * 
	W0819 19:20:56.416031  438716 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 19:20:56.419869  438716 out.go:201] 
	W0819 19:20:56.421262  438716 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
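
For reference, the two crictl steps quoted above can be combined into a single pass (a minimal sketch, not part of the captured output; the kube-apiserver name filter and the CID variable are illustrative placeholders, and the socket path is the one shown in the log):

	# grab the ID of the newest kube-apiserver container (running or exited), then dump its logs
	CID="$(crictl --runtime-endpoint /var/run/crio/crio.sock ps -a --name kube-apiserver -q | head -n1)"
	crictl --runtime-endpoint /var/run/crio/crio.sock logs "$CID"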
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 19:20:56.421319  438716 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0819 19:20:56.421351  438716 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0819 19:20:56.422942  438716 out.go:201] 
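
Acting on the suggestion above would amount to retrying the start with the kubelet cgroup driver pinned to systemd (a hedged sketch, not part of the captured output; the profile name no-preload-278232 is taken from the node logs below, and this run does not establish that the flag actually resolves the failure):

	minikube start -p no-preload-278232 --extra-config=kubelet.cgroup-driver=systemd

If the kubelet still fails its health check afterwards, 'journalctl -xeu kubelet' on the node remains the primary source for the underlying error.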
	
	
	==> CRI-O <==
	Aug 19 19:27:02 no-preload-278232 crio[730]: time="2024-08-19 19:27:02.414017064Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095622413986343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0be2c1d6-0b96-4f5a-8c48-6769803a2208 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:27:02 no-preload-278232 crio[730]: time="2024-08-19 19:27:02.414509755Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cdb885a3-dc07-473c-b9e4-8b9cf1e7ac72 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:27:02 no-preload-278232 crio[730]: time="2024-08-19 19:27:02.414577514Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cdb885a3-dc07-473c-b9e4-8b9cf1e7ac72 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:27:02 no-preload-278232 crio[730]: time="2024-08-19 19:27:02.414880084Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd,PodSandboxId:0a0904912f9d1ec30f183f6dc2a4a978a812a54d7567d6009ba727db55d1bdd0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724094844169322731,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24766475-1a5b-4f1a-9350-3e891b5272cc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddf310788bc301171c712e1c8fa8d1e15b7f3597213ff1831e1df21f82a06aad,PodSandboxId:ddcc63d3b2d0261556759c2a90de5c2a60a41c054dab36b627f123bde0c70a7f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724094823934448867,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1cfd7b93-e926-4ffd-93f3-8e0f9d0d382c,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa,PodSandboxId:483740644dca99e5dad0d73df753462357782d8dce4e00f5f128e873a5ed1857,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724094820764207455,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-22lbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8a5cabd-41d4-41cb-91c1-2db1f3471db3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6,PodSandboxId:0a0904912f9d1ec30f183f6dc2a4a978a812a54d7567d6009ba727db55d1bdd0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724094813357247280,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
4766475-1a5b-4f1a-9350-3e891b5272cc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a,PodSandboxId:d12040956306fe1996c8fd63d665b3fa8ef5971ae8f159bfb02265f834d22f6e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724094813417123420,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rcf49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85d5814a-1ba9-46be-ab11-17bf40c0f0
29,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f,PodSandboxId:147c748ad560c6509d9c63140135061c77a73543e173fefb595b86c17686ee3a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724094809606945404,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-278232,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b7b93e2ee261f2b15c9a4518a7a53db,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a,PodSandboxId:a45488cfda6169d18ff350bcc851621c0f1ffa780fa5e78cc370b1cfd51871c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724094809639535991,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-278232,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47880a4872cf6261f8f118c958bba0f1,},Annotations:map[string]string{io.kubernetes.containe
r.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094,PodSandboxId:f5d20a4943041665d7b7508782190955db5244b6ccd2d33c0939c602f6543c81,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724094809579476955,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-278232,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc2710bbd7c397cccb826f5bab023f24,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0
944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346,PodSandboxId:b5702869c384335cd8f5ac98c625b2d394a097bdbf336d29a84450ae213a4c7f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724094809574956402,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-278232,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fb7d810b3d18af9a02af1eab5fdf39a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cdb885a3-dc07-473c-b9e4-8b9cf1e7ac72 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:27:02 no-preload-278232 crio[730]: time="2024-08-19 19:27:02.453731044Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=06b157c8-e93c-4e57-b763-ce2eb485027d name=/runtime.v1.RuntimeService/Version
	Aug 19 19:27:02 no-preload-278232 crio[730]: time="2024-08-19 19:27:02.453821216Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=06b157c8-e93c-4e57-b763-ce2eb485027d name=/runtime.v1.RuntimeService/Version
	Aug 19 19:27:02 no-preload-278232 crio[730]: time="2024-08-19 19:27:02.455223682Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d2020372-8607-4aa7-891e-7ac662ef93f1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:27:02 no-preload-278232 crio[730]: time="2024-08-19 19:27:02.455558555Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095622455536503,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d2020372-8607-4aa7-891e-7ac662ef93f1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:27:02 no-preload-278232 crio[730]: time="2024-08-19 19:27:02.456151176Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b3d84fb3-383a-4fa6-abbf-b0dbec7ae9ab name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:27:02 no-preload-278232 crio[730]: time="2024-08-19 19:27:02.456205231Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b3d84fb3-383a-4fa6-abbf-b0dbec7ae9ab name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:27:02 no-preload-278232 crio[730]: time="2024-08-19 19:27:02.456410851Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd,PodSandboxId:0a0904912f9d1ec30f183f6dc2a4a978a812a54d7567d6009ba727db55d1bdd0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724094844169322731,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24766475-1a5b-4f1a-9350-3e891b5272cc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddf310788bc301171c712e1c8fa8d1e15b7f3597213ff1831e1df21f82a06aad,PodSandboxId:ddcc63d3b2d0261556759c2a90de5c2a60a41c054dab36b627f123bde0c70a7f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724094823934448867,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1cfd7b93-e926-4ffd-93f3-8e0f9d0d382c,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa,PodSandboxId:483740644dca99e5dad0d73df753462357782d8dce4e00f5f128e873a5ed1857,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724094820764207455,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-22lbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8a5cabd-41d4-41cb-91c1-2db1f3471db3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6,PodSandboxId:0a0904912f9d1ec30f183f6dc2a4a978a812a54d7567d6009ba727db55d1bdd0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724094813357247280,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
4766475-1a5b-4f1a-9350-3e891b5272cc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a,PodSandboxId:d12040956306fe1996c8fd63d665b3fa8ef5971ae8f159bfb02265f834d22f6e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724094813417123420,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rcf49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85d5814a-1ba9-46be-ab11-17bf40c0f0
29,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f,PodSandboxId:147c748ad560c6509d9c63140135061c77a73543e173fefb595b86c17686ee3a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724094809606945404,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-278232,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b7b93e2ee261f2b15c9a4518a7a53db,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a,PodSandboxId:a45488cfda6169d18ff350bcc851621c0f1ffa780fa5e78cc370b1cfd51871c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724094809639535991,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-278232,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47880a4872cf6261f8f118c958bba0f1,},Annotations:map[string]string{io.kubernetes.containe
r.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094,PodSandboxId:f5d20a4943041665d7b7508782190955db5244b6ccd2d33c0939c602f6543c81,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724094809579476955,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-278232,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc2710bbd7c397cccb826f5bab023f24,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0
944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346,PodSandboxId:b5702869c384335cd8f5ac98c625b2d394a097bdbf336d29a84450ae213a4c7f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724094809574956402,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-278232,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fb7d810b3d18af9a02af1eab5fdf39a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b3d84fb3-383a-4fa6-abbf-b0dbec7ae9ab name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:27:02 no-preload-278232 crio[730]: time="2024-08-19 19:27:02.494828333Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=37f7ee3f-8b44-456c-b1df-318de22c2c2e name=/runtime.v1.RuntimeService/Version
	Aug 19 19:27:02 no-preload-278232 crio[730]: time="2024-08-19 19:27:02.494922574Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=37f7ee3f-8b44-456c-b1df-318de22c2c2e name=/runtime.v1.RuntimeService/Version
	Aug 19 19:27:02 no-preload-278232 crio[730]: time="2024-08-19 19:27:02.499277702Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1e1b0354-6f50-4fb3-ad26-23f3c403fc38 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:27:02 no-preload-278232 crio[730]: time="2024-08-19 19:27:02.499721812Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095622499602425,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1e1b0354-6f50-4fb3-ad26-23f3c403fc38 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:27:02 no-preload-278232 crio[730]: time="2024-08-19 19:27:02.500373258Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cbbd0e2e-f4c8-43f8-af31-9facba329b0e name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:27:02 no-preload-278232 crio[730]: time="2024-08-19 19:27:02.500425611Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cbbd0e2e-f4c8-43f8-af31-9facba329b0e name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:27:02 no-preload-278232 crio[730]: time="2024-08-19 19:27:02.500776133Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd,PodSandboxId:0a0904912f9d1ec30f183f6dc2a4a978a812a54d7567d6009ba727db55d1bdd0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724094844169322731,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24766475-1a5b-4f1a-9350-3e891b5272cc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddf310788bc301171c712e1c8fa8d1e15b7f3597213ff1831e1df21f82a06aad,PodSandboxId:ddcc63d3b2d0261556759c2a90de5c2a60a41c054dab36b627f123bde0c70a7f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724094823934448867,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1cfd7b93-e926-4ffd-93f3-8e0f9d0d382c,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa,PodSandboxId:483740644dca99e5dad0d73df753462357782d8dce4e00f5f128e873a5ed1857,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724094820764207455,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-22lbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8a5cabd-41d4-41cb-91c1-2db1f3471db3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6,PodSandboxId:0a0904912f9d1ec30f183f6dc2a4a978a812a54d7567d6009ba727db55d1bdd0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724094813357247280,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
4766475-1a5b-4f1a-9350-3e891b5272cc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a,PodSandboxId:d12040956306fe1996c8fd63d665b3fa8ef5971ae8f159bfb02265f834d22f6e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724094813417123420,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rcf49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85d5814a-1ba9-46be-ab11-17bf40c0f0
29,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f,PodSandboxId:147c748ad560c6509d9c63140135061c77a73543e173fefb595b86c17686ee3a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724094809606945404,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-278232,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b7b93e2ee261f2b15c9a4518a7a53db,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a,PodSandboxId:a45488cfda6169d18ff350bcc851621c0f1ffa780fa5e78cc370b1cfd51871c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724094809639535991,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-278232,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47880a4872cf6261f8f118c958bba0f1,},Annotations:map[string]string{io.kubernetes.containe
r.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094,PodSandboxId:f5d20a4943041665d7b7508782190955db5244b6ccd2d33c0939c602f6543c81,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724094809579476955,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-278232,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc2710bbd7c397cccb826f5bab023f24,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0
944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346,PodSandboxId:b5702869c384335cd8f5ac98c625b2d394a097bdbf336d29a84450ae213a4c7f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724094809574956402,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-278232,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fb7d810b3d18af9a02af1eab5fdf39a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cbbd0e2e-f4c8-43f8-af31-9facba329b0e name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:27:02 no-preload-278232 crio[730]: time="2024-08-19 19:27:02.535896128Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8a7acd9c-dbaa-4336-9b23-c36e1d892d84 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:27:02 no-preload-278232 crio[730]: time="2024-08-19 19:27:02.535998180Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8a7acd9c-dbaa-4336-9b23-c36e1d892d84 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:27:02 no-preload-278232 crio[730]: time="2024-08-19 19:27:02.537332642Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=17be3195-3a1a-4f3f-94d8-9dbcc31c83fb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:27:02 no-preload-278232 crio[730]: time="2024-08-19 19:27:02.537960844Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095622537936008,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=17be3195-3a1a-4f3f-94d8-9dbcc31c83fb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:27:02 no-preload-278232 crio[730]: time="2024-08-19 19:27:02.539118373Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5ddee826-fb24-4368-b2d9-4d8de2a00dba name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:27:02 no-preload-278232 crio[730]: time="2024-08-19 19:27:02.539170725Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5ddee826-fb24-4368-b2d9-4d8de2a00dba name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:27:02 no-preload-278232 crio[730]: time="2024-08-19 19:27:02.539432777Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd,PodSandboxId:0a0904912f9d1ec30f183f6dc2a4a978a812a54d7567d6009ba727db55d1bdd0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724094844169322731,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24766475-1a5b-4f1a-9350-3e891b5272cc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddf310788bc301171c712e1c8fa8d1e15b7f3597213ff1831e1df21f82a06aad,PodSandboxId:ddcc63d3b2d0261556759c2a90de5c2a60a41c054dab36b627f123bde0c70a7f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724094823934448867,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1cfd7b93-e926-4ffd-93f3-8e0f9d0d382c,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa,PodSandboxId:483740644dca99e5dad0d73df753462357782d8dce4e00f5f128e873a5ed1857,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724094820764207455,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-22lbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8a5cabd-41d4-41cb-91c1-2db1f3471db3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6,PodSandboxId:0a0904912f9d1ec30f183f6dc2a4a978a812a54d7567d6009ba727db55d1bdd0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724094813357247280,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
4766475-1a5b-4f1a-9350-3e891b5272cc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a,PodSandboxId:d12040956306fe1996c8fd63d665b3fa8ef5971ae8f159bfb02265f834d22f6e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724094813417123420,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rcf49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85d5814a-1ba9-46be-ab11-17bf40c0f0
29,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f,PodSandboxId:147c748ad560c6509d9c63140135061c77a73543e173fefb595b86c17686ee3a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724094809606945404,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-278232,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b7b93e2ee261f2b15c9a4518a7a53db,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a,PodSandboxId:a45488cfda6169d18ff350bcc851621c0f1ffa780fa5e78cc370b1cfd51871c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724094809639535991,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-278232,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47880a4872cf6261f8f118c958bba0f1,},Annotations:map[string]string{io.kubernetes.containe
r.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094,PodSandboxId:f5d20a4943041665d7b7508782190955db5244b6ccd2d33c0939c602f6543c81,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724094809579476955,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-278232,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc2710bbd7c397cccb826f5bab023f24,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0
944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346,PodSandboxId:b5702869c384335cd8f5ac98c625b2d394a097bdbf336d29a84450ae213a4c7f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724094809574956402,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-278232,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fb7d810b3d18af9a02af1eab5fdf39a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5ddee826-fb24-4368-b2d9-4d8de2a00dba name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fd16c88623359       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   0a0904912f9d1       storage-provisioner
	ddf310788bc30       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   ddcc63d3b2d02       busybox
	6ad390cacd3d8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   483740644dca9       coredns-6f6b679f8f-22lbt
	236b4296ad713       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      13 minutes ago      Running             kube-proxy                1                   d12040956306f       kube-proxy-rcf49
	482a17643a2de       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   0a0904912f9d1       storage-provisioner
	27d104597d0ca       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   a45488cfda616       etcd-no-preload-278232
	123f84ccdc9cf       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      13 minutes ago      Running             kube-scheduler            1                   147c748ad560c       kube-scheduler-no-preload-278232
	cdac290df2d44       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      13 minutes ago      Running             kube-apiserver            1                   f5d20a4943041       kube-apiserver-no-preload-278232
	390aeac356048       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      13 minutes ago      Running             kube-controller-manager   1                   b5702869c3843       kube-controller-manager-no-preload-278232
	
	
	==> coredns [6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:42720 - 61003 "HINFO IN 4589887553472215587.3096284654120628867. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01082479s
	
	
	==> describe nodes <==
	Name:               no-preload-278232
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-278232
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411
	                    minikube.k8s.io/name=no-preload-278232
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T19_03_44_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 19:03:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-278232
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 19:26:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 19:24:14 +0000   Mon, 19 Aug 2024 19:03:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 19:24:14 +0000   Mon, 19 Aug 2024 19:03:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 19:24:14 +0000   Mon, 19 Aug 2024 19:03:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 19:24:14 +0000   Mon, 19 Aug 2024 19:13:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.106
	  Hostname:    no-preload-278232
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a659604399814453bc7f22780393e1fd
	  System UUID:                a6596043-9981-4453-bc7f-22780393e1fd
	  Boot ID:                    1511af4e-0834-4565-8331-154ab7841607
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-6f6b679f8f-22lbt                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	  kube-system                 etcd-no-preload-278232                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23m
	  kube-system                 kube-apiserver-no-preload-278232             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-no-preload-278232    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-rcf49                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-no-preload-278232             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 metrics-server-6867b74b74-vxwrs              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node no-preload-278232 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node no-preload-278232 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node no-preload-278232 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     23m                kubelet          Node no-preload-278232 status is now: NodeHasSufficientPID
	  Normal  Starting                 23m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  23m                kubelet          Node no-preload-278232 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m                kubelet          Node no-preload-278232 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                23m                kubelet          Node no-preload-278232 status is now: NodeReady
	  Normal  RegisteredNode           23m                node-controller  Node no-preload-278232 event: Registered Node no-preload-278232 in Controller
	  Normal  CIDRAssignmentFailed     23m                cidrAllocator    Node no-preload-278232 status is now: CIDRAssignmentFailed
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-278232 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-278232 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-278232 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-278232 event: Registered Node no-preload-278232 in Controller
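The Events above show two complete kubelet start-ups (roughly 23m and 13m before this dump was taken), which lines up with the stop-then-restart that this StartStop test performs; the lone CIDRAssignmentFailed event is followed by a successful RegisteredNode, so the node still reached Ready. As a sketch, the same events can be re-queried with the kubectl context used elsewhere in this report:

    kubectl --context no-preload-278232 get events -A --field-selector involvedObject.name=no-preload-278232 --sort-by=.lastTimestamp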
	
	
	==> dmesg <==
	[Aug19 19:12] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052265] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041214] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.094927] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Aug19 19:13] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.604362] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.743009] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.062647] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055382] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +0.182332] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +0.130167] systemd-fstab-generator[685]: Ignoring "noauto" option for root device
	[  +0.283984] systemd-fstab-generator[714]: Ignoring "noauto" option for root device
	[ +15.882927] systemd-fstab-generator[1314]: Ignoring "noauto" option for root device
	[  +0.068974] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.829138] systemd-fstab-generator[1434]: Ignoring "noauto" option for root device
	[  +4.079921] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.938027] systemd-fstab-generator[2064]: Ignoring "noauto" option for root device
	[  +3.313906] kauditd_printk_skb: 61 callbacks suppressed
	[Aug19 19:14] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a] <==
	{"level":"info","ts":"2024-08-19T19:13:30.040149Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T19:13:30.047097Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-19T19:13:30.047184Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.106:2380"}
	{"level":"info","ts":"2024-08-19T19:13:30.047304Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.106:2380"}
	{"level":"info","ts":"2024-08-19T19:13:30.047782Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"133f99d1dc1797cc","initial-advertise-peer-urls":["https://192.168.39.106:2380"],"listen-peer-urls":["https://192.168.39.106:2380"],"advertise-client-urls":["https://192.168.39.106:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.106:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-19T19:13:30.047833Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-19T19:13:30.853753Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"133f99d1dc1797cc is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-19T19:13:30.854349Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"133f99d1dc1797cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-19T19:13:30.854394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"133f99d1dc1797cc received MsgPreVoteResp from 133f99d1dc1797cc at term 2"}
	{"level":"info","ts":"2024-08-19T19:13:30.854433Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"133f99d1dc1797cc became candidate at term 3"}
	{"level":"info","ts":"2024-08-19T19:13:30.854458Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"133f99d1dc1797cc received MsgVoteResp from 133f99d1dc1797cc at term 3"}
	{"level":"info","ts":"2024-08-19T19:13:30.854488Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"133f99d1dc1797cc became leader at term 3"}
	{"level":"info","ts":"2024-08-19T19:13:30.854514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 133f99d1dc1797cc elected leader 133f99d1dc1797cc at term 3"}
	{"level":"info","ts":"2024-08-19T19:13:30.901288Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"133f99d1dc1797cc","local-member-attributes":"{Name:no-preload-278232 ClientURLs:[https://192.168.39.106:2379]}","request-path":"/0/members/133f99d1dc1797cc/attributes","cluster-id":"db63b0e3647a827","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-19T19:13:30.901565Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T19:13:30.902191Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T19:13:30.903751Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T19:13:30.903785Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T19:13:30.904937Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T19:13:30.905250Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T19:13:30.909131Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T19:13:30.910180Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.106:2379"}
	{"level":"info","ts":"2024-08-19T19:23:30.941511Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":881}
	{"level":"info","ts":"2024-08-19T19:23:30.953209Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":881,"took":"10.773333ms","hash":1012546118,"current-db-size-bytes":2748416,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2748416,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-08-19T19:23:30.953324Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1012546118,"revision":881,"compact-revision":-1}
	
	
	==> kernel <==
	 19:27:02 up 14 min,  0 users,  load average: 0.00, 0.18, 0.20
	Linux no-preload-278232 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094] <==
	W0819 19:23:33.518247       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 19:23:33.518442       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0819 19:23:33.519564       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 19:23:33.519594       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0819 19:24:33.520405       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 19:24:33.520740       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0819 19:24:33.520822       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 19:24:33.520873       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0819 19:24:33.522222       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 19:24:33.522303       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0819 19:26:33.523161       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 19:26:33.523496       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0819 19:26:33.523610       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 19:26:33.523740       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0819 19:26:33.524727       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 19:26:33.524842       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
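Every error in this kube-apiserver block has the same root cause: the v1beta1.metrics.k8s.io APIService is registered, but the metrics-server pod backing it is not serving, so the OpenAPI aggregator keeps receiving 503s and requeues. One way to see that condition directly (same context assumed):

    kubectl --context no-preload-278232 get apiservice v1beta1.metrics.k8s.io -o jsonpath='{.status.conditions[?(@.type=="Available")]}'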
	
	
	==> kube-controller-manager [390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346] <==
	E0819 19:21:36.059305       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:21:36.566603       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 19:22:06.065610       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:22:06.575041       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 19:22:36.072778       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:22:36.582227       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 19:23:06.079418       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:23:06.590091       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 19:23:36.090715       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:23:36.597437       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 19:24:06.098428       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:24:06.604786       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0819 19:24:14.889683       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-278232"
	E0819 19:24:36.105354       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:24:36.612537       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0819 19:24:52.983393       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="305.181µs"
	E0819 19:25:06.110866       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:25:06.621989       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0819 19:25:06.982701       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="207.059µs"
	E0819 19:25:36.119453       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:25:36.629319       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 19:26:06.130117       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:26:06.636801       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 19:26:36.139419       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:26:36.647089       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 19:13:33.696193       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 19:13:33.706203       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.106"]
	E0819 19:13:33.706278       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 19:13:33.745044       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 19:13:33.745097       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 19:13:33.745126       1 server_linux.go:169] "Using iptables Proxier"
	I0819 19:13:33.747538       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 19:13:33.747880       1 server.go:483] "Version info" version="v1.31.0"
	I0819 19:13:33.747920       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 19:13:33.749618       1 config.go:197] "Starting service config controller"
	I0819 19:13:33.749695       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 19:13:33.749714       1 config.go:104] "Starting endpoint slice config controller"
	I0819 19:13:33.749718       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 19:13:33.751397       1 config.go:326] "Starting node config controller"
	I0819 19:13:33.751453       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 19:13:33.850707       1 shared_informer.go:320] Caches are synced for service config
	I0819 19:13:33.850840       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 19:13:33.851817       1 shared_informer.go:320] Caches are synced for node config
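The nftables cleanup errors at the top of this block appear harmless on this guest kernel: kube-proxy only tries to delete leftover nft tables, the kernel reports the operation as unsupported, and the remaining lines show it falling back to the iptables proxier and syncing its caches normally. As a sketch, the resulting rules can be inspected from the node with the minikube binary used in this run:

    out/minikube-linux-amd64 -p no-preload-278232 ssh "sudo iptables -t nat -S KUBE-SERVICES"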
	
	
	==> kube-scheduler [123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f] <==
	I0819 19:13:30.715440       1 serving.go:386] Generated self-signed cert in-memory
	W0819 19:13:32.423973       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0819 19:13:32.424161       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0819 19:13:32.424253       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0819 19:13:32.424287       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0819 19:13:32.502350       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0819 19:13:32.502478       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 19:13:32.504583       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0819 19:13:32.506966       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0819 19:13:32.509711       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 19:13:32.506985       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0819 19:13:32.610988       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 19:25:49 no-preload-278232 kubelet[1441]: E0819 19:25:49.139352    1441 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095549138434765,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:25:59 no-preload-278232 kubelet[1441]: E0819 19:25:59.143143    1441 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095559142159634,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:25:59 no-preload-278232 kubelet[1441]: E0819 19:25:59.143733    1441 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095559142159634,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:25:59 no-preload-278232 kubelet[1441]: E0819 19:25:59.966398    1441 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-vxwrs" podUID="e8b74128-b393-4f0f-90fe-e05f20d54acd"
	Aug 19 19:26:09 no-preload-278232 kubelet[1441]: E0819 19:26:09.147215    1441 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095569146399194,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:26:09 no-preload-278232 kubelet[1441]: E0819 19:26:09.147349    1441 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095569146399194,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:26:14 no-preload-278232 kubelet[1441]: E0819 19:26:14.966417    1441 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-vxwrs" podUID="e8b74128-b393-4f0f-90fe-e05f20d54acd"
	Aug 19 19:26:19 no-preload-278232 kubelet[1441]: E0819 19:26:19.149272    1441 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095579148986090,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:26:19 no-preload-278232 kubelet[1441]: E0819 19:26:19.149313    1441 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095579148986090,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:26:27 no-preload-278232 kubelet[1441]: E0819 19:26:27.966487    1441 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-vxwrs" podUID="e8b74128-b393-4f0f-90fe-e05f20d54acd"
	Aug 19 19:26:28 no-preload-278232 kubelet[1441]: E0819 19:26:28.987540    1441 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 19:26:28 no-preload-278232 kubelet[1441]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 19:26:28 no-preload-278232 kubelet[1441]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 19:26:28 no-preload-278232 kubelet[1441]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 19:26:28 no-preload-278232 kubelet[1441]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 19:26:29 no-preload-278232 kubelet[1441]: E0819 19:26:29.153126    1441 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095589152244706,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:26:29 no-preload-278232 kubelet[1441]: E0819 19:26:29.153174    1441 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095589152244706,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:26:38 no-preload-278232 kubelet[1441]: E0819 19:26:38.966155    1441 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-vxwrs" podUID="e8b74128-b393-4f0f-90fe-e05f20d54acd"
	Aug 19 19:26:39 no-preload-278232 kubelet[1441]: E0819 19:26:39.154516    1441 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095599154283799,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:26:39 no-preload-278232 kubelet[1441]: E0819 19:26:39.154606    1441 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095599154283799,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:26:49 no-preload-278232 kubelet[1441]: E0819 19:26:49.156160    1441 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095609155713896,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:26:49 no-preload-278232 kubelet[1441]: E0819 19:26:49.156685    1441 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095609155713896,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:26:53 no-preload-278232 kubelet[1441]: E0819 19:26:53.965227    1441 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-vxwrs" podUID="e8b74128-b393-4f0f-90fe-e05f20d54acd"
	Aug 19 19:26:59 no-preload-278232 kubelet[1441]: E0819 19:26:59.158375    1441 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095619158076884,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:26:59 no-preload-278232 kubelet[1441]: E0819 19:26:59.158418    1441 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095619158076884,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
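Two patterns repeat through the kubelet log: the eviction manager cannot derive dedicated image-filesystem stats from the CRI-O ImageFsInfo response (logged on every sync), and metrics-server sits in ImagePullBackOff because its image is pinned to the placeholder registry fake.domain, which matches the metrics-server pod reported as non-running in the post-mortem output further down. The configured image can be read back directly (deployment name inferred from the pod name above):

    kubectl --context no-preload-278232 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'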
	
	
	==> storage-provisioner [482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6] <==
	I0819 19:13:33.624392       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0819 19:14:03.629608       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd] <==
	I0819 19:14:04.276605       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 19:14:04.286381       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 19:14:04.286480       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 19:14:21.692349       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 19:14:21.692512       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-278232_5fb11e2d-1d74-4fc5-a305-ed2c8e2d8a63!
	I0819 19:14:21.693776       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"68e9d6a8-f7ee-4060-9564-5e9b63dc1edd", APIVersion:"v1", ResourceVersion:"661", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-278232_5fb11e2d-1d74-4fc5-a305-ed2c8e2d8a63 became leader
	I0819 19:14:21.793062       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-278232_5fb11e2d-1d74-4fc5-a305-ed2c8e2d8a63!
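Taken together, the two storage-provisioner blocks describe a single restart: the first instance dies after a ~30s i/o timeout reaching the in-cluster API VIP (10.96.0.1:443), and its replacement then initializes, wins the k8s.io-minikube-hostpath leader election and starts the provisioner controller. The election record lives on the Endpoints object named in the event above and can be inspected with:

    kubectl --context no-preload-278232 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml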
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-278232 -n no-preload-278232
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-278232 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-vxwrs
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-278232 describe pod metrics-server-6867b74b74-vxwrs
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-278232 describe pod metrics-server-6867b74b74-vxwrs: exit status 1 (67.855553ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-vxwrs" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-278232 describe pod metrics-server-6867b74b74-vxwrs: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.29s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
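The long run of identical warnings that follows records repeated failed polls: the old-k8s-version apiserver at 192.168.50.32:8443 refused connections for most of the 9-minute wait, so the dashboard pod list could never be retrieved. A direct probe of the same endpoint from the test host, which can reach the KVM guest network, would show the same symptom, e.g.:

    curl -k https://192.168.50.32:8443/healthz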
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
E0819 19:21:07.728816  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kindnet-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:21:07.808267  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/calico-571803/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
E0819 19:21:29.164711  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/custom-flannel-571803/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
E0819 19:21:57.573785  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/flannel-571803/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
E0819 19:22:08.513761  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/bridge-571803/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
E0819 19:22:10.114992  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
E0819 19:22:30.871932  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/calico-571803/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
E0819 19:22:52.230993  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/custom-flannel-571803/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
E0819 19:23:20.639061  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/flannel-571803/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
E0819 19:23:21.654687  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/enable-default-cni-571803/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
E0819 19:23:31.580359  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/bridge-571803/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
E0819 19:24:14.028585  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/auto-571803/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
E0819 19:24:44.664255  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kindnet-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:24:44.720754  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/enable-default-cni-571803/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
    (the warning above was emitted 40 consecutive times while the apiserver at 192.168.50.32:8443 refused connections)
E0819 19:25:24.364942  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/functional-499773/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
    (the warning above was emitted 43 consecutive times while the apiserver at 192.168.50.32:8443 refused connections)
E0819 19:26:07.808581  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/calico-571803/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
    (the warning above was emitted 22 consecutive times while the apiserver at 192.168.50.32:8443 refused connections)
E0819 19:26:29.164667  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/custom-flannel-571803/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
    (the warning above was emitted 28 consecutive times while the apiserver at 192.168.50.32:8443 refused connections)
E0819 19:26:57.573529  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/flannel-571803/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
    (the warning above was emitted 11 consecutive times while the apiserver at 192.168.50.32:8443 refused connections)
E0819 19:27:08.514395  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/bridge-571803/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
E0819 19:27:10.114408  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
E0819 19:28:21.653885  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/enable-default-cni-571803/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
E0819 19:28:27.439940  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/functional-499773/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
E0819 19:29:14.028773  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/auto-571803/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
E0819 19:29:44.664110  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kindnet-571803/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-104669 -n old-k8s-version-104669
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-104669 -n old-k8s-version-104669: exit status 2 (247.394019ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-104669" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
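For reference, a rough manual equivalent of the wait that timed out above (a sketch only, not produced by the harness; the kubectl context name is assumed to match the minikube profile, and the namespace and label selector are taken from the warnings above) would be:

	kubectl --context old-k8s-version-104669 -n kubernetes-dashboard \
	  wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m0s

With the apiserver stopped, such a check would fail with the same "connection refused" errors recorded above.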
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-104669 -n old-k8s-version-104669
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-104669 -n old-k8s-version-104669: exit status 2 (241.733419ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-104669 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-104669 logs -n 25: (1.664712967s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-571803                           | enable-default-cni-571803    | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | sudo cat                                               |                              |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-571803                           | enable-default-cni-571803    | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | sudo containerd config dump                            |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-571803                           | enable-default-cni-571803    | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | sudo systemctl status crio                             |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-571803                           | enable-default-cni-571803    | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | sudo systemctl cat crio                                |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-571803                           | enable-default-cni-571803    | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |         |                     |                     |
	|         | \;                                                     |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-571803                           | enable-default-cni-571803    | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | sudo crio config                                       |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-571803                           | enable-default-cni-571803    | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	| delete  | -p                                                     | disable-driver-mounts-737091 | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | disable-driver-mounts-737091                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-982795 | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:04 UTC |
	|         | default-k8s-diff-port-982795                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-278232             | no-preload-278232            | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC | 19 Aug 24 19:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-278232                                   | no-preload-278232            | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-982795  | default-k8s-diff-port-982795 | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC | 19 Aug 24 19:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-982795 | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC |                     |
	|         | default-k8s-diff-port-982795                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-024748            | embed-certs-024748           | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC | 19 Aug 24 19:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-024748                                  | embed-certs-024748           | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-104669        | old-k8s-version-104669       | jenkins | v1.33.1 | 19 Aug 24 19:06 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-278232                  | no-preload-278232            | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-278232                                   | no-preload-278232            | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC | 19 Aug 24 19:18 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-982795       | default-k8s-diff-port-982795 | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-024748                 | embed-certs-024748           | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-982795 | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC | 19 Aug 24 19:17 UTC |
	|         | default-k8s-diff-port-982795                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-024748                                  | embed-certs-024748           | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC | 19 Aug 24 19:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-104669                              | old-k8s-version-104669       | jenkins | v1.33.1 | 19 Aug 24 19:08 UTC | 19 Aug 24 19:08 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-104669             | old-k8s-version-104669       | jenkins | v1.33.1 | 19 Aug 24 19:08 UTC | 19 Aug 24 19:08 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-104669                              | old-k8s-version-104669       | jenkins | v1.33.1 | 19 Aug 24 19:08 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 19:08:30
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 19:08:30.532545  438716 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:08:30.532649  438716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:08:30.532657  438716 out.go:358] Setting ErrFile to fd 2...
	I0819 19:08:30.532661  438716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:08:30.532811  438716 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-372744/.minikube/bin
	I0819 19:08:30.533379  438716 out.go:352] Setting JSON to false
	I0819 19:08:30.534373  438716 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10253,"bootTime":1724084257,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 19:08:30.534451  438716 start.go:139] virtualization: kvm guest
	I0819 19:08:30.536658  438716 out.go:177] * [old-k8s-version-104669] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 19:08:30.537921  438716 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 19:08:30.537959  438716 notify.go:220] Checking for updates...
	I0819 19:08:30.540501  438716 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 19:08:30.541864  438716 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 19:08:30.543170  438716 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 19:08:30.544395  438716 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 19:08:30.545614  438716 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 19:08:30.547072  438716 config.go:182] Loaded profile config "old-k8s-version-104669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0819 19:08:30.547468  438716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:08:30.547570  438716 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:08:30.563059  438716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34139
	I0819 19:08:30.563506  438716 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:08:30.564068  438716 main.go:141] libmachine: Using API Version  1
	I0819 19:08:30.564091  438716 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:08:30.564474  438716 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:08:30.564719  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:08:30.566599  438716 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0819 19:08:30.568124  438716 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 19:08:30.568503  438716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:08:30.568541  438716 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:08:30.583805  438716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35313
	I0819 19:08:30.584314  438716 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:08:30.584805  438716 main.go:141] libmachine: Using API Version  1
	I0819 19:08:30.584827  438716 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:08:30.585131  438716 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:08:30.585320  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:08:30.621020  438716 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 19:08:30.622137  438716 start.go:297] selected driver: kvm2
	I0819 19:08:30.622158  438716 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-104669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-104669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:08:30.622252  438716 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 19:08:30.622998  438716 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 19:08:30.623082  438716 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19468-372744/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 19:08:30.638616  438716 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 19:08:30.638998  438716 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 19:08:30.639047  438716 cni.go:84] Creating CNI manager for ""
	I0819 19:08:30.639059  438716 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:08:30.639097  438716 start.go:340] cluster config:
	{Name:old-k8s-version-104669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-104669 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:08:30.639243  438716 iso.go:125] acquiring lock: {Name:mk4c0ac1c3202b1a296739df622960e7a0bd8566 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 19:08:30.641823  438716 out.go:177] * Starting "old-k8s-version-104669" primary control-plane node in "old-k8s-version-104669" cluster
	I0819 19:08:30.915976  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:08:30.643167  438716 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 19:08:30.643197  438716 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0819 19:08:30.643205  438716 cache.go:56] Caching tarball of preloaded images
	I0819 19:08:30.643300  438716 preload.go:172] Found /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 19:08:30.643311  438716 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0819 19:08:30.643409  438716 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/config.json ...
	I0819 19:08:30.643583  438716 start.go:360] acquireMachinesLock for old-k8s-version-104669: {Name:mk24ba67a747357e9ce40f1e460d2bb0bc59cc75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 19:08:33.988031  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:08:40.067999  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:08:43.140051  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:08:49.219991  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:08:52.292013  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:08:58.371952  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:01.444061  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:07.523958  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:10.595977  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:16.675955  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:19.748037  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:25.828064  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:28.899972  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:34.980044  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:38.052066  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:44.131960  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:47.203926  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:53.283992  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:56.355952  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:02.435994  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:05.508042  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:11.587960  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:14.660027  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:20.740007  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:23.811991  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:29.891998  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:32.963959  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:39.043942  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:42.116029  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:48.195984  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:51.267954  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:57.347922  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:00.419952  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:06.499978  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:09.572013  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:15.652066  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:18.724012  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:24.804001  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:27.875961  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:33.956046  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:37.027998  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:43.108014  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:46.179987  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:49.184190  438245 start.go:364] duration metric: took 4m21.835882225s to acquireMachinesLock for "default-k8s-diff-port-982795"
	I0819 19:11:49.184280  438245 start.go:96] Skipping create...Using existing machine configuration
	I0819 19:11:49.184296  438245 fix.go:54] fixHost starting: 
	I0819 19:11:49.184628  438245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:11:49.184661  438245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:11:49.200544  438245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38241
	I0819 19:11:49.200994  438245 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:11:49.201530  438245 main.go:141] libmachine: Using API Version  1
	I0819 19:11:49.201560  438245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:11:49.201953  438245 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:11:49.202151  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:11:49.202296  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetState
	I0819 19:11:49.203841  438245 fix.go:112] recreateIfNeeded on default-k8s-diff-port-982795: state=Stopped err=<nil>
	I0819 19:11:49.203875  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	W0819 19:11:49.204042  438245 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 19:11:49.205721  438245 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-982795" ...
	I0819 19:11:49.181717  438001 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:11:49.181755  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetMachineName
	I0819 19:11:49.182097  438001 buildroot.go:166] provisioning hostname "no-preload-278232"
	I0819 19:11:49.182131  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetMachineName
	I0819 19:11:49.182392  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:11:49.184006  438001 machine.go:96] duration metric: took 4m37.423775019s to provisionDockerMachine
	I0819 19:11:49.184078  438001 fix.go:56] duration metric: took 4m37.445408913s for fixHost
	I0819 19:11:49.184091  438001 start.go:83] releasing machines lock for "no-preload-278232", held for 4m37.44544277s
	W0819 19:11:49.184116  438001 start.go:714] error starting host: provision: host is not running
	W0819 19:11:49.184274  438001 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0819 19:11:49.184288  438001 start.go:729] Will try again in 5 seconds ...
	I0819 19:11:49.206739  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .Start
	I0819 19:11:49.206892  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Ensuring networks are active...
	I0819 19:11:49.207586  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Ensuring network default is active
	I0819 19:11:49.207947  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Ensuring network mk-default-k8s-diff-port-982795 is active
	I0819 19:11:49.208368  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Getting domain xml...
	I0819 19:11:49.209114  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Creating domain...
	I0819 19:11:50.421290  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting to get IP...
	I0819 19:11:50.422082  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:50.422490  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:50.422562  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:50.422473  439403 retry.go:31] will retry after 273.434317ms: waiting for machine to come up
	I0819 19:11:50.698167  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:50.698598  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:50.698635  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:50.698569  439403 retry.go:31] will retry after 367.841325ms: waiting for machine to come up
	I0819 19:11:51.068401  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:51.068996  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:51.069019  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:51.068942  439403 retry.go:31] will retry after 460.053559ms: waiting for machine to come up
	I0819 19:11:51.530228  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:51.530700  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:51.530730  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:51.530636  439403 retry.go:31] will retry after 498.222116ms: waiting for machine to come up
	I0819 19:11:52.030322  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:52.030771  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:52.030808  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:52.030710  439403 retry.go:31] will retry after 750.75175ms: waiting for machine to come up
	I0819 19:11:54.186765  438001 start.go:360] acquireMachinesLock for no-preload-278232: {Name:mk24ba67a747357e9ce40f1e460d2bb0bc59cc75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 19:11:52.782638  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:52.783001  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:52.783027  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:52.782952  439403 retry.go:31] will retry after 576.883195ms: waiting for machine to come up
	I0819 19:11:53.361702  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:53.362105  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:53.362138  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:53.362035  439403 retry.go:31] will retry after 900.512446ms: waiting for machine to come up
	I0819 19:11:54.264656  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:54.265032  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:54.265052  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:54.264984  439403 retry.go:31] will retry after 1.339005367s: waiting for machine to come up
	I0819 19:11:55.605816  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:55.606348  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:55.606378  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:55.606304  439403 retry.go:31] will retry after 1.517824531s: waiting for machine to come up
	I0819 19:11:57.126027  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:57.126400  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:57.126426  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:57.126340  439403 retry.go:31] will retry after 2.220939365s: waiting for machine to come up
	I0819 19:11:59.348649  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:59.349041  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:59.349072  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:59.348987  439403 retry.go:31] will retry after 2.830298687s: waiting for machine to come up
	I0819 19:12:02.182934  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:02.183398  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:12:02.183422  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:12:02.183348  439403 retry.go:31] will retry after 2.302725829s: waiting for machine to come up
	I0819 19:12:04.487648  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:04.488074  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:12:04.488108  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:12:04.488016  439403 retry.go:31] will retry after 2.932250361s: waiting for machine to come up
	I0819 19:12:08.736669  438295 start.go:364] duration metric: took 4m39.596501254s to acquireMachinesLock for "embed-certs-024748"
	I0819 19:12:08.736755  438295 start.go:96] Skipping create...Using existing machine configuration
	I0819 19:12:08.736776  438295 fix.go:54] fixHost starting: 
	I0819 19:12:08.737277  438295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:08.737326  438295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:08.754873  438295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36829
	I0819 19:12:08.755301  438295 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:08.755839  438295 main.go:141] libmachine: Using API Version  1
	I0819 19:12:08.755866  438295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:08.756184  438295 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:08.756383  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:08.756525  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetState
	I0819 19:12:08.758092  438295 fix.go:112] recreateIfNeeded on embed-certs-024748: state=Stopped err=<nil>
	I0819 19:12:08.758134  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	W0819 19:12:08.758299  438295 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 19:12:08.760922  438295 out.go:177] * Restarting existing kvm2 VM for "embed-certs-024748" ...
	I0819 19:12:08.762335  438295 main.go:141] libmachine: (embed-certs-024748) Calling .Start
	I0819 19:12:08.762509  438295 main.go:141] libmachine: (embed-certs-024748) Ensuring networks are active...
	I0819 19:12:08.763274  438295 main.go:141] libmachine: (embed-certs-024748) Ensuring network default is active
	I0819 19:12:08.763647  438295 main.go:141] libmachine: (embed-certs-024748) Ensuring network mk-embed-certs-024748 is active
	I0819 19:12:08.764057  438295 main.go:141] libmachine: (embed-certs-024748) Getting domain xml...
	I0819 19:12:08.764765  438295 main.go:141] libmachine: (embed-certs-024748) Creating domain...
	I0819 19:12:07.424132  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.424589  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Found IP for machine: 192.168.61.48
	I0819 19:12:07.424615  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Reserving static IP address...
	I0819 19:12:07.424634  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has current primary IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.425178  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Reserved static IP address: 192.168.61.48
	I0819 19:12:07.425205  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for SSH to be available...
	I0819 19:12:07.425237  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-982795", mac: "52:54:00:d4:19:cd", ip: "192.168.61.48"} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:07.425283  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | skip adding static IP to network mk-default-k8s-diff-port-982795 - found existing host DHCP lease matching {name: "default-k8s-diff-port-982795", mac: "52:54:00:d4:19:cd", ip: "192.168.61.48"}
	I0819 19:12:07.425304  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | Getting to WaitForSSH function...
	I0819 19:12:07.427600  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.427969  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:07.428001  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.428179  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | Using SSH client type: external
	I0819 19:12:07.428245  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | Using SSH private key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa (-rw-------)
	I0819 19:12:07.428297  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.48 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 19:12:07.428321  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | About to run SSH command:
	I0819 19:12:07.428339  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | exit 0
	I0819 19:12:07.547727  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | SSH cmd err, output: <nil>: 
	I0819 19:12:07.548095  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetConfigRaw
	I0819 19:12:07.548741  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetIP
	I0819 19:12:07.551308  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.551700  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:07.551733  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.551967  438245 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795/config.json ...
	I0819 19:12:07.552164  438245 machine.go:93] provisionDockerMachine start ...
	I0819 19:12:07.552186  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:12:07.552427  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:07.554782  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.555062  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:07.555080  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.555219  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:07.555427  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:07.555586  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:07.555767  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:07.555912  438245 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:07.556152  438245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0819 19:12:07.556168  438245 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 19:12:07.655996  438245 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 19:12:07.656027  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetMachineName
	I0819 19:12:07.656301  438245 buildroot.go:166] provisioning hostname "default-k8s-diff-port-982795"
	I0819 19:12:07.656329  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetMachineName
	I0819 19:12:07.656530  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:07.658956  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.659311  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:07.659344  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.659439  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:07.659617  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:07.659813  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:07.659937  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:07.660112  438245 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:07.660291  438245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0819 19:12:07.660302  438245 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-982795 && echo "default-k8s-diff-port-982795" | sudo tee /etc/hostname
	I0819 19:12:07.773590  438245 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-982795
	
	I0819 19:12:07.773615  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:07.776994  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.777360  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:07.777399  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.777580  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:07.777860  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:07.778060  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:07.778273  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:07.778457  438245 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:07.778665  438245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0819 19:12:07.778687  438245 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-982795' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-982795/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-982795' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 19:12:07.884662  438245 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:12:07.884718  438245 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19468-372744/.minikube CaCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19468-372744/.minikube}
	I0819 19:12:07.884751  438245 buildroot.go:174] setting up certificates
	I0819 19:12:07.884768  438245 provision.go:84] configureAuth start
	I0819 19:12:07.884782  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetMachineName
	I0819 19:12:07.885101  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetIP
	I0819 19:12:07.887844  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.888262  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:07.888293  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.888439  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:07.890581  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.890977  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:07.891005  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.891136  438245 provision.go:143] copyHostCerts
	I0819 19:12:07.891219  438245 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem, removing ...
	I0819 19:12:07.891240  438245 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 19:12:07.891306  438245 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem (1082 bytes)
	I0819 19:12:07.891398  438245 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem, removing ...
	I0819 19:12:07.891406  438245 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 19:12:07.891430  438245 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem (1123 bytes)
	I0819 19:12:07.891487  438245 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem, removing ...
	I0819 19:12:07.891494  438245 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 19:12:07.891517  438245 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem (1675 bytes)
	I0819 19:12:07.891570  438245 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-982795 san=[127.0.0.1 192.168.61.48 default-k8s-diff-port-982795 localhost minikube]
	I0819 19:12:08.083963  438245 provision.go:177] copyRemoteCerts
	I0819 19:12:08.084024  438245 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 19:12:08.084086  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:08.086637  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.086961  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:08.087005  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.087144  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:08.087357  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:08.087507  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:08.087694  438245 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa Username:docker}
	I0819 19:12:08.166312  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 19:12:08.194124  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0819 19:12:08.221817  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 19:12:08.249674  438245 provision.go:87] duration metric: took 364.885827ms to configureAuth
	I0819 19:12:08.249709  438245 buildroot.go:189] setting minikube options for container-runtime
	I0819 19:12:08.249891  438245 config.go:182] Loaded profile config "default-k8s-diff-port-982795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:12:08.249983  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:08.253045  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.253438  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:08.253469  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.253647  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:08.253856  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:08.254071  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:08.254266  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:08.254481  438245 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:08.254700  438245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0819 19:12:08.254722  438245 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 19:12:08.508775  438245 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 19:12:08.508808  438245 machine.go:96] duration metric: took 956.629475ms to provisionDockerMachine
	I0819 19:12:08.508824  438245 start.go:293] postStartSetup for "default-k8s-diff-port-982795" (driver="kvm2")
	I0819 19:12:08.508838  438245 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 19:12:08.508868  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:12:08.509214  438245 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 19:12:08.509259  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:08.512004  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.512341  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:08.512378  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.512517  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:08.512688  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:08.512867  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:08.513059  438245 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa Username:docker}
	I0819 19:12:08.594287  438245 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 19:12:08.598742  438245 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 19:12:08.598774  438245 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/addons for local assets ...
	I0819 19:12:08.598849  438245 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/files for local assets ...
	I0819 19:12:08.598943  438245 filesync.go:149] local asset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> 3800092.pem in /etc/ssl/certs
	I0819 19:12:08.599029  438245 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 19:12:08.608416  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:12:08.633880  438245 start.go:296] duration metric: took 125.036785ms for postStartSetup
	I0819 19:12:08.633930  438245 fix.go:56] duration metric: took 19.449641939s for fixHost
	I0819 19:12:08.633955  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:08.636729  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.637006  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:08.637030  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.637248  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:08.637483  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:08.637672  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:08.637791  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:08.637954  438245 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:08.638170  438245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0819 19:12:08.638186  438245 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 19:12:08.736519  438245 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724094728.710064462
	
	I0819 19:12:08.736540  438245 fix.go:216] guest clock: 1724094728.710064462
	I0819 19:12:08.736548  438245 fix.go:229] Guest: 2024-08-19 19:12:08.710064462 +0000 UTC Remote: 2024-08-19 19:12:08.633934039 +0000 UTC m=+281.422189217 (delta=76.130423ms)
	I0819 19:12:08.736568  438245 fix.go:200] guest clock delta is within tolerance: 76.130423ms
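The clock check above compares the guest's "date +%s.%N" output with the host's wall clock and accepts the machine when the drift is small. A minimal standalone Go sketch of that comparison, using the two timestamps from the log; the 2-second tolerance is an assumption for illustration, not minikube's exact constant:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Values copied from the log above: guest `date +%s.%N` vs. host wall clock.
		guest := time.Unix(1724094728, 710064462)
		host := time.Date(2024, 8, 19, 19, 12, 8, 633934039, time.UTC)

		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second // assumed tolerance, for illustration only
		fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= tolerance)
	}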
	I0819 19:12:08.736580  438245 start.go:83] releasing machines lock for "default-k8s-diff-port-982795", held for 19.552337255s
	I0819 19:12:08.736604  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:12:08.736918  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetIP
	I0819 19:12:08.739570  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.740030  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:08.740057  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.740222  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:12:08.740762  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:12:08.740960  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:12:08.741037  438245 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 19:12:08.741100  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:08.741185  438245 ssh_runner.go:195] Run: cat /version.json
	I0819 19:12:08.741206  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:08.743899  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.744037  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.744282  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:08.744304  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.744439  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:08.744576  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:08.744599  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:08.744607  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.744689  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:08.744786  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:08.744858  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:08.744923  438245 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa Username:docker}
	I0819 19:12:08.744997  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:08.745143  438245 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa Username:docker}
	I0819 19:12:08.820672  438245 ssh_runner.go:195] Run: systemctl --version
	I0819 19:12:08.847046  438245 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 19:12:08.989725  438245 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 19:12:08.996607  438245 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 19:12:08.996680  438245 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 19:12:09.013017  438245 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 19:12:09.013067  438245 start.go:495] detecting cgroup driver to use...
	I0819 19:12:09.013144  438245 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 19:12:09.030338  438245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 19:12:09.044580  438245 docker.go:217] disabling cri-docker service (if available) ...
	I0819 19:12:09.044635  438245 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 19:12:09.058825  438245 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 19:12:09.073358  438245 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 19:12:09.194611  438245 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 19:12:09.333368  438245 docker.go:233] disabling docker service ...
	I0819 19:12:09.333446  438245 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 19:12:09.348775  438245 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 19:12:09.362911  438245 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 19:12:09.503015  438245 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 19:12:09.621246  438245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 19:12:09.638480  438245 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 19:12:09.659346  438245 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 19:12:09.659406  438245 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:09.672088  438245 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 19:12:09.672166  438245 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:09.683704  438245 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:09.694847  438245 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:09.706339  438245 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 19:12:09.718658  438245 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:09.730645  438245 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:09.750843  438245 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
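Each of the cri-o adjustments above is a sed rewrite of /etc/crio/crio.conf.d/02-crio.conf on the guest. Purely as an illustration of the first two substitutions (the sample input lines are assumptions about what the drop-in contains, not data from this run), an equivalent rewrite in Go:

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		// Hypothetical drop-in contents; the real file lives on the guest VM.
		conf := "pause_image = \"registry.k8s.io/pause:3.9\"\n" +
			"cgroup_manager = \"systemd\""

		// Equivalent of the pause_image and cgroup_manager sed substitutions run above.
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		fmt.Println(conf)
	}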
	I0819 19:12:09.762551  438245 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 19:12:09.772960  438245 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 19:12:09.773037  438245 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 19:12:09.788362  438245 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 19:12:09.798695  438245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:12:09.923389  438245 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 19:12:10.063317  438245 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 19:12:10.063413  438245 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 19:12:10.068449  438245 start.go:563] Will wait 60s for crictl version
	I0819 19:12:10.068540  438245 ssh_runner.go:195] Run: which crictl
	I0819 19:12:10.072807  438245 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 19:12:10.114058  438245 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 19:12:10.114151  438245 ssh_runner.go:195] Run: crio --version
	I0819 19:12:10.147919  438245 ssh_runner.go:195] Run: crio --version
	I0819 19:12:10.180009  438245 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 19:12:10.181218  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetIP
	I0819 19:12:10.184626  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:10.185015  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:10.185049  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:10.185243  438245 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0819 19:12:10.189653  438245 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
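The bash one-liner above rewrites /etc/hosts so that host.minikube.internal points at the gateway address 192.168.61.1. A hedged standalone Go sketch of the same rewrite; it only prints the result, whereas the real run stages a temp file and copies it back with sudo cp:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// Drop any existing host.minikube.internal entry, then append a fresh one.
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\thost.minikube.internal") {
				kept = append(kept, line)
			}
		}
		kept = append(kept, "192.168.61.1\thost.minikube.internal")
		fmt.Println(strings.Join(kept, "\n"))
	}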
	I0819 19:12:10.203439  438245 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-982795 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-982795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.48 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 19:12:10.203608  438245 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 19:12:10.203668  438245 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:12:10.241427  438245 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 19:12:10.241511  438245 ssh_runner.go:195] Run: which lz4
	I0819 19:12:10.245734  438245 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 19:12:10.250082  438245 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 19:12:10.250112  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 19:12:11.694285  438245 crio.go:462] duration metric: took 1.448590086s to copy over tarball
	I0819 19:12:11.694371  438245 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 19:12:10.028225  438295 main.go:141] libmachine: (embed-certs-024748) Waiting to get IP...
	I0819 19:12:10.029208  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:10.029696  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:10.029752  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:10.029666  439540 retry.go:31] will retry after 276.66184ms: waiting for machine to come up
	I0819 19:12:10.308339  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:10.308762  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:10.308804  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:10.308710  439540 retry.go:31] will retry after 279.376198ms: waiting for machine to come up
	I0819 19:12:10.590326  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:10.591084  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:10.591117  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:10.590861  439540 retry.go:31] will retry after 364.735563ms: waiting for machine to come up
	I0819 19:12:10.957592  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:10.958075  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:10.958100  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:10.958033  439540 retry.go:31] will retry after 384.275284ms: waiting for machine to come up
	I0819 19:12:11.343631  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:11.344169  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:11.344192  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:11.344125  439540 retry.go:31] will retry after 572.182522ms: waiting for machine to come up
	I0819 19:12:11.917660  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:11.918150  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:11.918179  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:11.918093  439540 retry.go:31] will retry after 767.807058ms: waiting for machine to come up
	I0819 19:12:12.687256  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:12.687782  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:12.687815  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:12.687728  439540 retry.go:31] will retry after 715.897037ms: waiting for machine to come up
	I0819 19:12:13.406041  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:13.406653  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:13.406690  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:13.406577  439540 retry.go:31] will retry after 1.301579737s: waiting for machine to come up
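The interleaved (embed-certs-024748) lines show a second profile polling its DHCP lease with a delay that grows and jitters between attempts. The sketch below only illustrates that retry pattern and is not minikube's actual retry.go; lookupIP is a hypothetical stand-in for the lease lookup:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP stands in for the DHCP-lease lookup the driver performs; in this
	// sketch it always fails so the backoff progression is visible.
	func lookupIP() (string, bool) { return "", false }

	func main() {
		base := 200 * time.Millisecond
		for attempt := 1; attempt <= 6; attempt++ {
			if ip, ok := lookupIP(); ok {
				fmt.Println("machine is up at", ip)
				return
			}
			// Grow the delay and add jitter, mimicking the "will retry after ..." lines.
			sleep := base + time.Duration(rand.Int63n(int64(base/2)))
			fmt.Printf("attempt %d: will retry after %v\n", attempt, sleep)
			time.Sleep(sleep)
			base = base * 3 / 2
		}
		fmt.Println("gave up waiting for an IP address")
	}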
	I0819 19:12:13.847779  438245 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.153373496s)
	I0819 19:12:13.847810  438245 crio.go:469] duration metric: took 2.153488101s to extract the tarball
	I0819 19:12:13.847817  438245 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 19:12:13.885520  438245 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:12:13.929775  438245 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 19:12:13.929809  438245 cache_images.go:84] Images are preloaded, skipping loading
	I0819 19:12:13.929838  438245 kubeadm.go:934] updating node { 192.168.61.48 8444 v1.31.0 crio true true} ...
	I0819 19:12:13.930019  438245 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-982795 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.48
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-982795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 19:12:13.930113  438245 ssh_runner.go:195] Run: crio config
	I0819 19:12:13.977098  438245 cni.go:84] Creating CNI manager for ""
	I0819 19:12:13.977123  438245 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:12:13.977136  438245 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 19:12:13.977176  438245 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.48 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-982795 NodeName:default-k8s-diff-port-982795 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 19:12:13.977382  438245 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.48
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-982795"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 19:12:13.977461  438245 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 19:12:13.987276  438245 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 19:12:13.987381  438245 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 19:12:13.996666  438245 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0819 19:12:14.013822  438245 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 19:12:14.030936  438245 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0819 19:12:14.048575  438245 ssh_runner.go:195] Run: grep 192.168.61.48	control-plane.minikube.internal$ /etc/hosts
	I0819 19:12:14.052809  438245 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.48	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:12:14.065177  438245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:12:14.185159  438245 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:12:14.202906  438245 certs.go:68] Setting up /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795 for IP: 192.168.61.48
	I0819 19:12:14.202934  438245 certs.go:194] generating shared ca certs ...
	I0819 19:12:14.202966  438245 certs.go:226] acquiring lock for ca certs: {Name:mk639e03f593e0bccac045f6e9f5ba3b96cc81e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:14.203184  438245 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key
	I0819 19:12:14.203266  438245 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key
	I0819 19:12:14.203282  438245 certs.go:256] generating profile certs ...
	I0819 19:12:14.203399  438245 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795/client.key
	I0819 19:12:14.203487  438245 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795/apiserver.key.a3c7a519
	I0819 19:12:14.203552  438245 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795/proxy-client.key
	I0819 19:12:14.203757  438245 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem (1338 bytes)
	W0819 19:12:14.203820  438245 certs.go:480] ignoring /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009_empty.pem, impossibly tiny 0 bytes
	I0819 19:12:14.203834  438245 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 19:12:14.203866  438245 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem (1082 bytes)
	I0819 19:12:14.203899  438245 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem (1123 bytes)
	I0819 19:12:14.203929  438245 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem (1675 bytes)
	I0819 19:12:14.203994  438245 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:12:14.205025  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 19:12:14.258243  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 19:12:14.295380  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 19:12:14.330511  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 19:12:14.358547  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0819 19:12:14.386938  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 19:12:14.415021  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 19:12:14.439531  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 19:12:14.463969  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 19:12:14.487638  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem --> /usr/share/ca-certificates/380009.pem (1338 bytes)
	I0819 19:12:14.511571  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /usr/share/ca-certificates/3800092.pem (1708 bytes)
	I0819 19:12:14.535223  438245 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 19:12:14.552922  438245 ssh_runner.go:195] Run: openssl version
	I0819 19:12:14.559078  438245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 19:12:14.570605  438245 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:14.575411  438245 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:14.575484  438245 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:14.581714  438245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 19:12:14.592896  438245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/380009.pem && ln -fs /usr/share/ca-certificates/380009.pem /etc/ssl/certs/380009.pem"
	I0819 19:12:14.604306  438245 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/380009.pem
	I0819 19:12:14.609139  438245 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:56 /usr/share/ca-certificates/380009.pem
	I0819 19:12:14.609212  438245 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/380009.pem
	I0819 19:12:14.615160  438245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/380009.pem /etc/ssl/certs/51391683.0"
	I0819 19:12:14.626010  438245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3800092.pem && ln -fs /usr/share/ca-certificates/3800092.pem /etc/ssl/certs/3800092.pem"
	I0819 19:12:14.636821  438245 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3800092.pem
	I0819 19:12:14.641308  438245 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:56 /usr/share/ca-certificates/3800092.pem
	I0819 19:12:14.641358  438245 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3800092.pem
	I0819 19:12:14.646898  438245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3800092.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 19:12:14.657905  438245 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 19:12:14.662780  438245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 19:12:14.668934  438245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 19:12:14.674693  438245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 19:12:14.680683  438245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 19:12:14.686689  438245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 19:12:14.692678  438245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
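Each openssl x509 -checkend 86400 call above asks whether a certificate will still be valid 24 hours from now. A rough Go equivalent of one such check; the path exists only on the guest VM, so this is illustrative rather than something to run on the host as-is:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		// Same path the log checks; reading it requires access to the guest's filesystem.
		path := "/var/lib/minikube/certs/apiserver-kubelet-client.crt"
		data, err := os.ReadFile(path)
		if err != nil {
			fmt.Println("read:", err)
			return
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Println("no PEM block found")
			return
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Println("parse:", err)
			return
		}
		// Rough equivalent of `openssl x509 -checkend 86400`:
		// is the certificate still valid 24 hours from now?
		ok := time.Now().Add(86400 * time.Second).Before(cert.NotAfter)
		fmt.Printf("%s valid for the next 24h: %v\n", path, ok)
	}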
	I0819 19:12:14.698784  438245 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-982795 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-982795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.48 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:12:14.698930  438245 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 19:12:14.699006  438245 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:12:14.740881  438245 cri.go:89] found id: ""
	I0819 19:12:14.740964  438245 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 19:12:14.751589  438245 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 19:12:14.751613  438245 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 19:12:14.751665  438245 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 19:12:14.761837  438245 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:12:14.762870  438245 kubeconfig.go:125] found "default-k8s-diff-port-982795" server: "https://192.168.61.48:8444"
	I0819 19:12:14.765176  438245 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 19:12:14.775114  438245 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.48
	I0819 19:12:14.775147  438245 kubeadm.go:1160] stopping kube-system containers ...
	I0819 19:12:14.775161  438245 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 19:12:14.775228  438245 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:12:14.811373  438245 cri.go:89] found id: ""
	I0819 19:12:14.811442  438245 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 19:12:14.829656  438245 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:12:14.840215  438245 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:12:14.840236  438245 kubeadm.go:157] found existing configuration files:
	
	I0819 19:12:14.840288  438245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0819 19:12:14.850017  438245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:12:14.850075  438245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:12:14.860060  438245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0819 19:12:14.869589  438245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:12:14.869645  438245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:12:14.879249  438245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0819 19:12:14.888475  438245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:12:14.888532  438245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:12:14.898151  438245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0819 19:12:14.907628  438245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:12:14.907737  438245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
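	Each grep above looks for the expected control-plane endpoint in one of the kubeadm-generated kubeconfigs; when the file is missing or does not mention that endpoint, the file is removed so the init phases that follow can regenerate it. A small Go sketch of that keep-or-remove decision, assuming plain substring matching is good enough for illustration:

package main

import (
	"fmt"
	"os"
	"strings"
)

// removeIfMissingEndpoint keeps a kubeconfig only if it already points at the
// expected control-plane endpoint; otherwise it deletes the file so that
// `kubeadm init phase kubeconfig` can write a fresh one, which is the effect
// of the grep-then-rm sequence in the log above.
func removeIfMissingEndpoint(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		if os.IsNotExist(err) {
			return nil // file absent, nothing to clean up (grep exits 2 above)
		}
		return err
	}
	if strings.Contains(string(data), endpoint) {
		return nil // config already targets the right endpoint, keep it
	}
	return os.Remove(path)
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8444"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := removeIfMissingEndpoint(f, endpoint); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}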
	I0819 19:12:14.917581  438245 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 19:12:14.927119  438245 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:15.037162  438245 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:16.355430  438245 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.318225023s)
	I0819 19:12:16.355461  438245 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:16.566565  438245 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:16.649402  438245 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:16.775956  438245 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:12:16.776067  438245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:14.709988  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:14.710397  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:14.710429  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:14.710338  439540 retry.go:31] will retry after 1.420823505s: waiting for machine to come up
	I0819 19:12:16.133160  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:16.133558  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:16.133587  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:16.133531  439540 retry.go:31] will retry after 1.71697779s: waiting for machine to come up
	I0819 19:12:17.852342  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:17.852884  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:17.852922  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:17.852836  439540 retry.go:31] will retry after 2.816782354s: waiting for machine to come up
	I0819 19:12:17.277067  438245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:17.777027  438245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:17.797513  438245 api_server.go:72] duration metric: took 1.021572879s to wait for apiserver process to appear ...
	I0819 19:12:17.797554  438245 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:12:17.797596  438245 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8444/healthz ...
	I0819 19:12:17.798191  438245 api_server.go:269] stopped: https://192.168.61.48:8444/healthz: Get "https://192.168.61.48:8444/healthz": dial tcp 192.168.61.48:8444: connect: connection refused
	I0819 19:12:18.297907  438245 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8444/healthz ...
	I0819 19:12:20.177305  438245 api_server.go:279] https://192.168.61.48:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 19:12:20.177345  438245 api_server.go:103] status: https://192.168.61.48:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 19:12:20.177367  438245 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8444/healthz ...
	I0819 19:12:20.244091  438245 api_server.go:279] https://192.168.61.48:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 19:12:20.244140  438245 api_server.go:103] status: https://192.168.61.48:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 19:12:20.298403  438245 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8444/healthz ...
	I0819 19:12:20.304289  438245 api_server.go:279] https://192.168.61.48:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:12:20.304325  438245 api_server.go:103] status: https://192.168.61.48:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:12:20.797876  438245 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8444/healthz ...
	I0819 19:12:20.803894  438245 api_server.go:279] https://192.168.61.48:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:12:20.803935  438245 api_server.go:103] status: https://192.168.61.48:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:12:21.298284  438245 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8444/healthz ...
	I0819 19:12:21.320292  438245 api_server.go:279] https://192.168.61.48:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:12:21.320320  438245 api_server.go:103] status: https://192.168.61.48:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:12:21.797829  438245 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8444/healthz ...
	I0819 19:12:21.802183  438245 api_server.go:279] https://192.168.61.48:8444/healthz returned 200:
	ok
	I0819 19:12:21.809866  438245 api_server.go:141] control plane version: v1.31.0
	I0819 19:12:21.809902  438245 api_server.go:131] duration metric: took 4.012339897s to wait for apiserver health ...
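	The healthz wait above repeatedly GETs https://192.168.61.48:8444/healthz and treats 403 and 500 responses as "not ready yet" until a 200 arrives. A minimal Go sketch of such a polling loop; it skips TLS verification and sends no credentials, purely for illustration, whereas the real client authenticates with the cluster CA and client certificates:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// 200 OK or the deadline passes. Any error or non-200 status simply means
// "not healthy yet" and the loop retries after a short pause.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustration only: do not skip certificate verification in real code.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.48:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver is healthy")
}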
	I0819 19:12:21.809914  438245 cni.go:84] Creating CNI manager for ""
	I0819 19:12:21.809944  438245 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:12:21.811668  438245 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 19:12:21.813183  438245 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 19:12:21.826170  438245 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 19:12:21.850473  438245 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:12:21.865379  438245 system_pods.go:59] 8 kube-system pods found
	I0819 19:12:21.865422  438245 system_pods.go:61] "coredns-6f6b679f8f-dwbnt" [9b8d7ee3-15ca-475b-b659-d5c3b10890fe] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 19:12:21.865442  438245 system_pods.go:61] "etcd-default-k8s-diff-port-982795" [6686e6f6-485d-4c57-89a1-af4f27b6216e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 19:12:21.865455  438245 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-982795" [fcfb5a0d-6d6c-4c30-a17f-43106f3dd5ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 19:12:21.865475  438245 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-982795" [346bf3b5-57e7-4f30-a6ed-959dc9e8941d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 19:12:21.865485  438245 system_pods.go:61] "kube-proxy-wrczx" [acabdc8e-5397-4531-afcb-57a8f4c48618] Running
	I0819 19:12:21.865493  438245 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-982795" [82de0c57-e712-4c0c-b751-a17cb0dd75b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 19:12:21.865503  438245 system_pods.go:61] "metrics-server-6867b74b74-5hlnx" [394c87af-a198-4fea-8a30-32a8c3e80884] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:12:21.865522  438245 system_pods.go:61] "storage-provisioner" [35f70989-846d-4ec5-b879-a22625ee94ce] Running
	I0819 19:12:21.865534  438245 system_pods.go:74] duration metric: took 15.035147ms to wait for pod list to return data ...
	I0819 19:12:21.865545  438245 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:12:21.870314  438245 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:12:21.870350  438245 node_conditions.go:123] node cpu capacity is 2
	I0819 19:12:21.870366  438245 node_conditions.go:105] duration metric: took 4.813819ms to run NodePressure ...
	I0819 19:12:21.870390  438245 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:22.130916  438245 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 19:12:22.134889  438245 kubeadm.go:739] kubelet initialised
	I0819 19:12:22.134912  438245 kubeadm.go:740] duration metric: took 3.970465ms waiting for restarted kubelet to initialise ...
	I0819 19:12:22.134920  438245 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:12:22.139345  438245 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-dwbnt" in "kube-system" namespace to be "Ready" ...
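	The pod_ready waits that follow keep re-reading each system-critical pod and checking its Ready condition. A short client-go sketch of that check, with an illustrative kubeconfig path and the coredns pod name taken from the log above:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True, which is
// the condition the wait above keeps re-checking.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path is an assumption for illustration.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pod, err := clientset.CoreV1().Pods("kube-system").Get(ctx, "coredns-6f6b679f8f-dwbnt", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod to be Ready")
			return
		case <-time.After(2 * time.Second):
		}
	}
}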
	I0819 19:12:20.672189  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:20.672655  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:20.672682  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:20.672613  439540 retry.go:31] will retry after 2.76896974s: waiting for machine to come up
	I0819 19:12:23.442804  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:23.443223  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:23.443268  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:23.443170  439540 retry.go:31] will retry after 4.199459292s: waiting for machine to come up
	I0819 19:12:24.145329  438245 pod_ready.go:103] pod "coredns-6f6b679f8f-dwbnt" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:26.645695  438245 pod_ready.go:103] pod "coredns-6f6b679f8f-dwbnt" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:27.644842  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.645376  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has current primary IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.645403  438295 main.go:141] libmachine: (embed-certs-024748) Found IP for machine: 192.168.72.96
	I0819 19:12:27.645417  438295 main.go:141] libmachine: (embed-certs-024748) Reserving static IP address...
	I0819 19:12:27.645874  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "embed-certs-024748", mac: "52:54:00:f0:8b:43", ip: "192.168.72.96"} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:27.645902  438295 main.go:141] libmachine: (embed-certs-024748) Reserved static IP address: 192.168.72.96
	I0819 19:12:27.645919  438295 main.go:141] libmachine: (embed-certs-024748) DBG | skip adding static IP to network mk-embed-certs-024748 - found existing host DHCP lease matching {name: "embed-certs-024748", mac: "52:54:00:f0:8b:43", ip: "192.168.72.96"}
	I0819 19:12:27.645952  438295 main.go:141] libmachine: (embed-certs-024748) Waiting for SSH to be available...
	I0819 19:12:27.645974  438295 main.go:141] libmachine: (embed-certs-024748) DBG | Getting to WaitForSSH function...
	I0819 19:12:27.648195  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.648471  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:27.648496  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.648717  438295 main.go:141] libmachine: (embed-certs-024748) DBG | Using SSH client type: external
	I0819 19:12:27.648744  438295 main.go:141] libmachine: (embed-certs-024748) DBG | Using SSH private key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa (-rw-------)
	I0819 19:12:27.648773  438295 main.go:141] libmachine: (embed-certs-024748) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.96 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 19:12:27.648792  438295 main.go:141] libmachine: (embed-certs-024748) DBG | About to run SSH command:
	I0819 19:12:27.648808  438295 main.go:141] libmachine: (embed-certs-024748) DBG | exit 0
	I0819 19:12:27.775964  438295 main.go:141] libmachine: (embed-certs-024748) DBG | SSH cmd err, output: <nil>: 
	I0819 19:12:27.776344  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetConfigRaw
	I0819 19:12:27.777100  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetIP
	I0819 19:12:27.780096  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.780535  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:27.780570  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.780936  438295 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748/config.json ...
	I0819 19:12:27.781721  438295 machine.go:93] provisionDockerMachine start ...
	I0819 19:12:27.781748  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:27.781974  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:27.784482  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.784838  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:27.784868  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.785066  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:27.785254  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:27.785452  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:27.785617  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:27.785789  438295 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:27.786038  438295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0819 19:12:27.786059  438295 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 19:12:27.904337  438295 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 19:12:27.904375  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetMachineName
	I0819 19:12:27.904675  438295 buildroot.go:166] provisioning hostname "embed-certs-024748"
	I0819 19:12:27.904711  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetMachineName
	I0819 19:12:27.904932  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:27.907960  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.908325  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:27.908354  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.908446  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:27.908659  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:27.908825  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:27.909012  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:27.909234  438295 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:27.909441  438295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0819 19:12:27.909458  438295 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-024748 && echo "embed-certs-024748" | sudo tee /etc/hostname
	I0819 19:12:28.036564  438295 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-024748
	
	I0819 19:12:28.036597  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:28.039385  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.039798  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:28.039827  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.040071  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:28.040327  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:28.040493  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:28.040652  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:28.040882  438295 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:28.041113  438295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0819 19:12:28.041138  438295 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-024748' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-024748/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-024748' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 19:12:28.162311  438295 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:12:28.162348  438295 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19468-372744/.minikube CaCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19468-372744/.minikube}
	I0819 19:12:28.162368  438295 buildroot.go:174] setting up certificates
	I0819 19:12:28.162376  438295 provision.go:84] configureAuth start
	I0819 19:12:28.162385  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetMachineName
	I0819 19:12:28.162703  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetIP
	I0819 19:12:28.165171  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.165563  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:28.165593  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.165727  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:28.167917  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.168199  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:28.168221  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.168411  438295 provision.go:143] copyHostCerts
	I0819 19:12:28.168469  438295 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem, removing ...
	I0819 19:12:28.168491  438295 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 19:12:28.168560  438295 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem (1082 bytes)
	I0819 19:12:28.168693  438295 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem, removing ...
	I0819 19:12:28.168704  438295 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 19:12:28.168736  438295 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem (1123 bytes)
	I0819 19:12:28.168814  438295 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem, removing ...
	I0819 19:12:28.168824  438295 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 19:12:28.168853  438295 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem (1675 bytes)
	I0819 19:12:28.168942  438295 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem org=jenkins.embed-certs-024748 san=[127.0.0.1 192.168.72.96 embed-certs-024748 localhost minikube]
	I0819 19:12:28.447064  438295 provision.go:177] copyRemoteCerts
	I0819 19:12:28.447129  438295 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 19:12:28.447158  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:28.449851  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.450138  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:28.450163  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.450344  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:28.450541  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:28.450713  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:28.450832  438295 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa Username:docker}
	I0819 19:12:28.537815  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 19:12:28.562408  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0819 19:12:28.586728  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 19:12:28.611119  438295 provision.go:87] duration metric: took 448.726133ms to configureAuth
	I0819 19:12:28.611158  438295 buildroot.go:189] setting minikube options for container-runtime
	I0819 19:12:28.611351  438295 config.go:182] Loaded profile config "embed-certs-024748": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:12:28.611428  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:28.614168  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.614543  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:28.614571  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.614736  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:28.614941  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:28.615083  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:28.615192  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:28.615302  438295 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:28.615454  438295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0819 19:12:28.615469  438295 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 19:12:28.890054  438295 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 19:12:28.890086  438295 machine.go:96] duration metric: took 1.10834874s to provisionDockerMachine
	I0819 19:12:28.890100  438295 start.go:293] postStartSetup for "embed-certs-024748" (driver="kvm2")
	I0819 19:12:28.890120  438295 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 19:12:28.890146  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:28.890469  438295 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 19:12:28.890499  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:28.893251  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.893579  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:28.893605  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.893733  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:28.893895  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:28.894102  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:28.894220  438295 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa Username:docker}
	I0819 19:12:28.979381  438295 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 19:12:28.983921  438295 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 19:12:28.983952  438295 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/addons for local assets ...
	I0819 19:12:28.984048  438295 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/files for local assets ...
	I0819 19:12:28.984156  438295 filesync.go:149] local asset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> 3800092.pem in /etc/ssl/certs
	I0819 19:12:28.984250  438295 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 19:12:28.994964  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:12:29.018801  438295 start.go:296] duration metric: took 128.685446ms for postStartSetup
	I0819 19:12:29.018843  438295 fix.go:56] duration metric: took 20.282076509s for fixHost
	I0819 19:12:29.018870  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:29.021554  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:29.021848  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:29.021875  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:29.022066  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:29.022261  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:29.022428  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:29.022526  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:29.022678  438295 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:29.022900  438295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0819 19:12:29.022915  438295 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 19:12:29.132976  438716 start.go:364] duration metric: took 3m58.489348567s to acquireMachinesLock for "old-k8s-version-104669"
	I0819 19:12:29.133047  438716 start.go:96] Skipping create...Using existing machine configuration
	I0819 19:12:29.133055  438716 fix.go:54] fixHost starting: 
	I0819 19:12:29.133485  438716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:29.133524  438716 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:29.151330  438716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39213
	I0819 19:12:29.151778  438716 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:29.152271  438716 main.go:141] libmachine: Using API Version  1
	I0819 19:12:29.152301  438716 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:29.152682  438716 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:29.152883  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:12:29.153065  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetState
	I0819 19:12:29.154399  438716 fix.go:112] recreateIfNeeded on old-k8s-version-104669: state=Stopped err=<nil>
	I0819 19:12:29.154444  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	W0819 19:12:29.154684  438716 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 19:12:29.156349  438716 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-104669" ...
	I0819 19:12:29.157631  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .Start
	I0819 19:12:29.157825  438716 main.go:141] libmachine: (old-k8s-version-104669) Ensuring networks are active...
	I0819 19:12:29.158635  438716 main.go:141] libmachine: (old-k8s-version-104669) Ensuring network default is active
	I0819 19:12:29.159041  438716 main.go:141] libmachine: (old-k8s-version-104669) Ensuring network mk-old-k8s-version-104669 is active
	I0819 19:12:29.159509  438716 main.go:141] libmachine: (old-k8s-version-104669) Getting domain xml...
	I0819 19:12:29.160383  438716 main.go:141] libmachine: (old-k8s-version-104669) Creating domain...
	I0819 19:12:30.452488  438716 main.go:141] libmachine: (old-k8s-version-104669) Waiting to get IP...
	I0819 19:12:30.453743  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:30.454237  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:30.454323  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:30.454193  439728 retry.go:31] will retry after 197.440033ms: waiting for machine to come up
	I0819 19:12:29.132812  438295 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724094749.105537362
	
	I0819 19:12:29.132839  438295 fix.go:216] guest clock: 1724094749.105537362
	I0819 19:12:29.132850  438295 fix.go:229] Guest: 2024-08-19 19:12:29.105537362 +0000 UTC Remote: 2024-08-19 19:12:29.018848957 +0000 UTC m=+300.015027560 (delta=86.688405ms)
	I0819 19:12:29.132877  438295 fix.go:200] guest clock delta is within tolerance: 86.688405ms
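	The guest clock check above runs date +%s.%N inside the VM over SSH, compares the result with the host clock, and accepts the observed ~86ms skew as within tolerance. A tiny Go sketch of that comparison; the one-second tolerance is an assumption for illustration, not necessarily the value minikube uses:

package main

import (
	"fmt"
	"time"
)

// clockDeltaWithinTolerance compares the timestamp reported by the guest with
// the host's clock and accepts small skew, mirroring the fix.go check above.
func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(86688405 * time.Nanosecond) // the 86.688405ms delta observed in the log above
	delta, ok := clockDeltaWithinTolerance(guest, host, time.Second)
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, ok)
}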
	I0819 19:12:29.132884  438295 start.go:83] releasing machines lock for "embed-certs-024748", held for 20.396159242s
	I0819 19:12:29.132912  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:29.133179  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetIP
	I0819 19:12:29.136110  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:29.136532  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:29.136565  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:29.136750  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:29.137307  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:29.137532  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:29.137616  438295 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 19:12:29.137690  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:29.137758  438295 ssh_runner.go:195] Run: cat /version.json
	I0819 19:12:29.137781  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:29.140500  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:29.140820  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:29.140870  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:29.140903  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:29.141067  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:29.141266  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:29.141385  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:29.141430  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:29.141443  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:29.141586  438295 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa Username:docker}
	I0819 19:12:29.141639  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:29.141790  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:29.141957  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:29.142123  438295 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa Username:docker}
	I0819 19:12:29.242886  438295 ssh_runner.go:195] Run: systemctl --version
	I0819 19:12:29.249276  438295 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 19:12:29.393872  438295 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 19:12:29.401874  438295 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 19:12:29.401954  438295 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 19:12:29.421973  438295 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 19:12:29.422004  438295 start.go:495] detecting cgroup driver to use...
	I0819 19:12:29.422081  438295 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 19:12:29.442823  438295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 19:12:29.462663  438295 docker.go:217] disabling cri-docker service (if available) ...
	I0819 19:12:29.462720  438295 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 19:12:29.477896  438295 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 19:12:29.492591  438295 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 19:12:29.613759  438295 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 19:12:29.770719  438295 docker.go:233] disabling docker service ...
	I0819 19:12:29.770805  438295 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 19:12:29.785787  438295 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 19:12:29.802879  438295 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 19:12:29.947633  438295 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 19:12:30.082602  438295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 19:12:30.097628  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 19:12:30.118671  438295 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 19:12:30.118735  438295 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:30.131287  438295 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 19:12:30.131354  438295 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:30.143008  438295 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:30.156358  438295 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:30.172123  438295 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 19:12:30.188196  438295 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:30.201487  438295 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:30.219887  438295 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:30.235685  438295 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 19:12:30.246112  438295 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 19:12:30.246202  438295 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 19:12:30.259732  438295 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 19:12:30.269866  438295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:12:30.397522  438295 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 19:12:30.545249  438295 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 19:12:30.545349  438295 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 19:12:30.550473  438295 start.go:563] Will wait 60s for crictl version
	I0819 19:12:30.550528  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:12:30.554782  438295 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 19:12:30.597634  438295 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 19:12:30.597736  438295 ssh_runner.go:195] Run: crio --version
	I0819 19:12:30.628137  438295 ssh_runner.go:195] Run: crio --version
	I0819 19:12:30.660912  438295 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
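(The CRI-O preparation above reduces to a handful of guest-side commands; a minimal shell sketch of the same steps, with paths, pause image tag and cgroup driver taken from the log rather than authoritative:

  # point crictl at the CRI-O socket
  printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
  # pause image and cgroup driver in minikube's CRI-O drop-in config
  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
  # netfilter prerequisites, then restart the runtime
  sudo modprobe br_netfilter
  echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
  sudo systemctl daemon-reload && sudo systemctl restart crio
)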
	I0819 19:12:29.146475  438245 pod_ready.go:103] pod "coredns-6f6b679f8f-dwbnt" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:31.147618  438245 pod_ready.go:93] pod "coredns-6f6b679f8f-dwbnt" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:31.147651  438245 pod_ready.go:82] duration metric: took 9.00827926s for pod "coredns-6f6b679f8f-dwbnt" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.147665  438245 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.153305  438245 pod_ready.go:93] pod "etcd-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:31.153331  438245 pod_ready.go:82] duration metric: took 5.657625ms for pod "etcd-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.153347  438245 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.159009  438245 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:31.159037  438245 pod_ready.go:82] duration metric: took 5.680194ms for pod "kube-apiserver-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.159050  438245 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.165478  438245 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:31.165504  438245 pod_ready.go:82] duration metric: took 6.444529ms for pod "kube-controller-manager-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.165517  438245 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-wrczx" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.180293  438245 pod_ready.go:93] pod "kube-proxy-wrczx" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:31.180324  438245 pod_ready.go:82] duration metric: took 14.798883ms for pod "kube-proxy-wrczx" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.180337  438245 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
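(pod_ready.go is polling each control-plane pod's Ready condition; roughly the same check done by hand with kubectl, pod names taken from the log and the kubeconfig assumed to be the current context — a sketch only:

  # read the Ready condition pod_ready.go waits on
  kubectl -n kube-system get pod coredns-6f6b679f8f-dwbnt \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
  # or block until it flips, with the same 4m budget as the log
  kubectl -n kube-system wait --for=condition=Ready \
    pod/kube-scheduler-default-k8s-diff-port-982795 --timeout=4m0s
)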
	I0819 19:12:30.662168  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetIP
	I0819 19:12:30.665057  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:30.665455  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:30.665486  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:30.665660  438295 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0819 19:12:30.669911  438295 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:12:30.682755  438295 kubeadm.go:883] updating cluster {Name:embed-certs-024748 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:embed-certs-024748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.96 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 19:12:30.682883  438295 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 19:12:30.682936  438295 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:12:30.724160  438295 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 19:12:30.724233  438295 ssh_runner.go:195] Run: which lz4
	I0819 19:12:30.728710  438295 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 19:12:30.733279  438295 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 19:12:30.733317  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 19:12:32.178568  438295 crio.go:462] duration metric: took 1.449881121s to copy over tarball
	I0819 19:12:32.178642  438295 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
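(The preload path above is: copy the cached image tarball into the guest, unpack it under /var so CRI-O's storage is pre-populated, remove the tarball, then re-run crictl to confirm the images landed; a sketch of those guest-side steps with file names from the log:

  # extract the preloaded image tarball into /var (container storage)
  sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
  sudo rm -f /preloaded.tar.lz4
  # confirm the expected kube images are now present
  sudo crictl images --output json | grep kube-apiserver
)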
	I0819 19:12:30.653917  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:30.654521  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:30.654566  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:30.654436  439728 retry.go:31] will retry after 317.038756ms: waiting for machine to come up
	I0819 19:12:30.973003  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:30.973530  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:30.973560  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:30.973487  439728 retry.go:31] will retry after 486.945032ms: waiting for machine to come up
	I0819 19:12:31.461937  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:31.462438  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:31.462470  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:31.462389  439728 retry.go:31] will retry after 441.288745ms: waiting for machine to come up
	I0819 19:12:31.904947  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:31.905564  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:31.905617  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:31.905472  439728 retry.go:31] will retry after 752.583403ms: waiting for machine to come up
	I0819 19:12:32.659642  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:32.660175  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:32.660207  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:32.660128  439728 retry.go:31] will retry after 932.705928ms: waiting for machine to come up
	I0819 19:12:33.594983  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:33.595529  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:33.595556  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:33.595466  439728 retry.go:31] will retry after 936.558157ms: waiting for machine to come up
	I0819 19:12:34.533158  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:34.533717  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:34.533743  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:34.533656  439728 retry.go:31] will retry after 1.435945188s: waiting for machine to come up
	I0819 19:12:33.186835  438245 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:35.187500  438245 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:35.686905  438245 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:35.686932  438245 pod_ready.go:82] duration metric: took 4.50658625s for pod "kube-scheduler-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:35.686945  438245 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:34.321347  438295 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.14267077s)
	I0819 19:12:34.321379  438295 crio.go:469] duration metric: took 2.142777016s to extract the tarball
	I0819 19:12:34.321390  438295 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 19:12:34.357670  438295 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:12:34.403313  438295 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 19:12:34.403344  438295 cache_images.go:84] Images are preloaded, skipping loading
	I0819 19:12:34.403358  438295 kubeadm.go:934] updating node { 192.168.72.96 8443 v1.31.0 crio true true} ...
	I0819 19:12:34.403495  438295 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-024748 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.96
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-024748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 19:12:34.403576  438295 ssh_runner.go:195] Run: crio config
	I0819 19:12:34.450415  438295 cni.go:84] Creating CNI manager for ""
	I0819 19:12:34.450443  438295 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:12:34.450461  438295 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 19:12:34.450490  438295 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.96 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-024748 NodeName:embed-certs-024748 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.96"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.96 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 19:12:34.450646  438295 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.96
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-024748"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.96
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.96"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 19:12:34.450723  438295 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 19:12:34.461183  438295 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 19:12:34.461313  438295 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 19:12:34.470516  438295 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0819 19:12:34.488844  438295 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 19:12:34.505450  438295 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0819 19:12:34.522456  438295 ssh_runner.go:195] Run: grep 192.168.72.96	control-plane.minikube.internal$ /etc/hosts
	I0819 19:12:34.526272  438295 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.96	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:12:34.539079  438295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:12:34.665665  438295 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:12:34.683237  438295 certs.go:68] Setting up /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748 for IP: 192.168.72.96
	I0819 19:12:34.683265  438295 certs.go:194] generating shared ca certs ...
	I0819 19:12:34.683287  438295 certs.go:226] acquiring lock for ca certs: {Name:mk639e03f593e0bccac045f6e9f5ba3b96cc81e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:34.683471  438295 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key
	I0819 19:12:34.683536  438295 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key
	I0819 19:12:34.683550  438295 certs.go:256] generating profile certs ...
	I0819 19:12:34.683687  438295 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748/client.key
	I0819 19:12:34.683776  438295 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748/apiserver.key.89193d03
	I0819 19:12:34.683828  438295 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748/proxy-client.key
	I0819 19:12:34.683991  438295 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem (1338 bytes)
	W0819 19:12:34.684035  438295 certs.go:480] ignoring /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009_empty.pem, impossibly tiny 0 bytes
	I0819 19:12:34.684047  438295 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 19:12:34.684074  438295 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem (1082 bytes)
	I0819 19:12:34.684112  438295 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem (1123 bytes)
	I0819 19:12:34.684159  438295 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem (1675 bytes)
	I0819 19:12:34.684224  438295 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:12:34.685127  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 19:12:34.718591  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 19:12:34.758439  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 19:12:34.790143  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 19:12:34.828113  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0819 19:12:34.860389  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 19:12:34.898361  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 19:12:34.924677  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 19:12:34.951630  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /usr/share/ca-certificates/3800092.pem (1708 bytes)
	I0819 19:12:34.977435  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 19:12:35.002048  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem --> /usr/share/ca-certificates/380009.pem (1338 bytes)
	I0819 19:12:35.026934  438295 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 19:12:35.044476  438295 ssh_runner.go:195] Run: openssl version
	I0819 19:12:35.050174  438295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3800092.pem && ln -fs /usr/share/ca-certificates/3800092.pem /etc/ssl/certs/3800092.pem"
	I0819 19:12:35.061299  438295 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3800092.pem
	I0819 19:12:35.065978  438295 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:56 /usr/share/ca-certificates/3800092.pem
	I0819 19:12:35.066047  438295 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3800092.pem
	I0819 19:12:35.072572  438295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3800092.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 19:12:35.083760  438295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 19:12:35.094492  438295 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:35.099152  438295 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:35.099229  438295 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:35.105124  438295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 19:12:35.115950  438295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/380009.pem && ln -fs /usr/share/ca-certificates/380009.pem /etc/ssl/certs/380009.pem"
	I0819 19:12:35.126845  438295 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/380009.pem
	I0819 19:12:35.131568  438295 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:56 /usr/share/ca-certificates/380009.pem
	I0819 19:12:35.131650  438295 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/380009.pem
	I0819 19:12:35.137851  438295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/380009.pem /etc/ssl/certs/51391683.0"
	I0819 19:12:35.148818  438295 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 19:12:35.153800  438295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 19:12:35.159720  438295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 19:12:35.165740  438295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 19:12:35.171705  438295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 19:12:35.177574  438295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 19:12:35.183935  438295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
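(The openssl probes above only ask whether each certificate is still valid 24 hours from now; -checkend 86400 exits non-zero if the cert would expire within that window. The same check by hand, using one of the cert paths from the log:

  # non-zero exit means the cert expires within the next 86400s (24h)
  sudo openssl x509 -noout -checkend 86400 \
    -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
    && echo "still valid" || echo "expiring soon"
)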
	I0819 19:12:35.192681  438295 kubeadm.go:392] StartCluster: {Name:embed-certs-024748 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:embed-certs-024748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.96 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:12:35.192845  438295 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 19:12:35.192908  438295 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:12:35.231688  438295 cri.go:89] found id: ""
	I0819 19:12:35.231791  438295 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 19:12:35.242835  438295 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 19:12:35.242859  438295 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 19:12:35.242944  438295 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 19:12:35.255695  438295 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:12:35.257036  438295 kubeconfig.go:125] found "embed-certs-024748" server: "https://192.168.72.96:8443"
	I0819 19:12:35.259422  438295 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 19:12:35.271730  438295 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.96
	I0819 19:12:35.271758  438295 kubeadm.go:1160] stopping kube-system containers ...
	I0819 19:12:35.271772  438295 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 19:12:35.271820  438295 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:12:35.321065  438295 cri.go:89] found id: ""
	I0819 19:12:35.321155  438295 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 19:12:35.337802  438295 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:12:35.347699  438295 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:12:35.347726  438295 kubeadm.go:157] found existing configuration files:
	
	I0819 19:12:35.347785  438295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 19:12:35.357108  438295 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:12:35.357178  438295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:12:35.366805  438295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 19:12:35.376864  438295 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:12:35.376938  438295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:12:35.387018  438295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 19:12:35.396966  438295 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:12:35.397045  438295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:12:35.406192  438295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 19:12:35.415325  438295 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:12:35.415401  438295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 19:12:35.424450  438295 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 19:12:35.433931  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:35.549294  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:36.306930  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:36.517086  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:36.587680  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
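(Because the stale-config check above found no kubeconfig files, minikube rebuilds the control plane piecewise rather than running a full kubeadm init; the phase sequence it runs is shown verbatim above, and a condensed sketch of the same loop would be:

  # same phases, in order: certs, kubeconfigs, kubelet, static pod manifests, etcd
  for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
    sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" \
      kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
  done
)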
	I0819 19:12:36.680728  438295 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:12:36.680825  438295 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:37.181054  438295 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:37.681059  438295 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:38.181588  438295 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:38.197155  438295 api_server.go:72] duration metric: took 1.516436456s to wait for apiserver process to appear ...
	I0819 19:12:38.197184  438295 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:12:38.197212  438295 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0819 19:12:35.971138  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:35.971576  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:35.971607  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:35.971514  439728 retry.go:31] will retry after 1.521077744s: waiting for machine to come up
	I0819 19:12:37.493931  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:37.494389  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:37.494415  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:37.494361  439728 retry.go:31] will retry after 1.632508579s: waiting for machine to come up
	I0819 19:12:39.128939  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:39.129429  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:39.129456  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:39.129392  439728 retry.go:31] will retry after 2.634061376s: waiting for machine to come up
	I0819 19:12:40.567608  438295 api_server.go:279] https://192.168.72.96:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 19:12:40.567654  438295 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 19:12:40.567669  438295 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0819 19:12:40.593405  438295 api_server.go:279] https://192.168.72.96:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 19:12:40.593456  438295 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 19:12:40.697607  438295 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0819 19:12:40.713767  438295 api_server.go:279] https://192.168.72.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:12:40.713806  438295 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:12:41.197299  438295 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0819 19:12:41.203307  438295 api_server.go:279] https://192.168.72.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:12:41.203338  438295 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:12:41.697903  438295 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0819 19:12:41.705142  438295 api_server.go:279] https://192.168.72.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:12:41.705174  438295 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:12:42.197361  438295 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0819 19:12:42.202272  438295 api_server.go:279] https://192.168.72.96:8443/healthz returned 200:
	ok
	I0819 19:12:42.209788  438295 api_server.go:141] control plane version: v1.31.0
	I0819 19:12:42.209819  438295 api_server.go:131] duration metric: took 4.012627755s to wait for apiserver health ...
	I0819 19:12:42.209829  438295 cni.go:84] Creating CNI manager for ""
	I0819 19:12:42.209836  438295 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:12:42.211612  438295 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 19:12:37.693171  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:39.693397  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:41.693523  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:42.212889  438295 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 19:12:42.223277  438295 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 19:12:42.242392  438295 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:12:42.256273  438295 system_pods.go:59] 8 kube-system pods found
	I0819 19:12:42.256321  438295 system_pods.go:61] "coredns-6f6b679f8f-7ww4z" [bbde00d4-6027-4d8d-b51e-bd68915da166] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 19:12:42.256331  438295 system_pods.go:61] "etcd-embed-certs-024748" [846ff0f0-5399-43fd-8e7b-1f64997cd291] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 19:12:42.256348  438295 system_pods.go:61] "kube-apiserver-embed-certs-024748" [3ff558d6-e82e-47a0-bb81-15244bee6470] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 19:12:42.256366  438295 system_pods.go:61] "kube-controller-manager-embed-certs-024748" [993b82ba-e8e7-4896-a06b-87c4f08d5985] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 19:12:42.256383  438295 system_pods.go:61] "kube-proxy-bmmbh" [1f77f152-f5f4-40f6-9632-1eaa36b9ea31] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0819 19:12:42.256393  438295 system_pods.go:61] "kube-scheduler-embed-certs-024748" [34684d4c-2479-45c5-883b-158cf9f974f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 19:12:42.256403  438295 system_pods.go:61] "metrics-server-6867b74b74-kxcwh" [15f86629-d916-4fdc-9ecf-9cb1b6c83f85] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:12:42.256409  438295 system_pods.go:61] "storage-provisioner" [7acb6ce1-21b6-4cdd-a5cb-76d694fc0a38] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0819 19:12:42.256418  438295 system_pods.go:74] duration metric: took 14.004598ms to wait for pod list to return data ...
	I0819 19:12:42.256428  438295 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:12:42.263308  438295 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:12:42.263340  438295 node_conditions.go:123] node cpu capacity is 2
	I0819 19:12:42.263354  438295 node_conditions.go:105] duration metric: took 6.920993ms to run NodePressure ...
	I0819 19:12:42.263376  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:42.533917  438295 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 19:12:42.545853  438295 kubeadm.go:739] kubelet initialised
	I0819 19:12:42.545886  438295 kubeadm.go:740] duration metric: took 11.931664ms waiting for restarted kubelet to initialise ...
	I0819 19:12:42.545899  438295 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:12:42.553125  438295 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-7ww4z" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:42.559120  438295 pod_ready.go:98] node "embed-certs-024748" hosting pod "coredns-6f6b679f8f-7ww4z" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:42.559148  438295 pod_ready.go:82] duration metric: took 5.984169ms for pod "coredns-6f6b679f8f-7ww4z" in "kube-system" namespace to be "Ready" ...
	E0819 19:12:42.559158  438295 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-024748" hosting pod "coredns-6f6b679f8f-7ww4z" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:42.559164  438295 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:42.564830  438295 pod_ready.go:98] node "embed-certs-024748" hosting pod "etcd-embed-certs-024748" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:42.564852  438295 pod_ready.go:82] duration metric: took 5.681326ms for pod "etcd-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	E0819 19:12:42.564860  438295 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-024748" hosting pod "etcd-embed-certs-024748" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:42.564867  438295 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:42.571982  438295 pod_ready.go:98] node "embed-certs-024748" hosting pod "kube-apiserver-embed-certs-024748" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:42.572027  438295 pod_ready.go:82] duration metric: took 7.150945ms for pod "kube-apiserver-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	E0819 19:12:42.572038  438295 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-024748" hosting pod "kube-apiserver-embed-certs-024748" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:42.572045  438295 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:42.648692  438295 pod_ready.go:98] node "embed-certs-024748" hosting pod "kube-controller-manager-embed-certs-024748" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:42.648721  438295 pod_ready.go:82] duration metric: took 76.665633ms for pod "kube-controller-manager-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	E0819 19:12:42.648730  438295 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-024748" hosting pod "kube-controller-manager-embed-certs-024748" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:42.648737  438295 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-bmmbh" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:43.045619  438295 pod_ready.go:98] node "embed-certs-024748" hosting pod "kube-proxy-bmmbh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:43.045648  438295 pod_ready.go:82] duration metric: took 396.90414ms for pod "kube-proxy-bmmbh" in "kube-system" namespace to be "Ready" ...
	E0819 19:12:43.045658  438295 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-024748" hosting pod "kube-proxy-bmmbh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:43.045665  438295 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:43.446302  438295 pod_ready.go:98] node "embed-certs-024748" hosting pod "kube-scheduler-embed-certs-024748" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:43.446331  438295 pod_ready.go:82] duration metric: took 400.658861ms for pod "kube-scheduler-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	E0819 19:12:43.446342  438295 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-024748" hosting pod "kube-scheduler-embed-certs-024748" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:43.446359  438295 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:43.845457  438295 pod_ready.go:98] node "embed-certs-024748" hosting pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:43.845488  438295 pod_ready.go:82] duration metric: took 399.120328ms for pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace to be "Ready" ...
	E0819 19:12:43.845499  438295 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-024748" hosting pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:43.845506  438295 pod_ready.go:39] duration metric: took 1.299593775s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:12:43.845526  438295 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 19:12:43.864357  438295 ops.go:34] apiserver oom_adj: -16
	I0819 19:12:43.864384  438295 kubeadm.go:597] duration metric: took 8.621518076s to restartPrimaryControlPlane
	I0819 19:12:43.864394  438295 kubeadm.go:394] duration metric: took 8.671725617s to StartCluster
	I0819 19:12:43.864414  438295 settings.go:142] acquiring lock: {Name:mk396fcf49a1d0e69583cf37ff3c819e37118163 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:43.864495  438295 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 19:12:43.866775  438295 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/kubeconfig: {Name:mk8e7b4e1bb7da665111d2acd83eb48882c66853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:43.867073  438295 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.96 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 19:12:43.867296  438295 config.go:182] Loaded profile config "embed-certs-024748": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:12:43.867195  438295 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 19:12:43.867354  438295 addons.go:69] Setting metrics-server=true in profile "embed-certs-024748"
	I0819 19:12:43.867362  438295 addons.go:69] Setting default-storageclass=true in profile "embed-certs-024748"
	I0819 19:12:43.867397  438295 addons.go:234] Setting addon metrics-server=true in "embed-certs-024748"
	I0819 19:12:43.867402  438295 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-024748"
	W0819 19:12:43.867409  438295 addons.go:243] addon metrics-server should already be in state true
	I0819 19:12:43.867437  438295 host.go:66] Checking if "embed-certs-024748" exists ...
	I0819 19:12:43.867354  438295 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-024748"
	I0819 19:12:43.867502  438295 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-024748"
	W0819 19:12:43.867514  438295 addons.go:243] addon storage-provisioner should already be in state true
	I0819 19:12:43.867538  438295 host.go:66] Checking if "embed-certs-024748" exists ...
	I0819 19:12:43.867761  438295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:43.867796  438295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:43.867839  438295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:43.867873  438295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:43.867889  438295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:43.867908  438295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:43.869989  438295 out.go:177] * Verifying Kubernetes components...
	I0819 19:12:43.871464  438295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:12:43.883655  438295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33557
	I0819 19:12:43.883871  438295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33763
	I0819 19:12:43.884279  438295 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:43.884323  438295 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:43.884790  438295 main.go:141] libmachine: Using API Version  1
	I0819 19:12:43.884809  438295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:43.884935  438295 main.go:141] libmachine: Using API Version  1
	I0819 19:12:43.884953  438295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:43.885204  438295 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:43.885275  438295 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:43.885380  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetState
	I0819 19:12:43.885886  438295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:43.885928  438295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:43.886840  438295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40467
	I0819 19:12:43.887309  438295 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:43.887792  438295 main.go:141] libmachine: Using API Version  1
	I0819 19:12:43.887802  438295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:43.888109  438295 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:43.888670  438295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:43.888697  438295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:43.888973  438295 addons.go:234] Setting addon default-storageclass=true in "embed-certs-024748"
	W0819 19:12:43.888988  438295 addons.go:243] addon default-storageclass should already be in state true
	I0819 19:12:43.889020  438295 host.go:66] Checking if "embed-certs-024748" exists ...
	I0819 19:12:43.889270  438295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:43.889304  438295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:43.905278  438295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40907
	I0819 19:12:43.905278  438295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41133
	I0819 19:12:43.905734  438295 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:43.905877  438295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33393
	I0819 19:12:43.905983  438295 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:43.906299  438295 main.go:141] libmachine: Using API Version  1
	I0819 19:12:43.906320  438295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:43.906366  438295 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:43.906443  438295 main.go:141] libmachine: Using API Version  1
	I0819 19:12:43.906457  438295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:43.906822  438295 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:43.906898  438295 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:43.906995  438295 main.go:141] libmachine: Using API Version  1
	I0819 19:12:43.907006  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetState
	I0819 19:12:43.907012  438295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:43.907371  438295 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:43.907473  438295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:43.907523  438295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:43.907534  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetState
	I0819 19:12:43.909443  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:43.909529  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:43.911431  438295 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 19:12:43.911437  438295 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:12:43.913061  438295 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 19:12:43.913090  438295 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 19:12:43.913115  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:43.913180  438295 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:12:43.913199  438295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 19:12:43.913216  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:43.916642  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:43.916813  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:43.917110  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:43.917135  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:43.917166  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:43.917193  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:43.917463  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:43.917668  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:43.917671  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:43.917846  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:43.917867  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:43.918014  438295 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa Username:docker}
	I0819 19:12:43.918032  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:43.918148  438295 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa Username:docker}
	I0819 19:12:43.926337  438295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46687
	I0819 19:12:43.926813  438295 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:43.927333  438295 main.go:141] libmachine: Using API Version  1
	I0819 19:12:43.927354  438295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:43.927762  438295 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:43.927965  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetState
	I0819 19:12:43.929591  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:43.929910  438295 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 19:12:43.929926  438295 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 19:12:43.929942  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:43.933032  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:43.933387  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:43.933406  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:43.933626  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:43.933850  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:43.933992  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:43.934118  438295 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa Username:docker}
	I0819 19:12:44.078901  438295 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:12:44.098542  438295 node_ready.go:35] waiting up to 6m0s for node "embed-certs-024748" to be "Ready" ...
	I0819 19:12:44.180050  438295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:12:44.196186  438295 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 19:12:44.196210  438295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 19:12:44.220001  438295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 19:12:44.231145  438295 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 19:12:44.231180  438295 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 19:12:44.267800  438295 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 19:12:44.267831  438295 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 19:12:44.323078  438295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 19:12:45.276298  438295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.096199779s)
	I0819 19:12:45.276336  438295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.056298773s)
	I0819 19:12:45.276383  438295 main.go:141] libmachine: Making call to close driver server
	I0819 19:12:45.276395  438295 main.go:141] libmachine: (embed-certs-024748) Calling .Close
	I0819 19:12:45.276385  438295 main.go:141] libmachine: Making call to close driver server
	I0819 19:12:45.276462  438295 main.go:141] libmachine: (embed-certs-024748) Calling .Close
	I0819 19:12:45.276714  438295 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:12:45.276757  438295 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:12:45.276777  438295 main.go:141] libmachine: Making call to close driver server
	I0819 19:12:45.276793  438295 main.go:141] libmachine: (embed-certs-024748) Calling .Close
	I0819 19:12:45.276860  438295 main.go:141] libmachine: (embed-certs-024748) DBG | Closing plugin on server side
	I0819 19:12:45.276874  438295 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:12:45.276940  438295 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:12:45.276956  438295 main.go:141] libmachine: Making call to close driver server
	I0819 19:12:45.276964  438295 main.go:141] libmachine: (embed-certs-024748) Calling .Close
	I0819 19:12:45.277134  438295 main.go:141] libmachine: (embed-certs-024748) DBG | Closing plugin on server side
	I0819 19:12:45.277195  438295 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:12:45.277239  438295 main.go:141] libmachine: (embed-certs-024748) DBG | Closing plugin on server side
	I0819 19:12:45.277258  438295 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:12:45.277277  438295 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:12:45.277304  438295 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:12:45.284982  438295 main.go:141] libmachine: Making call to close driver server
	I0819 19:12:45.285007  438295 main.go:141] libmachine: (embed-certs-024748) Calling .Close
	I0819 19:12:45.285304  438295 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:12:45.285324  438295 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:12:45.293973  438295 main.go:141] libmachine: Making call to close driver server
	I0819 19:12:45.293994  438295 main.go:141] libmachine: (embed-certs-024748) Calling .Close
	I0819 19:12:45.294247  438295 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:12:45.294265  438295 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:12:45.294274  438295 main.go:141] libmachine: Making call to close driver server
	I0819 19:12:45.294282  438295 main.go:141] libmachine: (embed-certs-024748) Calling .Close
	I0819 19:12:45.295704  438295 main.go:141] libmachine: (embed-certs-024748) DBG | Closing plugin on server side
	I0819 19:12:45.295787  438295 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:12:45.295813  438295 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:12:45.295828  438295 addons.go:475] Verifying addon metrics-server=true in "embed-certs-024748"
	I0819 19:12:45.297684  438295 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0819 19:12:41.765706  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:41.766129  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:41.766182  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:41.766093  439728 retry.go:31] will retry after 3.464758587s: waiting for machine to come up
	I0819 19:12:45.232640  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:45.233118  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:45.233151  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:45.233066  439728 retry.go:31] will retry after 3.551527195s: waiting for machine to come up
	I0819 19:12:43.694387  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:46.194627  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:45.298844  438295 addons.go:510] duration metric: took 1.431699078s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0819 19:12:46.103096  438295 node_ready.go:53] node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:48.603205  438295 node_ready.go:53] node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:50.084809  438001 start.go:364] duration metric: took 55.89796214s to acquireMachinesLock for "no-preload-278232"
	I0819 19:12:50.084884  438001 start.go:96] Skipping create...Using existing machine configuration
	I0819 19:12:50.084895  438001 fix.go:54] fixHost starting: 
	I0819 19:12:50.085416  438001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:50.085459  438001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:50.103796  438001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41569
	I0819 19:12:50.104278  438001 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:50.104900  438001 main.go:141] libmachine: Using API Version  1
	I0819 19:12:50.104934  438001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:50.105335  438001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:50.105544  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:12:50.105703  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetState
	I0819 19:12:50.107422  438001 fix.go:112] recreateIfNeeded on no-preload-278232: state=Stopped err=<nil>
	I0819 19:12:50.107444  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	W0819 19:12:50.107602  438001 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 19:12:50.109328  438001 out.go:177] * Restarting existing kvm2 VM for "no-preload-278232" ...
	I0819 19:12:48.787197  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.787586  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has current primary IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.787611  438716 main.go:141] libmachine: (old-k8s-version-104669) Found IP for machine: 192.168.50.32
	I0819 19:12:48.787625  438716 main.go:141] libmachine: (old-k8s-version-104669) Reserving static IP address...
	I0819 19:12:48.788104  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "old-k8s-version-104669", mac: "52:54:00:8c:ff:a3", ip: "192.168.50.32"} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:48.788140  438716 main.go:141] libmachine: (old-k8s-version-104669) Reserved static IP address: 192.168.50.32
	I0819 19:12:48.788164  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | skip adding static IP to network mk-old-k8s-version-104669 - found existing host DHCP lease matching {name: "old-k8s-version-104669", mac: "52:54:00:8c:ff:a3", ip: "192.168.50.32"}
	I0819 19:12:48.788186  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | Getting to WaitForSSH function...
	I0819 19:12:48.788202  438716 main.go:141] libmachine: (old-k8s-version-104669) Waiting for SSH to be available...
	I0819 19:12:48.790365  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.790765  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:48.790793  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.790994  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | Using SSH client type: external
	I0819 19:12:48.791034  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | Using SSH private key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa (-rw-------)
	I0819 19:12:48.791073  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 19:12:48.791087  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | About to run SSH command:
	I0819 19:12:48.791103  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | exit 0
	I0819 19:12:48.920087  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | SSH cmd err, output: <nil>: 
	I0819 19:12:48.920464  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetConfigRaw
	I0819 19:12:48.921105  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetIP
	I0819 19:12:48.923637  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.924022  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:48.924053  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.924242  438716 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/config.json ...
	I0819 19:12:48.924429  438716 machine.go:93] provisionDockerMachine start ...
	I0819 19:12:48.924447  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:12:48.924655  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:48.926885  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.927345  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:48.927376  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.927527  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:48.927723  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:48.927846  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:48.927968  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:48.928241  438716 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:48.928453  438716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0819 19:12:48.928475  438716 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 19:12:49.039908  438716 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 19:12:49.039944  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetMachineName
	I0819 19:12:49.040200  438716 buildroot.go:166] provisioning hostname "old-k8s-version-104669"
	I0819 19:12:49.040236  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetMachineName
	I0819 19:12:49.040454  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:49.043462  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.043860  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.043892  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.044061  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:49.044256  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.044472  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.044613  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:49.044837  438716 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:49.045014  438716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0819 19:12:49.045027  438716 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-104669 && echo "old-k8s-version-104669" | sudo tee /etc/hostname
	I0819 19:12:49.170660  438716 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-104669
	
	I0819 19:12:49.170695  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:49.173564  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.173855  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.173882  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.174059  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:49.174239  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.174432  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.174564  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:49.174732  438716 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:49.174923  438716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0819 19:12:49.174941  438716 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-104669' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-104669/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-104669' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 19:12:49.298689  438716 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:12:49.298731  438716 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19468-372744/.minikube CaCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19468-372744/.minikube}
	I0819 19:12:49.298764  438716 buildroot.go:174] setting up certificates
	I0819 19:12:49.298778  438716 provision.go:84] configureAuth start
	I0819 19:12:49.298793  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetMachineName
	I0819 19:12:49.299157  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetIP
	I0819 19:12:49.301897  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.302290  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.302326  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.302462  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:49.304592  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.304960  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.304987  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.305150  438716 provision.go:143] copyHostCerts
	I0819 19:12:49.305219  438716 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem, removing ...
	I0819 19:12:49.305243  438716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 19:12:49.305310  438716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem (1082 bytes)
	I0819 19:12:49.305437  438716 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem, removing ...
	I0819 19:12:49.305449  438716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 19:12:49.305477  438716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem (1123 bytes)
	I0819 19:12:49.305571  438716 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem, removing ...
	I0819 19:12:49.305583  438716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 19:12:49.305612  438716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem (1675 bytes)
	I0819 19:12:49.305699  438716 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-104669 san=[127.0.0.1 192.168.50.32 localhost minikube old-k8s-version-104669]
	I0819 19:12:49.394004  438716 provision.go:177] copyRemoteCerts
	I0819 19:12:49.394074  438716 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 19:12:49.394112  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:49.396645  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.396906  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.396951  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.397108  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:49.397321  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.397504  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:49.397709  438716 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa Username:docker}
	I0819 19:12:49.483061  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 19:12:49.508297  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 19:12:49.533821  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0819 19:12:49.560064  438716 provision.go:87] duration metric: took 261.270909ms to configureAuth
	I0819 19:12:49.560093  438716 buildroot.go:189] setting minikube options for container-runtime
	I0819 19:12:49.560310  438716 config.go:182] Loaded profile config "old-k8s-version-104669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0819 19:12:49.560409  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:49.563173  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.563604  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.563633  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.563882  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:49.564075  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.564274  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.564479  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:49.564707  438716 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:49.564925  438716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0819 19:12:49.564948  438716 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 19:12:49.837237  438716 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 19:12:49.837267  438716 machine.go:96] duration metric: took 912.825625ms to provisionDockerMachine
	I0819 19:12:49.837281  438716 start.go:293] postStartSetup for "old-k8s-version-104669" (driver="kvm2")
	I0819 19:12:49.837297  438716 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 19:12:49.837341  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:12:49.837716  438716 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 19:12:49.837757  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:49.840409  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.840759  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.840789  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.840988  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:49.841183  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.841345  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:49.841473  438716 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa Username:docker}
	I0819 19:12:49.931067  438716 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 19:12:49.935562  438716 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 19:12:49.935590  438716 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/addons for local assets ...
	I0819 19:12:49.935694  438716 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/files for local assets ...
	I0819 19:12:49.935815  438716 filesync.go:149] local asset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> 3800092.pem in /etc/ssl/certs
	I0819 19:12:49.935941  438716 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 19:12:49.945418  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:12:49.969454  438716 start.go:296] duration metric: took 132.15677ms for postStartSetup
	I0819 19:12:49.969494  438716 fix.go:56] duration metric: took 20.836438665s for fixHost
	I0819 19:12:49.969517  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:49.972127  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.972502  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.972542  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.972758  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:49.973000  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.973190  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.973355  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:49.973548  438716 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:49.973753  438716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0819 19:12:49.973766  438716 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 19:12:50.084645  438716 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724094770.056929881
	
	I0819 19:12:50.084672  438716 fix.go:216] guest clock: 1724094770.056929881
	I0819 19:12:50.084681  438716 fix.go:229] Guest: 2024-08-19 19:12:50.056929881 +0000 UTC Remote: 2024-08-19 19:12:49.969497734 +0000 UTC m=+259.472837552 (delta=87.432147ms)
	I0819 19:12:50.084711  438716 fix.go:200] guest clock delta is within tolerance: 87.432147ms
	I0819 19:12:50.084718  438716 start.go:83] releasing machines lock for "old-k8s-version-104669", held for 20.951701853s
	I0819 19:12:50.084752  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:12:50.085050  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetIP
	I0819 19:12:50.087976  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:50.088363  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:50.088391  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:50.088572  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:12:50.089141  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:12:50.089360  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:12:50.089460  438716 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 19:12:50.089526  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:50.089572  438716 ssh_runner.go:195] Run: cat /version.json
	I0819 19:12:50.089599  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:50.092427  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:50.092591  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:50.092772  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:50.092797  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:50.092933  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:50.092965  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:50.092965  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:50.093147  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:50.093248  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:50.093328  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:50.093409  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:50.093503  438716 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa Username:docker}
	I0819 19:12:50.093532  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:50.093650  438716 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa Username:docker}
	I0819 19:12:50.177322  438716 ssh_runner.go:195] Run: systemctl --version
	I0819 19:12:50.200999  438716 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 19:12:50.349276  438716 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 19:12:50.357011  438716 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 19:12:50.357090  438716 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 19:12:50.377691  438716 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 19:12:50.377721  438716 start.go:495] detecting cgroup driver to use...
	I0819 19:12:50.377790  438716 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 19:12:50.394502  438716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 19:12:50.408481  438716 docker.go:217] disabling cri-docker service (if available) ...
	I0819 19:12:50.408556  438716 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 19:12:50.421818  438716 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 19:12:50.434899  438716 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 19:12:50.559399  438716 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 19:12:50.708621  438716 docker.go:233] disabling docker service ...
	I0819 19:12:50.708695  438716 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 19:12:50.726699  438716 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 19:12:50.740605  438716 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 19:12:50.896815  438716 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 19:12:51.037560  438716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 19:12:51.052554  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 19:12:51.072292  438716 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0819 19:12:51.072360  438716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:51.083248  438716 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 19:12:51.083334  438716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:51.093721  438716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:51.105212  438716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:51.119349  438716 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 19:12:51.134647  438716 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 19:12:51.144553  438716 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 19:12:51.144598  438716 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 19:12:51.159151  438716 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 19:12:51.171260  438716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:12:51.328931  438716 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 19:12:51.500761  438716 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 19:12:51.500831  438716 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 19:12:51.505982  438716 start.go:563] Will wait 60s for crictl version
	I0819 19:12:51.506057  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:51.510447  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 19:12:51.552892  438716 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 19:12:51.552982  438716 ssh_runner.go:195] Run: crio --version
	I0819 19:12:51.581931  438716 ssh_runner.go:195] Run: crio --version
	I0819 19:12:51.614565  438716 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0819 19:12:50.110718  438001 main.go:141] libmachine: (no-preload-278232) Calling .Start
	I0819 19:12:50.110888  438001 main.go:141] libmachine: (no-preload-278232) Ensuring networks are active...
	I0819 19:12:50.111809  438001 main.go:141] libmachine: (no-preload-278232) Ensuring network default is active
	I0819 19:12:50.112149  438001 main.go:141] libmachine: (no-preload-278232) Ensuring network mk-no-preload-278232 is active
	I0819 19:12:50.112709  438001 main.go:141] libmachine: (no-preload-278232) Getting domain xml...
	I0819 19:12:50.113441  438001 main.go:141] libmachine: (no-preload-278232) Creating domain...
	I0819 19:12:51.494803  438001 main.go:141] libmachine: (no-preload-278232) Waiting to get IP...
	I0819 19:12:51.495733  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:51.496203  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:51.496302  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:51.496187  439925 retry.go:31] will retry after 190.334257ms: waiting for machine to come up
	I0819 19:12:48.694017  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:50.694533  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:51.102764  438295 node_ready.go:49] node "embed-certs-024748" has status "Ready":"True"
	I0819 19:12:51.102791  438295 node_ready.go:38] duration metric: took 7.004204889s for node "embed-certs-024748" to be "Ready" ...
	I0819 19:12:51.102814  438295 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:12:51.109122  438295 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-7ww4z" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.114649  438295 pod_ready.go:93] pod "coredns-6f6b679f8f-7ww4z" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:51.114679  438295 pod_ready.go:82] duration metric: took 5.529339ms for pod "coredns-6f6b679f8f-7ww4z" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.114692  438295 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.121699  438295 pod_ready.go:93] pod "etcd-embed-certs-024748" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:51.121729  438295 pod_ready.go:82] duration metric: took 7.027906ms for pod "etcd-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.121742  438295 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.129040  438295 pod_ready.go:93] pod "kube-apiserver-embed-certs-024748" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:51.129066  438295 pod_ready.go:82] duration metric: took 7.315166ms for pod "kube-apiserver-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.129078  438295 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.636173  438295 pod_ready.go:93] pod "kube-controller-manager-embed-certs-024748" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:51.636226  438295 pod_ready.go:82] duration metric: took 507.130455ms for pod "kube-controller-manager-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.636243  438295 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bmmbh" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.904734  438295 pod_ready.go:93] pod "kube-proxy-bmmbh" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:51.904776  438295 pod_ready.go:82] duration metric: took 268.522999ms for pod "kube-proxy-bmmbh" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.904806  438295 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:53.911857  438295 pod_ready.go:103] pod "kube-scheduler-embed-certs-024748" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:51.615865  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetIP
	I0819 19:12:51.618782  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:51.619238  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:51.619268  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:51.619508  438716 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0819 19:12:51.624020  438716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:12:51.640765  438716 kubeadm.go:883] updating cluster {Name:old-k8s-version-104669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-104669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 19:12:51.640905  438716 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 19:12:51.640982  438716 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:12:51.696872  438716 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 19:12:51.696931  438716 ssh_runner.go:195] Run: which lz4
	I0819 19:12:51.702194  438716 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 19:12:51.707228  438716 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 19:12:51.707265  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0819 19:12:53.435062  438716 crio.go:462] duration metric: took 1.732918912s to copy over tarball
	I0819 19:12:53.435149  438716 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 19:12:51.688680  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:51.689287  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:51.689326  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:51.689222  439925 retry.go:31] will retry after 351.943478ms: waiting for machine to come up
	I0819 19:12:52.042810  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:52.043142  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:52.043163  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:52.043070  439925 retry.go:31] will retry after 332.731922ms: waiting for machine to come up
	I0819 19:12:52.377750  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:52.378418  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:52.378442  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:52.378377  439925 retry.go:31] will retry after 601.079013ms: waiting for machine to come up
	I0819 19:12:52.980930  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:52.981446  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:52.981474  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:52.981396  439925 retry.go:31] will retry after 621.686612ms: waiting for machine to come up
	I0819 19:12:53.605240  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:53.605716  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:53.605751  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:53.605666  439925 retry.go:31] will retry after 627.115747ms: waiting for machine to come up
	I0819 19:12:54.234095  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:54.234590  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:54.234613  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:54.234541  439925 retry.go:31] will retry after 1.137953362s: waiting for machine to come up
	I0819 19:12:55.373941  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:55.374412  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:55.374440  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:55.374368  439925 retry.go:31] will retry after 1.437610965s: waiting for machine to come up
	I0819 19:12:52.696277  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:54.704463  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:57.195001  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:55.412162  438295 pod_ready.go:93] pod "kube-scheduler-embed-certs-024748" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:55.412198  438295 pod_ready.go:82] duration metric: took 3.507380249s for pod "kube-scheduler-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:55.412214  438295 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:57.419600  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:56.399941  438716 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.96472478s)
	I0819 19:12:56.399971  438716 crio.go:469] duration metric: took 2.964877539s to extract the tarball
	I0819 19:12:56.399986  438716 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 19:12:56.447075  438716 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:12:56.491773  438716 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 19:12:56.491800  438716 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 19:12:56.491876  438716 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:12:56.491876  438716 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:12:56.491956  438716 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:12:56.491961  438716 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:12:56.492041  438716 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:12:56.492059  438716 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0819 19:12:56.492280  438716 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0819 19:12:56.492494  438716 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0819 19:12:56.493750  438716 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:12:56.493762  438716 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0819 19:12:56.493756  438716 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:12:56.493762  438716 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:12:56.493765  438716 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:12:56.493831  438716 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:12:56.493806  438716 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0819 19:12:56.494099  438716 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0819 19:12:56.694872  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0819 19:12:56.711504  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0819 19:12:56.754045  438716 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0819 19:12:56.754096  438716 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0819 19:12:56.754136  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:56.770451  438716 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0819 19:12:56.770510  438716 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0819 19:12:56.770574  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:56.770573  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 19:12:56.804839  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 19:12:56.804872  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 19:12:56.825837  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:12:56.832063  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0819 19:12:56.834072  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:12:56.837029  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:12:56.837697  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:12:56.902843  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 19:12:56.902930  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 19:12:57.020902  438716 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0819 19:12:57.020962  438716 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:12:57.020988  438716 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0819 19:12:57.021017  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:57.021025  438716 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0819 19:12:57.021098  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:57.023363  438716 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0819 19:12:57.023411  438716 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:12:57.023457  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:57.023541  438716 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0819 19:12:57.023569  438716 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:12:57.023605  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:57.034648  438716 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0819 19:12:57.034698  438716 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:12:57.034719  438716 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0819 19:12:57.034748  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:57.039577  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 19:12:57.039648  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 19:12:57.039715  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:12:57.041644  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:12:57.041983  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:12:57.045383  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:12:57.149677  438716 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0819 19:12:57.164701  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:12:57.164821  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 19:12:57.202353  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:12:57.202434  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:12:57.202465  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:12:57.258824  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 19:12:57.258858  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:12:57.285756  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:12:57.326148  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:12:57.326237  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:12:57.378322  438716 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0819 19:12:57.378369  438716 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0819 19:12:57.390369  438716 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0819 19:12:57.419554  438716 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0819 19:12:57.419627  438716 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0819 19:12:57.438485  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:12:57.583634  438716 cache_images.go:92] duration metric: took 1.091812972s to LoadCachedImages
	W0819 19:12:57.583757  438716 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0819 19:12:57.583777  438716 kubeadm.go:934] updating node { 192.168.50.32 8443 v1.20.0 crio true true} ...
	I0819 19:12:57.583915  438716 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-104669 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-104669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 19:12:57.584007  438716 ssh_runner.go:195] Run: crio config
	I0819 19:12:57.636714  438716 cni.go:84] Creating CNI manager for ""
	I0819 19:12:57.636738  438716 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:12:57.636752  438716 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 19:12:57.636776  438716 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.32 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-104669 NodeName:old-k8s-version-104669 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0819 19:12:57.636951  438716 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.32
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-104669"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 19:12:57.637028  438716 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0819 19:12:57.648002  438716 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 19:12:57.648093  438716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 19:12:57.658889  438716 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0819 19:12:57.677316  438716 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 19:12:57.695825  438716 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0819 19:12:57.715396  438716 ssh_runner.go:195] Run: grep 192.168.50.32	control-plane.minikube.internal$ /etc/hosts
	I0819 19:12:57.719886  438716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:12:57.733179  438716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:12:57.854139  438716 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:12:57.871590  438716 certs.go:68] Setting up /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669 for IP: 192.168.50.32
	I0819 19:12:57.871619  438716 certs.go:194] generating shared ca certs ...
	I0819 19:12:57.871642  438716 certs.go:226] acquiring lock for ca certs: {Name:mk639e03f593e0bccac045f6e9f5ba3b96cc81e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:57.871850  438716 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key
	I0819 19:12:57.871916  438716 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key
	I0819 19:12:57.871930  438716 certs.go:256] generating profile certs ...
	I0819 19:12:57.872060  438716 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/client.key
	I0819 19:12:57.872131  438716 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/apiserver.key.7101f8a0
	I0819 19:12:57.872197  438716 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/proxy-client.key
	I0819 19:12:57.872336  438716 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem (1338 bytes)
	W0819 19:12:57.872365  438716 certs.go:480] ignoring /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009_empty.pem, impossibly tiny 0 bytes
	I0819 19:12:57.872371  438716 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 19:12:57.872390  438716 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem (1082 bytes)
	I0819 19:12:57.872419  438716 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem (1123 bytes)
	I0819 19:12:57.872441  438716 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem (1675 bytes)
	I0819 19:12:57.872488  438716 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:12:57.873259  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 19:12:57.907576  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 19:12:57.943535  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 19:12:57.977770  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 19:12:58.021213  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0819 19:12:58.051043  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 19:12:58.080442  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 19:12:58.110888  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 19:12:58.158635  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 19:12:58.184168  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem --> /usr/share/ca-certificates/380009.pem (1338 bytes)
	I0819 19:12:58.210064  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /usr/share/ca-certificates/3800092.pem (1708 bytes)
	I0819 19:12:58.235366  438716 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 19:12:58.254667  438716 ssh_runner.go:195] Run: openssl version
	I0819 19:12:58.260977  438716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3800092.pem && ln -fs /usr/share/ca-certificates/3800092.pem /etc/ssl/certs/3800092.pem"
	I0819 19:12:58.272995  438716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3800092.pem
	I0819 19:12:58.278056  438716 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:56 /usr/share/ca-certificates/3800092.pem
	I0819 19:12:58.278154  438716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3800092.pem
	I0819 19:12:58.284420  438716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3800092.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 19:12:58.296945  438716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 19:12:58.309288  438716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:58.314695  438716 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:58.314774  438716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:58.321016  438716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 19:12:58.332728  438716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/380009.pem && ln -fs /usr/share/ca-certificates/380009.pem /etc/ssl/certs/380009.pem"
	I0819 19:12:58.344766  438716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/380009.pem
	I0819 19:12:58.349610  438716 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:56 /usr/share/ca-certificates/380009.pem
	I0819 19:12:58.349681  438716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/380009.pem
	I0819 19:12:58.355942  438716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/380009.pem /etc/ssl/certs/51391683.0"
	I0819 19:12:58.368869  438716 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 19:12:58.373681  438716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 19:12:58.380415  438716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 19:12:58.386741  438716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 19:12:58.393362  438716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 19:12:58.399665  438716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 19:12:58.406108  438716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0819 19:12:58.412486  438716 kubeadm.go:392] StartCluster: {Name:old-k8s-version-104669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-104669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:12:58.412606  438716 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 19:12:58.412655  438716 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:12:58.462379  438716 cri.go:89] found id: ""
	I0819 19:12:58.462463  438716 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 19:12:58.474029  438716 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 19:12:58.474054  438716 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 19:12:58.474112  438716 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 19:12:58.485755  438716 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:12:58.486762  438716 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-104669" does not appear in /home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 19:12:58.487464  438716 kubeconfig.go:62] /home/jenkins/minikube-integration/19468-372744/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-104669" cluster setting kubeconfig missing "old-k8s-version-104669" context setting]
	I0819 19:12:58.489361  438716 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/kubeconfig: {Name:mk8e7b4e1bb7da665111d2acd83eb48882c66853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:58.508865  438716 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 19:12:58.520577  438716 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.32
	I0819 19:12:58.520622  438716 kubeadm.go:1160] stopping kube-system containers ...
	I0819 19:12:58.520637  438716 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 19:12:58.520728  438716 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:12:58.561900  438716 cri.go:89] found id: ""
	I0819 19:12:58.561984  438716 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 19:12:58.580483  438716 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:12:58.591734  438716 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:12:58.591754  438716 kubeadm.go:157] found existing configuration files:
	
	I0819 19:12:58.591804  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 19:12:58.601694  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:12:58.601771  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:12:58.612132  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 19:12:58.621911  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:12:58.621984  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:12:58.631525  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 19:12:58.640802  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:12:58.640872  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:12:58.650216  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 19:12:58.660647  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:12:58.660720  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 19:12:58.669992  438716 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 19:12:58.679709  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:58.809302  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:59.757994  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:00.006386  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:00.136752  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:00.222424  438716 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:13:00.222542  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:56.813279  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:56.813777  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:56.813807  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:56.813725  439925 retry.go:31] will retry after 1.504132921s: waiting for machine to come up
	I0819 19:12:58.319408  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:58.319880  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:58.319910  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:58.319832  439925 retry.go:31] will retry after 1.921699926s: waiting for machine to come up
	I0819 19:13:00.243504  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:00.243995  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:13:00.244021  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:13:00.243952  439925 retry.go:31] will retry after 2.040704792s: waiting for machine to come up
	I0819 19:12:59.195084  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:01.693648  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:59.419644  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:01.918769  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:00.723213  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:01.222908  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:01.723081  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:02.223465  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:02.722589  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:03.222706  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:03.722930  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:04.222826  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:04.722638  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:05.222666  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:02.287044  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:02.287490  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:13:02.287526  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:13:02.287416  439925 retry.go:31] will retry after 2.562055052s: waiting for machine to come up
	I0819 19:13:04.852682  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:04.853097  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:13:04.853125  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:13:04.853062  439925 retry.go:31] will retry after 3.627213972s: waiting for machine to come up
	I0819 19:13:04.194149  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:06.194831  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:04.418550  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:06.919083  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:05.723627  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:06.222663  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:06.723230  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:07.222666  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:07.722653  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:08.222861  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:08.723248  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:09.222831  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:09.722738  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:10.223069  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:08.484125  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.484586  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has current primary IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.484612  438001 main.go:141] libmachine: (no-preload-278232) Found IP for machine: 192.168.39.106
	I0819 19:13:08.484642  438001 main.go:141] libmachine: (no-preload-278232) Reserving static IP address...
	I0819 19:13:08.485049  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "no-preload-278232", mac: "52:54:00:14:f3:b1", ip: "192.168.39.106"} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:08.485091  438001 main.go:141] libmachine: (no-preload-278232) Reserved static IP address: 192.168.39.106
	I0819 19:13:08.485112  438001 main.go:141] libmachine: (no-preload-278232) DBG | skip adding static IP to network mk-no-preload-278232 - found existing host DHCP lease matching {name: "no-preload-278232", mac: "52:54:00:14:f3:b1", ip: "192.168.39.106"}
	I0819 19:13:08.485129  438001 main.go:141] libmachine: (no-preload-278232) DBG | Getting to WaitForSSH function...
	I0819 19:13:08.485145  438001 main.go:141] libmachine: (no-preload-278232) Waiting for SSH to be available...
	I0819 19:13:08.486998  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.487266  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:08.487290  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.487402  438001 main.go:141] libmachine: (no-preload-278232) DBG | Using SSH client type: external
	I0819 19:13:08.487429  438001 main.go:141] libmachine: (no-preload-278232) DBG | Using SSH private key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa (-rw-------)
	I0819 19:13:08.487463  438001 main.go:141] libmachine: (no-preload-278232) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.106 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 19:13:08.487476  438001 main.go:141] libmachine: (no-preload-278232) DBG | About to run SSH command:
	I0819 19:13:08.487487  438001 main.go:141] libmachine: (no-preload-278232) DBG | exit 0
	I0819 19:13:08.611459  438001 main.go:141] libmachine: (no-preload-278232) DBG | SSH cmd err, output: <nil>: 
	I0819 19:13:08.611934  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetConfigRaw
	I0819 19:13:08.612610  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetIP
	I0819 19:13:08.615212  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.615564  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:08.615594  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.615919  438001 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232/config.json ...
	I0819 19:13:08.616140  438001 machine.go:93] provisionDockerMachine start ...
	I0819 19:13:08.616162  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:08.616387  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:08.618650  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.618956  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:08.618988  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.619098  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:08.619291  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:08.619433  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:08.619569  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:08.619727  438001 main.go:141] libmachine: Using SSH client type: native
	I0819 19:13:08.619893  438001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0819 19:13:08.619903  438001 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 19:13:08.724912  438001 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 19:13:08.724955  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetMachineName
	I0819 19:13:08.725264  438001 buildroot.go:166] provisioning hostname "no-preload-278232"
	I0819 19:13:08.725291  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetMachineName
	I0819 19:13:08.725486  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:08.728810  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.729237  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:08.729274  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.729434  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:08.729667  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:08.729887  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:08.730067  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:08.730244  438001 main.go:141] libmachine: Using SSH client type: native
	I0819 19:13:08.730490  438001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0819 19:13:08.730511  438001 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-278232 && echo "no-preload-278232" | sudo tee /etc/hostname
	I0819 19:13:08.854474  438001 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-278232
	
	I0819 19:13:08.854499  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:08.857179  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.857511  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:08.857540  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.857713  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:08.857912  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:08.858075  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:08.858189  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:08.858356  438001 main.go:141] libmachine: Using SSH client type: native
	I0819 19:13:08.858556  438001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0819 19:13:08.858579  438001 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-278232' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-278232/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-278232' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 19:13:08.973053  438001 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:13:08.973090  438001 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19468-372744/.minikube CaCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19468-372744/.minikube}
	I0819 19:13:08.973115  438001 buildroot.go:174] setting up certificates
	I0819 19:13:08.973125  438001 provision.go:84] configureAuth start
	I0819 19:13:08.973135  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetMachineName
	I0819 19:13:08.973417  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetIP
	I0819 19:13:08.976100  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.976459  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:08.976487  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.976690  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:08.978902  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.979342  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:08.979370  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.979530  438001 provision.go:143] copyHostCerts
	I0819 19:13:08.979605  438001 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem, removing ...
	I0819 19:13:08.979628  438001 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 19:13:08.979717  438001 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem (1082 bytes)
	I0819 19:13:08.979830  438001 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem, removing ...
	I0819 19:13:08.979842  438001 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 19:13:08.979874  438001 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem (1123 bytes)
	I0819 19:13:08.979963  438001 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem, removing ...
	I0819 19:13:08.979974  438001 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 19:13:08.980002  438001 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem (1675 bytes)
	I0819 19:13:08.980075  438001 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem org=jenkins.no-preload-278232 san=[127.0.0.1 192.168.39.106 localhost minikube no-preload-278232]
	I0819 19:13:09.092643  438001 provision.go:177] copyRemoteCerts
	I0819 19:13:09.092707  438001 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 19:13:09.092739  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:09.095542  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.095929  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:09.095960  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.096099  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:09.096318  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:09.096481  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:09.096635  438001 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa Username:docker}
	I0819 19:13:09.179713  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 19:13:09.206363  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0819 19:13:09.231180  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 19:13:09.256764  438001 provision.go:87] duration metric: took 283.626537ms to configureAuth
	I0819 19:13:09.256810  438001 buildroot.go:189] setting minikube options for container-runtime
	I0819 19:13:09.256993  438001 config.go:182] Loaded profile config "no-preload-278232": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:13:09.257079  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:09.259661  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.260061  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:09.260094  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.260253  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:09.260461  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:09.260640  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:09.260796  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:09.260973  438001 main.go:141] libmachine: Using SSH client type: native
	I0819 19:13:09.261150  438001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0819 19:13:09.261166  438001 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 19:13:09.534325  438001 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 19:13:09.534357  438001 machine.go:96] duration metric: took 918.201944ms to provisionDockerMachine
	I0819 19:13:09.534371  438001 start.go:293] postStartSetup for "no-preload-278232" (driver="kvm2")
	I0819 19:13:09.534387  438001 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 19:13:09.534412  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:09.534794  438001 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 19:13:09.534826  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:09.537623  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.537974  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:09.538002  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.538138  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:09.538349  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:09.538534  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:09.538669  438001 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa Username:docker}
	I0819 19:13:09.627085  438001 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 19:13:09.631714  438001 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 19:13:09.631740  438001 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/addons for local assets ...
	I0819 19:13:09.631817  438001 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/files for local assets ...
	I0819 19:13:09.631911  438001 filesync.go:149] local asset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> 3800092.pem in /etc/ssl/certs
	I0819 19:13:09.632035  438001 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 19:13:09.642942  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:13:09.669242  438001 start.go:296] duration metric: took 134.853886ms for postStartSetup
	I0819 19:13:09.669294  438001 fix.go:56] duration metric: took 19.584399031s for fixHost
	I0819 19:13:09.669325  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:09.672072  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.672461  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:09.672494  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.672635  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:09.672937  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:09.673116  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:09.673331  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:09.673517  438001 main.go:141] libmachine: Using SSH client type: native
	I0819 19:13:09.673699  438001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0819 19:13:09.673717  438001 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 19:13:09.780601  438001 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724094789.749951838
	
	I0819 19:13:09.780628  438001 fix.go:216] guest clock: 1724094789.749951838
	I0819 19:13:09.780640  438001 fix.go:229] Guest: 2024-08-19 19:13:09.749951838 +0000 UTC Remote: 2024-08-19 19:13:09.669301343 +0000 UTC m=+358.073543000 (delta=80.650495ms)
	I0819 19:13:09.780668  438001 fix.go:200] guest clock delta is within tolerance: 80.650495ms
	I0819 19:13:09.780676  438001 start.go:83] releasing machines lock for "no-preload-278232", held for 19.69582363s
	I0819 19:13:09.780703  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:09.781042  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetIP
	I0819 19:13:09.783578  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.783967  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:09.783996  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.784149  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:09.784649  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:09.784855  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:09.784946  438001 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 19:13:09.785037  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:09.785073  438001 ssh_runner.go:195] Run: cat /version.json
	I0819 19:13:09.785107  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:09.787346  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.787706  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.787763  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:09.787788  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.787977  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:09.788162  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:09.788226  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:09.788251  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.788327  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:09.788447  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:09.788500  438001 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa Username:docker}
	I0819 19:13:09.788622  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:09.788805  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:09.788994  438001 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa Username:docker}
	I0819 19:13:09.864596  438001 ssh_runner.go:195] Run: systemctl --version
	I0819 19:13:09.890038  438001 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 19:13:10.039016  438001 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 19:13:10.045269  438001 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 19:13:10.045352  438001 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 19:13:10.061345  438001 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 19:13:10.061380  438001 start.go:495] detecting cgroup driver to use...
	I0819 19:13:10.061467  438001 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 19:13:10.079229  438001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 19:13:10.094396  438001 docker.go:217] disabling cri-docker service (if available) ...
	I0819 19:13:10.094471  438001 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 19:13:10.109307  438001 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 19:13:10.123389  438001 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 19:13:10.241132  438001 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 19:13:10.395346  438001 docker.go:233] disabling docker service ...
	I0819 19:13:10.395444  438001 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 19:13:10.409604  438001 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 19:13:10.424149  438001 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 19:13:10.544180  438001 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 19:13:10.671038  438001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 19:13:10.685563  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 19:13:10.704754  438001 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 19:13:10.704819  438001 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:13:10.716002  438001 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 19:13:10.716077  438001 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:13:10.728085  438001 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:13:10.739292  438001 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:13:10.750083  438001 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 19:13:10.760832  438001 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:13:10.771231  438001 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:13:10.788807  438001 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:13:10.799472  438001 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 19:13:10.809354  438001 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 19:13:10.809432  438001 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 19:13:10.824339  438001 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 19:13:10.833761  438001 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:13:10.953587  438001 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 19:13:11.091264  438001 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 19:13:11.091336  438001 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 19:13:11.096092  438001 start.go:563] Will wait 60s for crictl version
	I0819 19:13:11.096161  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:13:11.100040  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 19:13:11.142512  438001 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 19:13:11.142612  438001 ssh_runner.go:195] Run: crio --version
	I0819 19:13:11.176967  438001 ssh_runner.go:195] Run: crio --version
	I0819 19:13:11.208687  438001 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 19:13:11.209819  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetIP
	I0819 19:13:11.212533  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:11.212876  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:11.212900  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:11.213098  438001 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 19:13:11.217234  438001 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:13:11.229995  438001 kubeadm.go:883] updating cluster {Name:no-preload-278232 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-278232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.106 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 19:13:11.230124  438001 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 19:13:11.230168  438001 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:13:11.265699  438001 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 19:13:11.265730  438001 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 19:13:11.265816  438001 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 19:13:11.265836  438001 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 19:13:11.265843  438001 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0819 19:13:11.265816  438001 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:13:11.265875  438001 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 19:13:11.265941  438001 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0819 19:13:11.265955  438001 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 19:13:11.266027  438001 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 19:13:11.267344  438001 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0819 19:13:11.267364  438001 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 19:13:11.267344  438001 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 19:13:11.267408  438001 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0819 19:13:11.267349  438001 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 19:13:11.267445  438001 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 19:13:11.267408  438001 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 19:13:11.267407  438001 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:13:11.411117  438001 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0819 19:13:11.435022  438001 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0819 19:13:11.437707  438001 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0819 19:13:11.439226  438001 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 19:13:11.446384  438001 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0819 19:13:11.448011  438001 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0819 19:13:11.463921  438001 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0819 19:13:11.476902  438001 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0819 19:13:11.476956  438001 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 19:13:11.477011  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:13:11.561762  438001 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0819 19:13:11.561827  438001 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 19:13:11.561889  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:13:08.694513  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:11.193505  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:09.419409  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:11.919413  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:13.931174  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:10.722882  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:11.223650  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:11.722917  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:12.223146  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:12.723410  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:13.222692  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:13.722636  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:14.223152  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:14.722661  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:15.223297  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:11.657022  438001 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0819 19:13:11.657071  438001 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0819 19:13:11.657092  438001 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0819 19:13:11.657123  438001 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 19:13:11.657127  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:13:11.657164  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:13:11.657176  438001 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0819 19:13:11.657195  438001 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0819 19:13:11.657217  438001 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 19:13:11.657216  438001 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 19:13:11.657254  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:13:11.657260  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:13:11.729671  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0819 19:13:11.729903  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0819 19:13:11.730476  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 19:13:11.730489  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0819 19:13:11.730510  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0819 19:13:11.730544  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0819 19:13:11.853411  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0819 19:13:11.853647  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0819 19:13:11.872296  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0819 19:13:11.872370  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0819 19:13:11.876801  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 19:13:11.877002  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0819 19:13:11.982642  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0819 19:13:12.007940  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0819 19:13:12.031132  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0819 19:13:12.031150  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0819 19:13:12.031163  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 19:13:12.031275  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0819 19:13:12.130991  438001 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0819 19:13:12.131099  438001 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0819 19:13:12.130994  438001 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0819 19:13:12.131231  438001 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0819 19:13:12.162852  438001 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0819 19:13:12.162911  438001 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0819 19:13:12.162916  438001 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0819 19:13:12.162967  438001 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0819 19:13:12.162984  438001 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0819 19:13:12.162984  438001 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0819 19:13:12.163035  438001 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0819 19:13:12.163044  438001 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0819 19:13:12.163053  438001 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0819 19:13:12.163055  438001 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0819 19:13:12.163086  438001 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0819 19:13:12.163095  438001 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0819 19:13:12.177377  438001 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0819 19:13:12.177438  438001 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0819 19:13:12.177438  438001 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0819 19:13:12.229301  438001 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:13:14.745129  438001 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.582015913s)
	I0819 19:13:14.745162  438001 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0819 19:13:14.745196  438001 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0: (2.582131532s)
	I0819 19:13:14.745215  438001 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.515891614s)
	I0819 19:13:14.745232  438001 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0819 19:13:14.745200  438001 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0819 19:13:14.745247  438001 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0819 19:13:14.745285  438001 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:13:14.745298  438001 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0819 19:13:14.745325  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:13:13.693752  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:15.693871  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:16.419552  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:18.920189  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:15.723053  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:16.223486  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:16.722740  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:17.223337  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:17.723160  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:18.222651  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:18.723509  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:19.223686  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:19.723376  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:20.222953  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:16.728557  438001 ssh_runner.go:235] Completed: which crictl: (1.983204878s)
	I0819 19:13:16.728614  438001 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.983294709s)
	I0819 19:13:16.728635  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:13:16.728642  438001 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0819 19:13:16.728673  438001 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0819 19:13:16.728714  438001 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0819 19:13:16.771574  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:13:20.532388  438001 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.760772797s)
	I0819 19:13:20.532421  438001 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.80368813s)
	I0819 19:13:20.532437  438001 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0819 19:13:20.532469  438001 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0819 19:13:20.532480  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:13:20.532500  438001 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0819 19:13:18.193852  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:20.692752  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:21.419154  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:23.419271  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:20.723620  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:21.223286  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:21.723663  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:22.223594  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:22.723415  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:23.223643  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:23.723395  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:24.223476  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:24.723236  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:25.223620  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:22.500967  438001 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.968455152s)
	I0819 19:13:22.501030  438001 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0819 19:13:22.501036  438001 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.968509024s)
	I0819 19:13:22.501068  438001 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0819 19:13:22.501108  438001 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0819 19:13:22.501138  438001 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0819 19:13:22.501175  438001 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0819 19:13:22.506796  438001 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0819 19:13:23.962797  438001 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.461519717s)
	I0819 19:13:23.962838  438001 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0819 19:13:23.962876  438001 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0819 19:13:23.962959  438001 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0819 19:13:25.927805  438001 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.964816993s)
	I0819 19:13:25.927836  438001 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0819 19:13:25.927868  438001 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0819 19:13:25.927922  438001 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0819 19:13:26.572310  438001 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0819 19:13:26.572368  438001 cache_images.go:123] Successfully loaded all cached images
	I0819 19:13:26.572376  438001 cache_images.go:92] duration metric: took 15.306632126s to LoadCachedImages
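The LoadCachedImages phase above follows one pattern per image: inspect the runtime for the expected image ID, remove the stale tag with crictl when the hash does not match, skip the tarball copy when it already exists on the node, and finally podman load it into CRI-O. Below is a minimal sketch of that flow; it shells out the same way the log does, but the runCmd helper and the hard-coded example values are illustrative assumptions, not minikube's actual API.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runCmd is a stand-in for minikube's ssh_runner: it runs a command and
// returns trimmed combined output (assumption: run locally for simplicity).
func runCmd(name string, args ...string) (string, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

// ensureImage mirrors the pattern in the log: only load from the cached
// tarball when the runtime does not already hold the expected image ID.
func ensureImage(image, wantID, tarball string) error {
	gotID, _ := runCmd("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image)
	if gotID == wantID {
		return nil // already present, nothing to transfer
	}
	// Remove any stale tag so the freshly loaded image wins.
	_, _ = runCmd("sudo", "/usr/bin/crictl", "rmi", image)
	// Load the cached tarball (previously copied to /var/lib/minikube/images).
	if _, err := runCmd("sudo", "podman", "load", "-i", tarball); err != nil {
		return fmt.Errorf("loading %s: %w", image, err)
	}
	return nil
}

func main() {
	// Illustrative values taken from the log above.
	err := ensureImage(
		"registry.k8s.io/kube-apiserver:v1.31.0",
		"604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
		"/var/lib/minikube/images/kube-apiserver_v1.31.0",
	)
	fmt.Println("result:", err)
}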
	I0819 19:13:26.572397  438001 kubeadm.go:934] updating node { 192.168.39.106 8443 v1.31.0 crio true true} ...
	I0819 19:13:26.572549  438001 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-278232 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.106
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-278232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 19:13:26.572635  438001 ssh_runner.go:195] Run: crio config
	I0819 19:13:26.623839  438001 cni.go:84] Creating CNI manager for ""
	I0819 19:13:26.623862  438001 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:13:26.623872  438001 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 19:13:26.623896  438001 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.106 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-278232 NodeName:no-preload-278232 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.106"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.106 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 19:13:26.624138  438001 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.106
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-278232"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.106
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.106"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 19:13:26.624226  438001 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
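The kubeadm/kubelet/kube-proxy YAML printed above is derived from a handful of per-cluster values (advertise address 192.168.39.106, node name no-preload-278232, pod CIDR 10.244.0.0/16, Kubernetes v1.31.0) and is then written to /var/tmp/minikube/kubeadm.yaml.new for comparison against the live copy. The following is only a rough sketch of that templating step, with an abbreviated template and a hypothetical renderKubeadmConfig helper, not minikube's real generator.

package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// nodeParams holds the values that vary per cluster; field names are
// illustrative, not minikube's actual struct.
type nodeParams struct {
	AdvertiseAddress string
	NodeName         string
	PodSubnet        string
	ServiceSubnet    string
	K8sVersion       string
}

// An abbreviated version of the InitConfiguration/ClusterConfiguration
// documents that appear verbatim in the log above.
const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: 8443
nodeRegistration:
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func renderKubeadmConfig(p nodeParams) (string, error) {
	t, err := template.New("kubeadm").Parse(kubeadmTmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, p); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	cfg, err := renderKubeadmConfig(nodeParams{
		AdvertiseAddress: "192.168.39.106",
		NodeName:         "no-preload-278232",
		PodSubnet:        "10.244.0.0/16",
		ServiceSubnet:    "10.96.0.0/12",
		K8sVersion:       "v1.31.0",
	})
	if err != nil {
		panic(err)
	}
	fmt.Print(cfg)
}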
	I0819 19:13:22.693093  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:24.694313  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:26.695312  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:25.918793  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:27.919721  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:25.722593  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:26.223582  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:26.722927  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:27.223364  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:27.723223  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:28.223458  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:28.723262  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:29.222823  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:29.722837  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:30.223196  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:26.634770  438001 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 19:13:26.634844  438001 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 19:13:26.644193  438001 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0819 19:13:26.661226  438001 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 19:13:26.677413  438001 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0819 19:13:26.696260  438001 ssh_runner.go:195] Run: grep 192.168.39.106	control-plane.minikube.internal$ /etc/hosts
	I0819 19:13:26.700029  438001 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.106	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
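The one-liner above keeps /etc/hosts idempotent: it filters out any existing control-plane.minikube.internal entry, appends the current IP, and copies the temp file back over the original. The same logic expressed in Go, as a sketch only (the file path in main and the helper name are illustrative):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites a hosts file so that exactly one line maps
// hostname to ip, mirroring the grep -v / echo / cp pipeline in the log.
func ensureHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop any previous mapping for this hostname (tab-separated, at end of line).
		if strings.HasSuffix(line, "\t"+hostname) {
			continue
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Illustrative: operate on a scratch copy rather than the real /etc/hosts.
	err := ensureHostsEntry("/tmp/hosts.test", "192.168.39.106", "control-plane.minikube.internal")
	fmt.Println("ensureHostsEntry:", err)
}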
	I0819 19:13:26.711667  438001 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:13:26.849658  438001 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:13:26.867185  438001 certs.go:68] Setting up /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232 for IP: 192.168.39.106
	I0819 19:13:26.867216  438001 certs.go:194] generating shared ca certs ...
	I0819 19:13:26.867240  438001 certs.go:226] acquiring lock for ca certs: {Name:mk639e03f593e0bccac045f6e9f5ba3b96cc81e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:13:26.867431  438001 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key
	I0819 19:13:26.867489  438001 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key
	I0819 19:13:26.867502  438001 certs.go:256] generating profile certs ...
	I0819 19:13:26.867600  438001 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232/client.key
	I0819 19:13:26.867705  438001 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232/apiserver.key.4086521c
	I0819 19:13:26.867759  438001 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232/proxy-client.key
	I0819 19:13:26.867936  438001 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem (1338 bytes)
	W0819 19:13:26.867980  438001 certs.go:480] ignoring /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009_empty.pem, impossibly tiny 0 bytes
	I0819 19:13:26.867995  438001 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 19:13:26.868037  438001 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem (1082 bytes)
	I0819 19:13:26.868075  438001 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem (1123 bytes)
	I0819 19:13:26.868107  438001 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem (1675 bytes)
	I0819 19:13:26.868171  438001 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:13:26.869217  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 19:13:26.903250  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 19:13:26.928593  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 19:13:26.957098  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 19:13:26.982422  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 19:13:27.009252  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 19:13:27.038043  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 19:13:27.075400  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 19:13:27.101568  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /usr/share/ca-certificates/3800092.pem (1708 bytes)
	I0819 19:13:27.127162  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 19:13:27.152327  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem --> /usr/share/ca-certificates/380009.pem (1338 bytes)
	I0819 19:13:27.176207  438001 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 19:13:27.194919  438001 ssh_runner.go:195] Run: openssl version
	I0819 19:13:27.201002  438001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3800092.pem && ln -fs /usr/share/ca-certificates/3800092.pem /etc/ssl/certs/3800092.pem"
	I0819 19:13:27.212050  438001 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3800092.pem
	I0819 19:13:27.216607  438001 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:56 /usr/share/ca-certificates/3800092.pem
	I0819 19:13:27.216663  438001 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3800092.pem
	I0819 19:13:27.222437  438001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3800092.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 19:13:27.234112  438001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 19:13:27.245472  438001 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:13:27.250203  438001 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:13:27.250257  438001 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:13:27.256045  438001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 19:13:27.266746  438001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/380009.pem && ln -fs /usr/share/ca-certificates/380009.pem /etc/ssl/certs/380009.pem"
	I0819 19:13:27.277316  438001 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/380009.pem
	I0819 19:13:27.281660  438001 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:56 /usr/share/ca-certificates/380009.pem
	I0819 19:13:27.281721  438001 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/380009.pem
	I0819 19:13:27.287223  438001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/380009.pem /etc/ssl/certs/51391683.0"
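Each CA certificate copied to /usr/share/ca-certificates above is also linked into /etc/ssl/certs under its OpenSSL subject hash (for example minikubeCA.pem → b5213941.0), which is how OpenSSL-based clients find trusted CAs in a hashed directory. A small sketch of computing the hash and creating the link follows; the helper names are assumptions, and minikube actually runs the equivalent commands over SSH as root.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// subjectHash asks openssl for the subject hash that OpenSSL uses to look
// up CA certificates in a hashed directory such as /etc/ssl/certs.
func subjectHash(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

// linkCACert installs certPath into certsDir under "<hash>.0", as the log
// does with ln -fs.
func linkCACert(certPath, certsDir string) error {
	hash, err := subjectHash(certPath)
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	fmt.Println("linkCACert:", err)
}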
	I0819 19:13:27.299791  438001 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 19:13:27.304470  438001 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 19:13:27.310642  438001 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 19:13:27.316259  438001 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 19:13:27.322248  438001 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 19:13:27.327902  438001 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 19:13:27.333447  438001 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0819 19:13:27.339044  438001 kubeadm.go:392] StartCluster: {Name:no-preload-278232 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-278232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.106 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:13:27.339165  438001 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 19:13:27.339241  438001 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:13:27.378362  438001 cri.go:89] found id: ""
	I0819 19:13:27.378436  438001 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 19:13:27.388560  438001 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 19:13:27.388580  438001 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 19:13:27.388623  438001 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 19:13:27.397834  438001 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:13:27.399336  438001 kubeconfig.go:125] found "no-preload-278232" server: "https://192.168.39.106:8443"
	I0819 19:13:27.402651  438001 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 19:13:27.412108  438001 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.106
	I0819 19:13:27.412155  438001 kubeadm.go:1160] stopping kube-system containers ...
	I0819 19:13:27.412170  438001 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 19:13:27.412230  438001 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:13:27.450332  438001 cri.go:89] found id: ""
	I0819 19:13:27.450431  438001 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 19:13:27.466943  438001 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:13:27.476741  438001 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:13:27.476765  438001 kubeadm.go:157] found existing configuration files:
	
	I0819 19:13:27.476810  438001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 19:13:27.485630  438001 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:13:27.485695  438001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:13:27.495232  438001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 19:13:27.504379  438001 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:13:27.504449  438001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:13:27.513723  438001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 19:13:27.522864  438001 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:13:27.522946  438001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:13:27.532402  438001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 19:13:27.541502  438001 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:13:27.541592  438001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 19:13:27.550934  438001 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
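The block above is the stale-config cleanup: for each kubeconfig under /etc/kubernetes minikube greps for the expected https://control-plane.minikube.internal:8443 endpoint and removes the file when the endpoint is missing (or, as here, when the file does not exist at all), so the subsequent kubeadm init phase ... kubeconfig calls can regenerate them. Roughly, and only as a sketch (the real flow runs these checks over SSH with sudo):

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleKubeconfigs removes any kubeconfig that does not point at the
// expected control-plane endpoint; kubeadm regenerates them afterwards.
func cleanStaleKubeconfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil || !strings.Contains(string(data), endpoint) {
			_ = os.Remove(p) // missing or stale: drop it and let kubeadm recreate it
		}
	}
}

func main() {
	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
	fmt.Println("stale kubeconfigs cleaned")
}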
	I0819 19:13:27.560650  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:27.684890  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:28.534223  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:28.757538  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:28.831313  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:28.897644  438001 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:13:28.897735  438001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:29.398486  438001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:29.898494  438001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:29.924881  438001 api_server.go:72] duration metric: took 1.027247684s to wait for apiserver process to appear ...
	I0819 19:13:29.924918  438001 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:13:29.924944  438001 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I0819 19:13:29.925535  438001 api_server.go:269] stopped: https://192.168.39.106:8443/healthz: Get "https://192.168.39.106:8443/healthz": dial tcp 192.168.39.106:8443: connect: connection refused
	I0819 19:13:30.425624  438001 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I0819 19:13:29.193722  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:31.194540  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:32.406445  438001 api_server.go:279] https://192.168.39.106:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 19:13:32.406476  438001 api_server.go:103] status: https://192.168.39.106:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 19:13:32.406491  438001 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I0819 19:13:32.470160  438001 api_server.go:279] https://192.168.39.106:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 19:13:32.470195  438001 api_server.go:103] status: https://192.168.39.106:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 19:13:32.470211  438001 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I0819 19:13:32.486292  438001 api_server.go:279] https://192.168.39.106:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 19:13:32.486322  438001 api_server.go:103] status: https://192.168.39.106:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 19:13:32.925943  438001 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I0819 19:13:32.933024  438001 api_server.go:279] https://192.168.39.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:13:32.933068  438001 api_server.go:103] status: https://192.168.39.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:13:33.425638  438001 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I0819 19:13:33.431919  438001 api_server.go:279] https://192.168.39.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:13:33.432051  438001 api_server.go:103] status: https://192.168.39.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:13:33.925369  438001 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I0819 19:13:33.930489  438001 api_server.go:279] https://192.168.39.106:8443/healthz returned 200:
	ok
	I0819 19:13:33.937758  438001 api_server.go:141] control plane version: v1.31.0
	I0819 19:13:33.937789  438001 api_server.go:131] duration metric: took 4.012862801s to wait for apiserver health ...
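The healthz wait above cycles through the expected states of a restarting API server: connection refused while the static pod comes up, 403 for the anonymous user until the RBAC bootstrap roles exist, 500 while post-start hooks are still failing, and finally 200 "ok". A bare-bones version of that polling loop is sketched below; the insecure TLS config, the sleep interval, and the timeout value are assumptions for the sketch, not minikube's exact settings.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it answers
// 200 or the deadline passes, tolerating refused connections, 403s and
// 500s along the way, like the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The apiserver cert is signed by minikube's own CA, which is not in
		// the host trust store, so the sketch simply skips verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
		} else {
			fmt.Println("healthz not reachable yet:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	err := waitForHealthz("https://192.168.39.106:8443/healthz", 4*time.Minute)
	fmt.Println("waitForHealthz:", err)
}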
	I0819 19:13:33.937800  438001 cni.go:84] Creating CNI manager for ""
	I0819 19:13:33.937807  438001 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:13:33.939711  438001 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 19:13:30.419241  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:32.419437  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:30.723537  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:31.223437  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:31.723289  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:32.222714  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:32.723037  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:33.223138  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:33.723303  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:34.223334  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:34.722692  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:35.223021  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:33.941055  438001 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 19:13:33.953427  438001 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 19:13:33.982889  438001 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:13:33.998701  438001 system_pods.go:59] 8 kube-system pods found
	I0819 19:13:33.998750  438001 system_pods.go:61] "coredns-6f6b679f8f-22lbt" [c8a5cabd-41d4-41cb-91c1-2db1f3471db3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 19:13:33.998762  438001 system_pods.go:61] "etcd-no-preload-278232" [36d555a1-33e4-4c6c-b24e-2fee4fd84f2b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 19:13:33.998775  438001 system_pods.go:61] "kube-apiserver-no-preload-278232" [af7173e5-c4ac-4ece-b8b9-bb81cb6b9bfd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 19:13:33.998784  438001 system_pods.go:61] "kube-controller-manager-no-preload-278232" [2463d97a-5221-40ce-8fd7-08151165d6f7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 19:13:33.998794  438001 system_pods.go:61] "kube-proxy-rcf49" [85d5814a-1ba9-46be-ab11-17bf40c0f029] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0819 19:13:33.998807  438001 system_pods.go:61] "kube-scheduler-no-preload-278232" [3b327704-f70c-4d6f-a774-15427a305472] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 19:13:33.998819  438001 system_pods.go:61] "metrics-server-6867b74b74-vxwrs" [e8b74128-b393-4f0f-90fe-e05f20d54acd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:13:33.998827  438001 system_pods.go:61] "storage-provisioner" [24766475-1a5b-4f1a-9350-3e891b5272cc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0819 19:13:33.998841  438001 system_pods.go:74] duration metric: took 15.918876ms to wait for pod list to return data ...
	I0819 19:13:33.998853  438001 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:13:34.003102  438001 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:13:34.003131  438001 node_conditions.go:123] node cpu capacity is 2
	I0819 19:13:34.003145  438001 node_conditions.go:105] duration metric: took 4.283682ms to run NodePressure ...
	I0819 19:13:34.003163  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:34.300052  438001 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 19:13:34.304483  438001 kubeadm.go:739] kubelet initialised
	I0819 19:13:34.304505  438001 kubeadm.go:740] duration metric: took 4.421894ms waiting for restarted kubelet to initialise ...
	I0819 19:13:34.304513  438001 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:13:34.310575  438001 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-22lbt" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:34.316040  438001 pod_ready.go:98] node "no-preload-278232" hosting pod "coredns-6f6b679f8f-22lbt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.316068  438001 pod_ready.go:82] duration metric: took 5.462078ms for pod "coredns-6f6b679f8f-22lbt" in "kube-system" namespace to be "Ready" ...
	E0819 19:13:34.316080  438001 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-278232" hosting pod "coredns-6f6b679f8f-22lbt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.316088  438001 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:34.320731  438001 pod_ready.go:98] node "no-preload-278232" hosting pod "etcd-no-preload-278232" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.320751  438001 pod_ready.go:82] duration metric: took 4.649545ms for pod "etcd-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	E0819 19:13:34.320758  438001 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-278232" hosting pod "etcd-no-preload-278232" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.320763  438001 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:34.325499  438001 pod_ready.go:98] node "no-preload-278232" hosting pod "kube-apiserver-no-preload-278232" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.325519  438001 pod_ready.go:82] duration metric: took 4.750861ms for pod "kube-apiserver-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	E0819 19:13:34.325526  438001 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-278232" hosting pod "kube-apiserver-no-preload-278232" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.325531  438001 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:34.388221  438001 pod_ready.go:98] node "no-preload-278232" hosting pod "kube-controller-manager-no-preload-278232" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.388248  438001 pod_ready.go:82] duration metric: took 62.708596ms for pod "kube-controller-manager-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	E0819 19:13:34.388259  438001 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-278232" hosting pod "kube-controller-manager-no-preload-278232" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.388265  438001 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-rcf49" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:34.787164  438001 pod_ready.go:98] node "no-preload-278232" hosting pod "kube-proxy-rcf49" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.787193  438001 pod_ready.go:82] duration metric: took 398.919585ms for pod "kube-proxy-rcf49" in "kube-system" namespace to be "Ready" ...
	E0819 19:13:34.787203  438001 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-278232" hosting pod "kube-proxy-rcf49" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.787210  438001 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:35.186336  438001 pod_ready.go:98] node "no-preload-278232" hosting pod "kube-scheduler-no-preload-278232" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:35.186365  438001 pod_ready.go:82] duration metric: took 399.147858ms for pod "kube-scheduler-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	E0819 19:13:35.186377  438001 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-278232" hosting pod "kube-scheduler-no-preload-278232" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:35.186386  438001 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:35.586266  438001 pod_ready.go:98] node "no-preload-278232" hosting pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:35.586292  438001 pod_ready.go:82] duration metric: took 399.895038ms for pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace to be "Ready" ...
	E0819 19:13:35.586301  438001 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-278232" hosting pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:35.586307  438001 pod_ready.go:39] duration metric: took 1.281785432s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:13:35.586326  438001 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 19:13:35.598523  438001 ops.go:34] apiserver oom_adj: -16
	I0819 19:13:35.598545  438001 kubeadm.go:597] duration metric: took 8.20995933s to restartPrimaryControlPlane
	I0819 19:13:35.598554  438001 kubeadm.go:394] duration metric: took 8.259514907s to StartCluster
	I0819 19:13:35.598576  438001 settings.go:142] acquiring lock: {Name:mk396fcf49a1d0e69583cf37ff3c819e37118163 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:13:35.598662  438001 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 19:13:35.600424  438001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/kubeconfig: {Name:mk8e7b4e1bb7da665111d2acd83eb48882c66853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:13:35.600672  438001 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.106 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 19:13:35.600768  438001 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 19:13:35.600850  438001 addons.go:69] Setting storage-provisioner=true in profile "no-preload-278232"
	I0819 19:13:35.600879  438001 addons.go:69] Setting metrics-server=true in profile "no-preload-278232"
	I0819 19:13:35.600924  438001 addons.go:234] Setting addon metrics-server=true in "no-preload-278232"
	W0819 19:13:35.600938  438001 addons.go:243] addon metrics-server should already be in state true
	I0819 19:13:35.600884  438001 addons.go:234] Setting addon storage-provisioner=true in "no-preload-278232"
	W0819 19:13:35.600969  438001 addons.go:243] addon storage-provisioner should already be in state true
	I0819 19:13:35.600966  438001 config.go:182] Loaded profile config "no-preload-278232": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:13:35.600976  438001 host.go:66] Checking if "no-preload-278232" exists ...
	I0819 19:13:35.600988  438001 host.go:66] Checking if "no-preload-278232" exists ...
	I0819 19:13:35.601395  438001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:35.601428  438001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:35.601436  438001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:35.601453  438001 addons.go:69] Setting default-storageclass=true in profile "no-preload-278232"
	I0819 19:13:35.601501  438001 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-278232"
	I0819 19:13:35.601463  438001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:35.601898  438001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:35.601948  438001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:35.602507  438001 out.go:177] * Verifying Kubernetes components...
	I0819 19:13:35.604092  438001 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:13:35.617515  438001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34839
	I0819 19:13:35.617538  438001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36157
	I0819 19:13:35.617521  438001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35771
	I0819 19:13:35.618045  438001 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:35.618101  438001 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:35.618163  438001 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:35.618570  438001 main.go:141] libmachine: Using API Version  1
	I0819 19:13:35.618598  438001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:35.618712  438001 main.go:141] libmachine: Using API Version  1
	I0819 19:13:35.618734  438001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:35.618715  438001 main.go:141] libmachine: Using API Version  1
	I0819 19:13:35.618754  438001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:35.618989  438001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:35.619109  438001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:35.619111  438001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:35.619177  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetState
	I0819 19:13:35.619649  438001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:35.619693  438001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:35.619695  438001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:35.619768  438001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:35.641244  438001 addons.go:234] Setting addon default-storageclass=true in "no-preload-278232"
	W0819 19:13:35.641268  438001 addons.go:243] addon default-storageclass should already be in state true
	I0819 19:13:35.641298  438001 host.go:66] Checking if "no-preload-278232" exists ...
	I0819 19:13:35.641558  438001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:35.641610  438001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:35.659392  438001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39373
	I0819 19:13:35.659999  438001 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:35.660432  438001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38477
	I0819 19:13:35.660432  438001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35477
	I0819 19:13:35.660604  438001 main.go:141] libmachine: Using API Version  1
	I0819 19:13:35.660631  438001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:35.661089  438001 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:35.661149  438001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:35.661169  438001 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:35.661641  438001 main.go:141] libmachine: Using API Version  1
	I0819 19:13:35.661661  438001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:35.661757  438001 main.go:141] libmachine: Using API Version  1
	I0819 19:13:35.661772  438001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:35.661792  438001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:35.661826  438001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:35.662039  438001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:35.662142  438001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:35.662222  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetState
	I0819 19:13:35.662375  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetState
	I0819 19:13:35.664221  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:35.664397  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:35.666459  438001 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 19:13:35.666471  438001 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:13:35.667849  438001 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 19:13:35.667864  438001 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 19:13:35.667882  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:35.667944  438001 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:13:35.667959  438001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 19:13:35.667977  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:35.673516  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:35.673544  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:35.673520  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:35.673578  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:35.673593  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:35.673602  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:35.673521  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:35.673615  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:35.673793  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:35.673937  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:35.673986  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:35.674150  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:35.674324  438001 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa Username:docker}
	I0819 19:13:35.674350  438001 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa Username:docker}
	I0819 19:13:35.683691  438001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39783
	I0819 19:13:35.684219  438001 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:35.684806  438001 main.go:141] libmachine: Using API Version  1
	I0819 19:13:35.684831  438001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:35.685251  438001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:35.685515  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetState
	I0819 19:13:35.687268  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:35.687485  438001 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 19:13:35.687503  438001 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 19:13:35.687524  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:35.690504  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:35.691297  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:35.691333  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:35.691356  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:35.691477  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:35.691659  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:35.691814  438001 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa Username:docker}
	I0819 19:13:35.833054  438001 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:13:35.855442  438001 node_ready.go:35] waiting up to 6m0s for node "no-preload-278232" to be "Ready" ...
	I0819 19:13:35.923521  438001 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 19:13:35.923551  438001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 19:13:35.940005  438001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 19:13:35.965657  438001 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 19:13:35.965686  438001 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 19:13:36.002636  438001 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 19:13:36.002665  438001 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 19:13:36.024764  438001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:13:36.058824  438001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 19:13:36.420421  438001 main.go:141] libmachine: Making call to close driver server
	I0819 19:13:36.420452  438001 main.go:141] libmachine: (no-preload-278232) Calling .Close
	I0819 19:13:36.420785  438001 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:13:36.420804  438001 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:13:36.420844  438001 main.go:141] libmachine: (no-preload-278232) DBG | Closing plugin on server side
	I0819 19:13:36.420904  438001 main.go:141] libmachine: Making call to close driver server
	I0819 19:13:36.420918  438001 main.go:141] libmachine: (no-preload-278232) Calling .Close
	I0819 19:13:36.421185  438001 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:13:36.421210  438001 main.go:141] libmachine: (no-preload-278232) DBG | Closing plugin on server side
	I0819 19:13:36.421224  438001 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:13:36.429463  438001 main.go:141] libmachine: Making call to close driver server
	I0819 19:13:36.429481  438001 main.go:141] libmachine: (no-preload-278232) Calling .Close
	I0819 19:13:36.429811  438001 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:13:36.429830  438001 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:13:37.141893  438001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.117083882s)
	I0819 19:13:37.141987  438001 main.go:141] libmachine: Making call to close driver server
	I0819 19:13:37.141999  438001 main.go:141] libmachine: (no-preload-278232) Calling .Close
	I0819 19:13:37.142472  438001 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:13:37.142495  438001 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:13:37.142506  438001 main.go:141] libmachine: Making call to close driver server
	I0819 19:13:37.142515  438001 main.go:141] libmachine: (no-preload-278232) Calling .Close
	I0819 19:13:37.142788  438001 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:13:37.142808  438001 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:13:37.142814  438001 main.go:141] libmachine: (no-preload-278232) DBG | Closing plugin on server side
	I0819 19:13:37.161659  438001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.10278963s)
	I0819 19:13:37.161723  438001 main.go:141] libmachine: Making call to close driver server
	I0819 19:13:37.161739  438001 main.go:141] libmachine: (no-preload-278232) Calling .Close
	I0819 19:13:37.162067  438001 main.go:141] libmachine: (no-preload-278232) DBG | Closing plugin on server side
	I0819 19:13:37.162099  438001 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:13:37.162125  438001 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:13:37.162142  438001 main.go:141] libmachine: Making call to close driver server
	I0819 19:13:37.162154  438001 main.go:141] libmachine: (no-preload-278232) Calling .Close
	I0819 19:13:37.162404  438001 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:13:37.162420  438001 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:13:37.162432  438001 addons.go:475] Verifying addon metrics-server=true in "no-preload-278232"
	I0819 19:13:37.164423  438001 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0819 19:13:33.694203  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:35.694403  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:34.918988  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:36.919564  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:35.722784  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:36.223168  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:36.723041  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:37.222801  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:37.722855  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:38.223296  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:38.722936  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:39.223326  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:39.722883  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:40.223284  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:37.165767  438001 addons.go:510] duration metric: took 1.565026237s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0819 19:13:37.859454  438001 node_ready.go:53] node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:39.859662  438001 node_ready.go:53] node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:38.193207  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:40.694127  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:39.418572  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:41.918302  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:43.918558  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:40.722612  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:41.222700  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:41.723144  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:42.223369  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:42.723209  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:43.222849  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:43.723518  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:44.223585  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:44.722772  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:45.223078  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:41.859965  438001 node_ready.go:53] node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:43.359120  438001 node_ready.go:49] node "no-preload-278232" has status "Ready":"True"
	I0819 19:13:43.359151  438001 node_ready.go:38] duration metric: took 7.503671074s for node "no-preload-278232" to be "Ready" ...
	I0819 19:13:43.359169  438001 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:13:43.365307  438001 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-22lbt" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:43.369626  438001 pod_ready.go:93] pod "coredns-6f6b679f8f-22lbt" in "kube-system" namespace has status "Ready":"True"
	I0819 19:13:43.369646  438001 pod_ready.go:82] duration metric: took 4.316734ms for pod "coredns-6f6b679f8f-22lbt" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:43.369654  438001 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:45.377672  438001 pod_ready.go:103] pod "etcd-no-preload-278232" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:43.193775  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:45.693494  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:45.919705  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:48.418981  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:45.723287  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:46.223666  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:46.722754  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:47.223414  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:47.723567  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:48.222938  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:48.723011  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:49.223076  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:49.723443  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:50.223627  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:47.875409  438001 pod_ready.go:103] pod "etcd-no-preload-278232" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:48.377127  438001 pod_ready.go:93] pod "etcd-no-preload-278232" in "kube-system" namespace has status "Ready":"True"
	I0819 19:13:48.377155  438001 pod_ready.go:82] duration metric: took 5.007493319s for pod "etcd-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.377169  438001 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.381841  438001 pod_ready.go:93] pod "kube-apiserver-no-preload-278232" in "kube-system" namespace has status "Ready":"True"
	I0819 19:13:48.381864  438001 pod_ready.go:82] duration metric: took 4.686309ms for pod "kube-apiserver-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.381877  438001 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.386382  438001 pod_ready.go:93] pod "kube-controller-manager-no-preload-278232" in "kube-system" namespace has status "Ready":"True"
	I0819 19:13:48.386397  438001 pod_ready.go:82] duration metric: took 4.514361ms for pod "kube-controller-manager-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.386405  438001 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rcf49" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.390940  438001 pod_ready.go:93] pod "kube-proxy-rcf49" in "kube-system" namespace has status "Ready":"True"
	I0819 19:13:48.390955  438001 pod_ready.go:82] duration metric: took 4.544499ms for pod "kube-proxy-rcf49" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.390963  438001 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.395159  438001 pod_ready.go:93] pod "kube-scheduler-no-preload-278232" in "kube-system" namespace has status "Ready":"True"
	I0819 19:13:48.395180  438001 pod_ready.go:82] duration metric: took 4.211012ms for pod "kube-scheduler-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.395197  438001 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:50.402109  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:47.693601  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:50.193183  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:50.918811  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:52.919981  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:50.723259  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:51.222697  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:51.723284  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:52.222757  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:52.723414  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:53.223202  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:53.722721  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:54.223578  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:54.723400  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:55.222730  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:52.901901  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:54.903583  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:52.693231  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:54.693934  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:56.695700  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:55.418965  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:57.918885  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:55.723644  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:56.223212  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:56.722729  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:57.223226  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:57.723045  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:58.222901  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:58.722710  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:59.223149  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:59.723186  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:00.222763  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:00.222844  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:00.271266  438716 cri.go:89] found id: ""
	I0819 19:14:00.271296  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.271305  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:00.271312  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:00.271373  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:00.311870  438716 cri.go:89] found id: ""
	I0819 19:14:00.311900  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.311936  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:00.311946  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:00.312011  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:00.350476  438716 cri.go:89] found id: ""
	I0819 19:14:00.350505  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.350514  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:00.350520  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:00.350586  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:00.387404  438716 cri.go:89] found id: ""
	I0819 19:14:00.387438  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.387447  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:00.387457  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:00.387516  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:00.423493  438716 cri.go:89] found id: ""
	I0819 19:14:00.423521  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.423529  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:00.423535  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:00.423596  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:00.458593  438716 cri.go:89] found id: ""
	I0819 19:14:00.458630  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.458642  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:00.458651  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:00.458722  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:00.495645  438716 cri.go:89] found id: ""
	I0819 19:14:00.495695  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.495709  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:00.495717  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:00.495782  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:00.531464  438716 cri.go:89] found id: ""
	I0819 19:14:00.531498  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.531508  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:00.531529  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:00.531543  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:13:57.401329  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:59.402701  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:59.192781  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:01.194411  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:00.419287  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:02.918450  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:00.584029  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:00.584078  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:00.597870  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:00.597908  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:00.746061  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:00.746085  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:00.746098  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:00.818001  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:00.818042  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:03.358509  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:03.371262  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:03.371345  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:03.408201  438716 cri.go:89] found id: ""
	I0819 19:14:03.408231  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.408241  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:03.408248  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:03.408306  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:03.445354  438716 cri.go:89] found id: ""
	I0819 19:14:03.445386  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.445396  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:03.445408  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:03.445470  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:03.481144  438716 cri.go:89] found id: ""
	I0819 19:14:03.481178  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.481188  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:03.481195  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:03.481260  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:03.529069  438716 cri.go:89] found id: ""
	I0819 19:14:03.529109  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.529141  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:03.529148  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:03.529216  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:03.590325  438716 cri.go:89] found id: ""
	I0819 19:14:03.590364  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.590377  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:03.590386  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:03.590456  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:03.634924  438716 cri.go:89] found id: ""
	I0819 19:14:03.634969  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.634981  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:03.634990  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:03.635062  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:03.684133  438716 cri.go:89] found id: ""
	I0819 19:14:03.684164  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.684176  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:03.684184  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:03.684253  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:03.722285  438716 cri.go:89] found id: ""
	I0819 19:14:03.722312  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.722321  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:03.722330  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:03.722372  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:03.735937  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:03.735965  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:03.814906  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:03.814931  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:03.814948  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:03.896323  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:03.896363  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:03.943002  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:03.943037  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:01.901154  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:03.902972  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:05.903388  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:03.694686  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:06.193228  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:04.919332  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:07.419221  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:06.496886  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:06.510719  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:06.510790  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:06.544692  438716 cri.go:89] found id: ""
	I0819 19:14:06.544724  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.544737  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:06.544747  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:06.544818  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:06.578935  438716 cri.go:89] found id: ""
	I0819 19:14:06.578962  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.578971  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:06.578979  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:06.579033  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:06.614488  438716 cri.go:89] found id: ""
	I0819 19:14:06.614516  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.614525  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:06.614532  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:06.614583  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:06.648579  438716 cri.go:89] found id: ""
	I0819 19:14:06.648612  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.648623  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:06.648630  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:06.648685  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:06.685168  438716 cri.go:89] found id: ""
	I0819 19:14:06.685198  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.685208  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:06.685217  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:06.685280  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:06.720391  438716 cri.go:89] found id: ""
	I0819 19:14:06.720424  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.720433  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:06.720440  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:06.720491  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:06.758183  438716 cri.go:89] found id: ""
	I0819 19:14:06.758217  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.758228  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:06.758237  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:06.758307  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:06.800182  438716 cri.go:89] found id: ""
	I0819 19:14:06.800215  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.800224  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:06.800234  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:06.800247  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:06.852735  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:06.852777  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:06.867214  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:06.867249  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:06.938942  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:06.938967  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:06.938980  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:07.023950  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:07.023992  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:09.568889  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:09.588481  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:09.588545  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:09.630790  438716 cri.go:89] found id: ""
	I0819 19:14:09.630825  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.630839  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:09.630848  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:09.630926  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:09.673258  438716 cri.go:89] found id: ""
	I0819 19:14:09.673291  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.673302  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:09.673311  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:09.673374  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:09.709500  438716 cri.go:89] found id: ""
	I0819 19:14:09.709530  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.709541  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:09.709549  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:09.709617  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:09.743110  438716 cri.go:89] found id: ""
	I0819 19:14:09.743139  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.743150  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:09.743164  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:09.743238  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:09.776717  438716 cri.go:89] found id: ""
	I0819 19:14:09.776746  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.776754  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:09.776761  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:09.776820  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:09.811381  438716 cri.go:89] found id: ""
	I0819 19:14:09.811409  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.811417  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:09.811423  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:09.811474  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:09.843699  438716 cri.go:89] found id: ""
	I0819 19:14:09.843730  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.843741  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:09.843750  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:09.843822  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:09.882972  438716 cri.go:89] found id: ""
	I0819 19:14:09.883005  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.883018  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:09.883033  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:09.883050  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:09.973077  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:09.973114  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:10.014505  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:10.014556  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:10.069779  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:10.069819  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:10.084337  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:10.084367  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:10.164870  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:08.402464  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:10.900684  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:08.193980  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:10.194818  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:09.918852  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:12.419687  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:12.665929  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:12.679881  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:12.679960  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:12.718305  438716 cri.go:89] found id: ""
	I0819 19:14:12.718332  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.718341  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:12.718348  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:12.718398  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:12.759084  438716 cri.go:89] found id: ""
	I0819 19:14:12.759112  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.759127  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:12.759135  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:12.759205  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:12.793193  438716 cri.go:89] found id: ""
	I0819 19:14:12.793228  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.793238  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:12.793245  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:12.793299  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:12.828283  438716 cri.go:89] found id: ""
	I0819 19:14:12.828310  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.828322  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:12.828329  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:12.828379  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:12.861971  438716 cri.go:89] found id: ""
	I0819 19:14:12.862004  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.862016  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:12.862025  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:12.862092  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:12.898173  438716 cri.go:89] found id: ""
	I0819 19:14:12.898203  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.898214  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:12.898223  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:12.898287  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:12.940203  438716 cri.go:89] found id: ""
	I0819 19:14:12.940234  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.940246  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:12.940254  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:12.940309  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:12.978092  438716 cri.go:89] found id: ""
	I0819 19:14:12.978123  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.978134  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:12.978147  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:12.978172  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:12.992082  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:12.992117  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:13.073609  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:13.073636  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:13.073649  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:13.153060  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:13.153105  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:13.196535  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:13.196581  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:12.903116  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:15.401183  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:12.693872  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:14.694252  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:17.193116  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:14.919563  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:17.418946  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:15.750298  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:15.763913  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:15.763996  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:15.804515  438716 cri.go:89] found id: ""
	I0819 19:14:15.804542  438716 logs.go:276] 0 containers: []
	W0819 19:14:15.804551  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:15.804558  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:15.804624  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:15.847077  438716 cri.go:89] found id: ""
	I0819 19:14:15.847112  438716 logs.go:276] 0 containers: []
	W0819 19:14:15.847125  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:15.847133  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:15.847200  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:15.882316  438716 cri.go:89] found id: ""
	I0819 19:14:15.882348  438716 logs.go:276] 0 containers: []
	W0819 19:14:15.882358  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:15.882365  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:15.882417  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:15.919084  438716 cri.go:89] found id: ""
	I0819 19:14:15.919114  438716 logs.go:276] 0 containers: []
	W0819 19:14:15.919125  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:15.919132  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:15.919202  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:15.953139  438716 cri.go:89] found id: ""
	I0819 19:14:15.953175  438716 logs.go:276] 0 containers: []
	W0819 19:14:15.953188  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:15.953209  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:15.953276  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:15.993231  438716 cri.go:89] found id: ""
	I0819 19:14:15.993259  438716 logs.go:276] 0 containers: []
	W0819 19:14:15.993268  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:15.993286  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:15.993337  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:16.030382  438716 cri.go:89] found id: ""
	I0819 19:14:16.030412  438716 logs.go:276] 0 containers: []
	W0819 19:14:16.030422  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:16.030428  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:16.030482  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:16.065834  438716 cri.go:89] found id: ""
	I0819 19:14:16.065861  438716 logs.go:276] 0 containers: []
	W0819 19:14:16.065872  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:16.065885  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:16.065901  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:16.117943  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:16.117983  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:16.132010  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:16.132041  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:16.202398  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:16.202416  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:16.202429  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:16.286609  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:16.286653  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:18.830502  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:18.844022  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:18.844107  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:18.880539  438716 cri.go:89] found id: ""
	I0819 19:14:18.880576  438716 logs.go:276] 0 containers: []
	W0819 19:14:18.880588  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:18.880595  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:18.880657  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:18.918426  438716 cri.go:89] found id: ""
	I0819 19:14:18.918454  438716 logs.go:276] 0 containers: []
	W0819 19:14:18.918463  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:18.918470  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:18.918531  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:18.954534  438716 cri.go:89] found id: ""
	I0819 19:14:18.954566  438716 logs.go:276] 0 containers: []
	W0819 19:14:18.954578  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:18.954587  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:18.954651  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:18.993820  438716 cri.go:89] found id: ""
	I0819 19:14:18.993852  438716 logs.go:276] 0 containers: []
	W0819 19:14:18.993864  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:18.993885  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:18.993967  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:19.026947  438716 cri.go:89] found id: ""
	I0819 19:14:19.026982  438716 logs.go:276] 0 containers: []
	W0819 19:14:19.026995  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:19.027005  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:19.027072  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:19.062097  438716 cri.go:89] found id: ""
	I0819 19:14:19.062130  438716 logs.go:276] 0 containers: []
	W0819 19:14:19.062142  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:19.062150  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:19.062207  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:19.099522  438716 cri.go:89] found id: ""
	I0819 19:14:19.099549  438716 logs.go:276] 0 containers: []
	W0819 19:14:19.099559  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:19.099567  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:19.099630  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:19.134766  438716 cri.go:89] found id: ""
	I0819 19:14:19.134803  438716 logs.go:276] 0 containers: []
	W0819 19:14:19.134815  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:19.134850  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:19.134867  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:19.176428  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:19.176458  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:19.231448  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:19.231484  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:19.245631  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:19.245687  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:19.318679  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:19.318703  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:19.318717  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:17.401916  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:19.402628  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:19.195224  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:21.693528  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:19.918727  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:21.918863  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:23.919050  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:21.898430  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:21.913840  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:21.913911  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:21.955682  438716 cri.go:89] found id: ""
	I0819 19:14:21.955720  438716 logs.go:276] 0 containers: []
	W0819 19:14:21.955732  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:21.955743  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:21.955820  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:21.994798  438716 cri.go:89] found id: ""
	I0819 19:14:21.994836  438716 logs.go:276] 0 containers: []
	W0819 19:14:21.994845  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:21.994852  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:21.994904  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:22.029155  438716 cri.go:89] found id: ""
	I0819 19:14:22.029191  438716 logs.go:276] 0 containers: []
	W0819 19:14:22.029202  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:22.029210  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:22.029281  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:22.072489  438716 cri.go:89] found id: ""
	I0819 19:14:22.072534  438716 logs.go:276] 0 containers: []
	W0819 19:14:22.072546  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:22.072559  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:22.072621  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:22.109160  438716 cri.go:89] found id: ""
	I0819 19:14:22.109192  438716 logs.go:276] 0 containers: []
	W0819 19:14:22.109203  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:22.109211  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:22.109281  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:22.146161  438716 cri.go:89] found id: ""
	I0819 19:14:22.146194  438716 logs.go:276] 0 containers: []
	W0819 19:14:22.146206  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:22.146215  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:22.146276  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:22.183005  438716 cri.go:89] found id: ""
	I0819 19:14:22.183033  438716 logs.go:276] 0 containers: []
	W0819 19:14:22.183046  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:22.183054  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:22.183108  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:22.220745  438716 cri.go:89] found id: ""
	I0819 19:14:22.220772  438716 logs.go:276] 0 containers: []
	W0819 19:14:22.220784  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:22.220798  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:22.220817  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:22.297377  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:22.297403  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:22.297416  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:22.373503  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:22.373542  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:22.414922  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:22.414956  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:22.477902  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:22.477944  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:24.993405  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:25.007305  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:25.007379  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:25.041157  438716 cri.go:89] found id: ""
	I0819 19:14:25.041191  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.041203  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:25.041211  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:25.041278  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:25.078572  438716 cri.go:89] found id: ""
	I0819 19:14:25.078605  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.078617  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:25.078625  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:25.078695  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:25.114571  438716 cri.go:89] found id: ""
	I0819 19:14:25.114603  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.114615  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:25.114624  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:25.114690  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:25.154341  438716 cri.go:89] found id: ""
	I0819 19:14:25.154366  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.154375  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:25.154381  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:25.154434  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:25.192592  438716 cri.go:89] found id: ""
	I0819 19:14:25.192620  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.192631  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:25.192640  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:25.192705  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:25.227813  438716 cri.go:89] found id: ""
	I0819 19:14:25.227847  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.227860  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:25.227869  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:25.227933  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:25.264321  438716 cri.go:89] found id: ""
	I0819 19:14:25.264349  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.264357  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:25.264364  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:25.264427  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:25.298562  438716 cri.go:89] found id: ""
	I0819 19:14:25.298596  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.298608  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:25.298621  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:25.298638  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:25.352659  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:25.352695  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:25.366638  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:25.366665  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:25.432964  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:25.432992  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:25.433010  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:25.511487  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:25.511549  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:21.902660  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:24.401454  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:26.402255  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:24.193406  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:26.194758  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:25.919090  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:28.420031  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:28.057003  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:28.070849  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:28.070914  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:28.107817  438716 cri.go:89] found id: ""
	I0819 19:14:28.107852  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.107865  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:28.107875  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:28.107948  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:28.141816  438716 cri.go:89] found id: ""
	I0819 19:14:28.141862  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.141874  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:28.141887  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:28.141958  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:28.179854  438716 cri.go:89] found id: ""
	I0819 19:14:28.179885  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.179893  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:28.179905  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:28.179972  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:28.217335  438716 cri.go:89] found id: ""
	I0819 19:14:28.217364  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.217372  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:28.217380  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:28.217438  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:28.254161  438716 cri.go:89] found id: ""
	I0819 19:14:28.254193  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.254204  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:28.254213  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:28.254276  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:28.288658  438716 cri.go:89] found id: ""
	I0819 19:14:28.288682  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.288691  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:28.288698  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:28.288749  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:28.321957  438716 cri.go:89] found id: ""
	I0819 19:14:28.321987  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.321996  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:28.322004  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:28.322057  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:28.355032  438716 cri.go:89] found id: ""
	I0819 19:14:28.355068  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.355080  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:28.355094  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:28.355111  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:28.406220  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:28.406253  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:28.420877  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:28.420907  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:28.502576  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:28.502598  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:28.502614  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:28.582717  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:28.582769  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:28.904716  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:31.401098  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:28.195001  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:30.693605  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:30.917957  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:32.918239  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:31.121960  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:31.135502  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:31.135568  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:31.170423  438716 cri.go:89] found id: ""
	I0819 19:14:31.170451  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.170461  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:31.170467  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:31.170532  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:31.207328  438716 cri.go:89] found id: ""
	I0819 19:14:31.207356  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.207364  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:31.207370  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:31.207430  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:31.245655  438716 cri.go:89] found id: ""
	I0819 19:14:31.245687  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.245698  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:31.245707  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:31.245773  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:31.282174  438716 cri.go:89] found id: ""
	I0819 19:14:31.282208  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.282221  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:31.282230  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:31.282303  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:31.316779  438716 cri.go:89] found id: ""
	I0819 19:14:31.316810  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.316818  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:31.316826  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:31.316879  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:31.356849  438716 cri.go:89] found id: ""
	I0819 19:14:31.356884  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.356894  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:31.356900  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:31.356963  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:31.395102  438716 cri.go:89] found id: ""
	I0819 19:14:31.395135  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.395143  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:31.395150  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:31.395205  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:31.433018  438716 cri.go:89] found id: ""
	I0819 19:14:31.433045  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.433076  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:31.433091  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:31.433108  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:31.446294  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:31.446319  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:31.518158  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:31.518180  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:31.518196  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:31.600568  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:31.600611  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:31.642356  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:31.642386  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:34.195665  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:34.210300  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:34.210370  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:34.248715  438716 cri.go:89] found id: ""
	I0819 19:14:34.248753  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.248767  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:34.248775  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:34.248849  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:34.285305  438716 cri.go:89] found id: ""
	I0819 19:14:34.285334  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.285347  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:34.285355  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:34.285438  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:34.326114  438716 cri.go:89] found id: ""
	I0819 19:14:34.326148  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.326160  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:34.326168  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:34.326235  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:34.360587  438716 cri.go:89] found id: ""
	I0819 19:14:34.360616  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.360628  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:34.360638  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:34.360715  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:34.397452  438716 cri.go:89] found id: ""
	I0819 19:14:34.397483  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.397491  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:34.397498  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:34.397556  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:34.433651  438716 cri.go:89] found id: ""
	I0819 19:14:34.433683  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.433694  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:34.433702  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:34.433771  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:34.468758  438716 cri.go:89] found id: ""
	I0819 19:14:34.468787  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.468796  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:34.468802  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:34.468856  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:34.505787  438716 cri.go:89] found id: ""
	I0819 19:14:34.505816  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.505828  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:34.505842  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:34.505859  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:34.519430  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:34.519463  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:34.592785  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:34.592810  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:34.592827  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:34.671215  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:34.671254  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:34.711248  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:34.711277  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:33.403429  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:35.901124  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:33.194319  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:35.694280  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:34.918372  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:37.418982  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:37.265131  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:37.279035  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:37.279127  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:37.325556  438716 cri.go:89] found id: ""
	I0819 19:14:37.325589  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.325601  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:37.325610  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:37.325676  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:37.360514  438716 cri.go:89] found id: ""
	I0819 19:14:37.360541  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.360553  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:37.360561  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:37.360616  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:37.394428  438716 cri.go:89] found id: ""
	I0819 19:14:37.394456  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.394465  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:37.394472  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:37.394531  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:37.430221  438716 cri.go:89] found id: ""
	I0819 19:14:37.430249  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.430257  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:37.430264  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:37.430324  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:37.466598  438716 cri.go:89] found id: ""
	I0819 19:14:37.466630  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.466641  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:37.466649  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:37.466719  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:37.510455  438716 cri.go:89] found id: ""
	I0819 19:14:37.510484  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.510492  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:37.510499  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:37.510563  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:37.546122  438716 cri.go:89] found id: ""
	I0819 19:14:37.546157  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.546169  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:37.546178  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:37.546247  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:37.579425  438716 cri.go:89] found id: ""
	I0819 19:14:37.579452  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.579463  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:37.579475  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:37.579491  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:37.592673  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:37.592704  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:37.674026  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:37.674048  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:37.674065  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:37.752206  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:37.752244  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:37.791281  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:37.791321  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:40.345520  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:40.358771  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:40.358835  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:40.394515  438716 cri.go:89] found id: ""
	I0819 19:14:40.394549  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.394565  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:40.394575  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:40.394637  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:40.430971  438716 cri.go:89] found id: ""
	I0819 19:14:40.431007  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.431018  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:40.431027  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:40.431094  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:40.471417  438716 cri.go:89] found id: ""
	I0819 19:14:40.471443  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.471452  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:40.471458  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:40.471511  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:40.508641  438716 cri.go:89] found id: ""
	I0819 19:14:40.508670  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.508678  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:40.508684  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:40.508749  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:37.903083  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:40.402562  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:37.695031  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:40.193724  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:39.921480  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:42.420201  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:40.542418  438716 cri.go:89] found id: ""
	I0819 19:14:40.542456  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.542465  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:40.542472  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:40.542533  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:40.577367  438716 cri.go:89] found id: ""
	I0819 19:14:40.577399  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.577408  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:40.577414  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:40.577476  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:40.611111  438716 cri.go:89] found id: ""
	I0819 19:14:40.611138  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.611147  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:40.611155  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:40.611222  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:40.650769  438716 cri.go:89] found id: ""
	I0819 19:14:40.650797  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.650805  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:40.650814  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:40.650827  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:40.688085  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:40.688111  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:40.740187  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:40.740225  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:40.754774  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:40.754803  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:40.828689  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:40.828712  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:40.828728  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:43.419171  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:43.432127  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:43.432201  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:43.468751  438716 cri.go:89] found id: ""
	I0819 19:14:43.468778  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.468787  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:43.468803  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:43.468870  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:43.503290  438716 cri.go:89] found id: ""
	I0819 19:14:43.503319  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.503328  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:43.503334  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:43.503390  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:43.536382  438716 cri.go:89] found id: ""
	I0819 19:14:43.536416  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.536435  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:43.536443  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:43.536494  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:43.571570  438716 cri.go:89] found id: ""
	I0819 19:14:43.571602  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.571611  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:43.571617  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:43.571682  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:43.610421  438716 cri.go:89] found id: ""
	I0819 19:14:43.610455  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.610465  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:43.610473  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:43.610524  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:43.647173  438716 cri.go:89] found id: ""
	I0819 19:14:43.647200  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.647209  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:43.647215  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:43.647266  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:43.684493  438716 cri.go:89] found id: ""
	I0819 19:14:43.684525  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.684535  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:43.684541  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:43.684609  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:43.718781  438716 cri.go:89] found id: ""
	I0819 19:14:43.718811  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.718822  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:43.718834  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:43.718858  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:43.732546  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:43.732578  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:43.819640  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:43.819665  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:43.819700  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:43.900246  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:43.900286  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:43.941751  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:43.941783  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:42.901387  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:44.901876  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:42.693950  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:45.193132  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:44.918631  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:47.417977  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:46.498232  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:46.511167  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:46.511237  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:46.545493  438716 cri.go:89] found id: ""
	I0819 19:14:46.545528  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.545541  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:46.545549  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:46.545607  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:46.580599  438716 cri.go:89] found id: ""
	I0819 19:14:46.580626  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.580634  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:46.580640  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:46.580760  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:46.614515  438716 cri.go:89] found id: ""
	I0819 19:14:46.614551  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.614561  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:46.614570  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:46.614637  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:46.647767  438716 cri.go:89] found id: ""
	I0819 19:14:46.647803  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.647816  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:46.647825  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:46.647893  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:46.681660  438716 cri.go:89] found id: ""
	I0819 19:14:46.681695  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.681707  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:46.681717  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:46.681788  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:46.718828  438716 cri.go:89] found id: ""
	I0819 19:14:46.718858  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.718868  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:46.718875  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:46.718929  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:46.760524  438716 cri.go:89] found id: ""
	I0819 19:14:46.760553  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.760561  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:46.760569  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:46.760634  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:46.799014  438716 cri.go:89] found id: ""
	I0819 19:14:46.799042  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.799054  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:46.799067  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:46.799135  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:46.850769  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:46.850812  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:46.865647  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:46.865698  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:46.942197  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:46.942228  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:46.942244  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:47.019295  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:47.019337  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:49.562713  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:49.575406  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:49.575484  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:49.610067  438716 cri.go:89] found id: ""
	I0819 19:14:49.610105  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.610115  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:49.610121  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:49.610182  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:49.646164  438716 cri.go:89] found id: ""
	I0819 19:14:49.646205  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.646230  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:49.646238  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:49.646317  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:49.680268  438716 cri.go:89] found id: ""
	I0819 19:14:49.680303  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.680314  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:49.680322  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:49.680387  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:49.714952  438716 cri.go:89] found id: ""
	I0819 19:14:49.714981  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.714992  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:49.715001  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:49.715067  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:49.749483  438716 cri.go:89] found id: ""
	I0819 19:14:49.749516  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.749528  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:49.749537  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:49.749616  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:49.794506  438716 cri.go:89] found id: ""
	I0819 19:14:49.794538  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.794550  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:49.794558  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:49.794628  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:49.847284  438716 cri.go:89] found id: ""
	I0819 19:14:49.847313  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.847324  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:49.847334  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:49.847398  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:49.903800  438716 cri.go:89] found id: ""
	I0819 19:14:49.903829  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.903839  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:49.903850  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:49.903867  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:49.972836  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:49.972866  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:49.972885  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:50.049939  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:50.049976  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:50.086514  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:50.086550  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:50.140681  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:50.140718  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:46.903667  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:49.402220  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:51.402281  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:47.693723  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:49.694755  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:52.193220  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:49.919931  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:52.419880  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:52.656573  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:52.670043  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:52.670124  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:52.704514  438716 cri.go:89] found id: ""
	I0819 19:14:52.704541  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.704551  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:52.704558  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:52.704621  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:52.738329  438716 cri.go:89] found id: ""
	I0819 19:14:52.738357  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.738365  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:52.738371  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:52.738423  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:52.774886  438716 cri.go:89] found id: ""
	I0819 19:14:52.774917  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.774926  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:52.774933  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:52.774986  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:52.810262  438716 cri.go:89] found id: ""
	I0819 19:14:52.810288  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.810296  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:52.810303  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:52.810363  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:52.848429  438716 cri.go:89] found id: ""
	I0819 19:14:52.848455  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.848463  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:52.848474  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:52.848539  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:52.886135  438716 cri.go:89] found id: ""
	I0819 19:14:52.886163  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.886179  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:52.886185  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:52.886241  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:52.923288  438716 cri.go:89] found id: ""
	I0819 19:14:52.923314  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.923325  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:52.923333  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:52.923397  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:52.957273  438716 cri.go:89] found id: ""
	I0819 19:14:52.957303  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.957315  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:52.957328  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:52.957345  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:52.970687  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:52.970714  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:53.045081  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:53.045108  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:53.045125  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:53.122233  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:53.122279  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:53.161525  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:53.161554  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:53.901584  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:55.902739  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:54.194220  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:56.197070  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:54.917358  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:56.918562  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:58.919041  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:55.714177  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:55.733726  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:55.733809  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:55.781435  438716 cri.go:89] found id: ""
	I0819 19:14:55.781472  438716 logs.go:276] 0 containers: []
	W0819 19:14:55.781485  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:55.781493  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:55.781560  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:55.846316  438716 cri.go:89] found id: ""
	I0819 19:14:55.846351  438716 logs.go:276] 0 containers: []
	W0819 19:14:55.846362  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:55.846370  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:55.846439  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:55.881587  438716 cri.go:89] found id: ""
	I0819 19:14:55.881623  438716 logs.go:276] 0 containers: []
	W0819 19:14:55.881635  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:55.881644  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:55.881719  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:55.919332  438716 cri.go:89] found id: ""
	I0819 19:14:55.919374  438716 logs.go:276] 0 containers: []
	W0819 19:14:55.919382  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:55.919389  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:55.919441  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:55.954704  438716 cri.go:89] found id: ""
	I0819 19:14:55.954739  438716 logs.go:276] 0 containers: []
	W0819 19:14:55.954752  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:55.954761  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:55.954836  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:55.989289  438716 cri.go:89] found id: ""
	I0819 19:14:55.989321  438716 logs.go:276] 0 containers: []
	W0819 19:14:55.989332  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:55.989340  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:55.989406  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:56.025771  438716 cri.go:89] found id: ""
	I0819 19:14:56.025800  438716 logs.go:276] 0 containers: []
	W0819 19:14:56.025809  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:56.025816  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:56.025883  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:56.065631  438716 cri.go:89] found id: ""
	I0819 19:14:56.065673  438716 logs.go:276] 0 containers: []
	W0819 19:14:56.065686  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:56.065699  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:56.065722  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:56.119482  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:56.119523  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:56.133885  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:56.133915  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:56.207012  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:56.207033  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:56.207045  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:56.288158  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:56.288195  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:58.829677  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:58.844085  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:58.844158  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:58.880900  438716 cri.go:89] found id: ""
	I0819 19:14:58.880934  438716 logs.go:276] 0 containers: []
	W0819 19:14:58.880945  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:58.880951  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:58.881016  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:58.918833  438716 cri.go:89] found id: ""
	I0819 19:14:58.918862  438716 logs.go:276] 0 containers: []
	W0819 19:14:58.918872  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:58.918881  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:58.918939  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:58.956577  438716 cri.go:89] found id: ""
	I0819 19:14:58.956612  438716 logs.go:276] 0 containers: []
	W0819 19:14:58.956623  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:58.956634  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:58.956705  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:58.993884  438716 cri.go:89] found id: ""
	I0819 19:14:58.993914  438716 logs.go:276] 0 containers: []
	W0819 19:14:58.993923  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:58.993930  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:58.993988  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:59.031366  438716 cri.go:89] found id: ""
	I0819 19:14:59.031389  438716 logs.go:276] 0 containers: []
	W0819 19:14:59.031398  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:59.031405  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:59.031464  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:59.072014  438716 cri.go:89] found id: ""
	I0819 19:14:59.072047  438716 logs.go:276] 0 containers: []
	W0819 19:14:59.072058  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:59.072065  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:59.072129  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:59.108713  438716 cri.go:89] found id: ""
	I0819 19:14:59.108744  438716 logs.go:276] 0 containers: []
	W0819 19:14:59.108756  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:59.108765  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:59.108866  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:59.147599  438716 cri.go:89] found id: ""
	I0819 19:14:59.147634  438716 logs.go:276] 0 containers: []
	W0819 19:14:59.147647  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:59.147659  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:59.147695  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:59.224745  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:59.224781  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:59.264586  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:59.264616  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:59.317065  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:59.317104  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:59.331230  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:59.331264  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:59.398370  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:58.401471  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:00.402623  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:58.694096  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:01.193262  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:01.418063  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:03.418302  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:01.899123  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:01.912743  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:01.912824  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:01.949717  438716 cri.go:89] found id: ""
	I0819 19:15:01.949748  438716 logs.go:276] 0 containers: []
	W0819 19:15:01.949756  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:01.949763  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:01.949819  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:01.992776  438716 cri.go:89] found id: ""
	I0819 19:15:01.992802  438716 logs.go:276] 0 containers: []
	W0819 19:15:01.992812  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:01.992819  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:01.992884  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:02.030551  438716 cri.go:89] found id: ""
	I0819 19:15:02.030579  438716 logs.go:276] 0 containers: []
	W0819 19:15:02.030592  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:02.030600  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:02.030672  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:02.069927  438716 cri.go:89] found id: ""
	I0819 19:15:02.069955  438716 logs.go:276] 0 containers: []
	W0819 19:15:02.069964  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:02.069971  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:02.070031  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:02.106584  438716 cri.go:89] found id: ""
	I0819 19:15:02.106609  438716 logs.go:276] 0 containers: []
	W0819 19:15:02.106619  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:02.106629  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:02.106695  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:02.145007  438716 cri.go:89] found id: ""
	I0819 19:15:02.145035  438716 logs.go:276] 0 containers: []
	W0819 19:15:02.145044  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:02.145051  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:02.145113  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:02.180693  438716 cri.go:89] found id: ""
	I0819 19:15:02.180730  438716 logs.go:276] 0 containers: []
	W0819 19:15:02.180741  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:02.180748  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:02.180800  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:02.215563  438716 cri.go:89] found id: ""
	I0819 19:15:02.215597  438716 logs.go:276] 0 containers: []
	W0819 19:15:02.215609  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:02.215623  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:02.215641  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:02.285658  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:02.285692  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:02.285711  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:02.363620  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:02.363660  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:02.414240  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:02.414274  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:02.467336  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:02.467380  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:04.981935  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:04.995537  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:04.995611  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:05.032700  438716 cri.go:89] found id: ""
	I0819 19:15:05.032735  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.032748  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:05.032756  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:05.032827  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:05.069132  438716 cri.go:89] found id: ""
	I0819 19:15:05.069162  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.069173  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:05.069181  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:05.069247  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:05.105320  438716 cri.go:89] found id: ""
	I0819 19:15:05.105346  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.105355  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:05.105361  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:05.105421  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:05.142311  438716 cri.go:89] found id: ""
	I0819 19:15:05.142343  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.142354  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:05.142362  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:05.142412  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:05.177398  438716 cri.go:89] found id: ""
	I0819 19:15:05.177426  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.177437  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:05.177450  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:05.177506  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:05.212749  438716 cri.go:89] found id: ""
	I0819 19:15:05.212780  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.212789  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:05.212796  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:05.212854  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:05.246325  438716 cri.go:89] found id: ""
	I0819 19:15:05.246356  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.246364  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:05.246371  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:05.246420  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:05.287429  438716 cri.go:89] found id: ""
	I0819 19:15:05.287456  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.287466  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:05.287476  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:05.287489  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:05.338742  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:05.338787  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:05.352948  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:05.352978  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:05.421478  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:05.421502  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:05.421529  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:05.497772  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:05.497809  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:02.902202  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:05.403518  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:03.193491  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:05.194340  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:05.419361  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:07.918522  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:08.040403  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:08.053761  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:08.053827  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:08.087047  438716 cri.go:89] found id: ""
	I0819 19:15:08.087073  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.087082  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:08.087089  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:08.087140  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:08.122012  438716 cri.go:89] found id: ""
	I0819 19:15:08.122048  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.122059  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:08.122068  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:08.122134  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:08.155319  438716 cri.go:89] found id: ""
	I0819 19:15:08.155349  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.155360  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:08.155368  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:08.155447  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:08.196003  438716 cri.go:89] found id: ""
	I0819 19:15:08.196027  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.196035  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:08.196041  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:08.196091  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:08.230798  438716 cri.go:89] found id: ""
	I0819 19:15:08.230826  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.230836  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:08.230845  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:08.230910  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:08.267522  438716 cri.go:89] found id: ""
	I0819 19:15:08.267554  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.267562  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:08.267569  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:08.267621  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:08.304775  438716 cri.go:89] found id: ""
	I0819 19:15:08.304801  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.304809  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:08.304815  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:08.304866  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:08.344694  438716 cri.go:89] found id: ""
	I0819 19:15:08.344720  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.344734  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:08.344744  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:08.344757  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:08.383581  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:08.383619  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:08.433868  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:08.433905  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:08.447627  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:08.447657  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:08.518846  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:08.518869  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:08.518887  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:07.901746  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:09.902647  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:07.693351  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:10.193893  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:12.194400  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:09.919436  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:12.418215  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:11.104449  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:11.118149  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:11.118228  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:11.157917  438716 cri.go:89] found id: ""
	I0819 19:15:11.157951  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.157963  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:11.157971  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:11.158040  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:11.196685  438716 cri.go:89] found id: ""
	I0819 19:15:11.196711  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.196721  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:11.196729  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:11.196788  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:11.231089  438716 cri.go:89] found id: ""
	I0819 19:15:11.231124  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.231135  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:11.231144  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:11.231223  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:11.267001  438716 cri.go:89] found id: ""
	I0819 19:15:11.267032  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.267041  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:11.267048  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:11.267113  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:11.302178  438716 cri.go:89] found id: ""
	I0819 19:15:11.302210  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.302223  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:11.302232  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:11.302292  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:11.336335  438716 cri.go:89] found id: ""
	I0819 19:15:11.336368  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.336442  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:11.336458  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:11.336525  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:11.370891  438716 cri.go:89] found id: ""
	I0819 19:15:11.370926  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.370937  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:11.370945  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:11.371007  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:11.407439  438716 cri.go:89] found id: ""
	I0819 19:15:11.407466  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.407473  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:11.407482  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:11.407497  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:11.458692  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:11.458735  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:11.473104  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:11.473133  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:11.542004  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:11.542031  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:11.542050  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:11.619972  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:11.620014  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:14.159220  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:14.173135  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:14.173204  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:14.210347  438716 cri.go:89] found id: ""
	I0819 19:15:14.210377  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.210389  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:14.210398  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:14.210468  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:14.247143  438716 cri.go:89] found id: ""
	I0819 19:15:14.247169  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.247180  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:14.247187  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:14.247260  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:14.284949  438716 cri.go:89] found id: ""
	I0819 19:15:14.284981  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.284995  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:14.285003  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:14.285071  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:14.326801  438716 cri.go:89] found id: ""
	I0819 19:15:14.326826  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.326834  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:14.326842  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:14.326903  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:14.362730  438716 cri.go:89] found id: ""
	I0819 19:15:14.362764  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.362775  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:14.362783  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:14.362852  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:14.403406  438716 cri.go:89] found id: ""
	I0819 19:15:14.403437  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.403448  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:14.403456  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:14.403514  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:14.440641  438716 cri.go:89] found id: ""
	I0819 19:15:14.440670  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.440678  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:14.440685  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:14.440737  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:14.479477  438716 cri.go:89] found id: ""
	I0819 19:15:14.479511  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.479521  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:14.479530  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:14.479544  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:14.530573  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:14.530620  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:14.545329  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:14.545368  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:14.619632  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:14.619652  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:14.619680  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:14.694923  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:14.694956  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:12.401350  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:14.402845  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:14.693534  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:16.693737  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:14.420872  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:16.918227  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:18.919244  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:17.237830  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:17.250579  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:17.250645  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:17.284706  438716 cri.go:89] found id: ""
	I0819 19:15:17.284738  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.284750  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:17.284759  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:17.284832  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:17.320313  438716 cri.go:89] found id: ""
	I0819 19:15:17.320342  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.320350  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:17.320356  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:17.320419  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:17.355974  438716 cri.go:89] found id: ""
	I0819 19:15:17.356008  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.356018  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:17.356027  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:17.356093  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:17.390759  438716 cri.go:89] found id: ""
	I0819 19:15:17.390786  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.390795  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:17.390803  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:17.390861  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:17.431951  438716 cri.go:89] found id: ""
	I0819 19:15:17.431982  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.431993  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:17.432001  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:17.432068  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:17.467183  438716 cri.go:89] found id: ""
	I0819 19:15:17.467215  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.467227  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:17.467236  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:17.467306  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:17.502678  438716 cri.go:89] found id: ""
	I0819 19:15:17.502709  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.502721  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:17.502730  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:17.502801  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:17.537597  438716 cri.go:89] found id: ""
	I0819 19:15:17.537629  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.537643  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:17.537656  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:17.537672  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:17.620076  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:17.620117  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:17.659979  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:17.660009  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:17.710963  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:17.711006  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:17.725556  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:17.725590  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:17.796176  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:20.297246  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:20.311395  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:20.311476  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:20.352279  438716 cri.go:89] found id: ""
	I0819 19:15:20.352317  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.352328  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:20.352338  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:20.352401  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:20.390335  438716 cri.go:89] found id: ""
	I0819 19:15:20.390368  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.390377  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:20.390384  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:20.390450  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:20.430264  438716 cri.go:89] found id: ""
	I0819 19:15:20.430300  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.430312  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:20.430320  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:20.430386  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:20.469670  438716 cri.go:89] found id: ""
	I0819 19:15:20.469703  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.469715  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:20.469723  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:20.469790  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:20.503233  438716 cri.go:89] found id: ""
	I0819 19:15:20.503263  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.503274  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:20.503283  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:20.503371  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:16.902246  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:19.402407  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:18.693921  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:21.193124  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:21.418463  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:23.418730  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:20.538180  438716 cri.go:89] found id: ""
	I0819 19:15:20.538211  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.538223  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:20.538231  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:20.538302  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:20.573301  438716 cri.go:89] found id: ""
	I0819 19:15:20.573329  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.573337  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:20.573352  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:20.573411  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:20.606962  438716 cri.go:89] found id: ""
	I0819 19:15:20.606995  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.607007  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:20.607019  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:20.607035  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:20.658392  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:20.658428  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:20.672063  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:20.672092  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:20.747987  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:20.748010  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:20.748035  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:20.829367  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:20.829415  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:23.378885  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:23.393711  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:23.393778  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:23.430629  438716 cri.go:89] found id: ""
	I0819 19:15:23.430655  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.430665  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:23.430675  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:23.430727  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:23.467509  438716 cri.go:89] found id: ""
	I0819 19:15:23.467541  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.467552  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:23.467560  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:23.467634  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:23.505313  438716 cri.go:89] found id: ""
	I0819 19:15:23.505351  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.505359  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:23.505366  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:23.505416  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:23.543393  438716 cri.go:89] found id: ""
	I0819 19:15:23.543428  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.543441  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:23.543450  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:23.543514  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:23.578265  438716 cri.go:89] found id: ""
	I0819 19:15:23.578293  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.578301  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:23.578308  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:23.578376  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:23.613951  438716 cri.go:89] found id: ""
	I0819 19:15:23.613981  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.613989  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:23.613996  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:23.614061  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:23.647387  438716 cri.go:89] found id: ""
	I0819 19:15:23.647418  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.647426  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:23.647433  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:23.647501  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:23.682482  438716 cri.go:89] found id: ""
	I0819 19:15:23.682510  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.682519  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:23.682530  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:23.682547  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:23.696601  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:23.696629  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:23.766762  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:23.766788  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:23.766804  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:23.850947  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:23.850988  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:23.891113  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:23.891146  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:21.902926  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:24.401874  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:23.193192  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:25.193347  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:25.919555  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:28.419920  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:26.444086  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:26.457774  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:26.457844  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:26.494525  438716 cri.go:89] found id: ""
	I0819 19:15:26.494552  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.494560  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:26.494567  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:26.494618  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:26.535317  438716 cri.go:89] found id: ""
	I0819 19:15:26.535348  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.535359  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:26.535368  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:26.535437  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:26.570853  438716 cri.go:89] found id: ""
	I0819 19:15:26.570886  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.570896  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:26.570920  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:26.570987  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:26.610739  438716 cri.go:89] found id: ""
	I0819 19:15:26.610773  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.610785  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:26.610794  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:26.610885  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:26.651274  438716 cri.go:89] found id: ""
	I0819 19:15:26.651303  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.651311  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:26.651318  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:26.651367  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:26.689963  438716 cri.go:89] found id: ""
	I0819 19:15:26.689993  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.690005  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:26.690013  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:26.690083  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:26.729433  438716 cri.go:89] found id: ""
	I0819 19:15:26.729465  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.729475  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:26.729483  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:26.729548  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:26.768386  438716 cri.go:89] found id: ""
	I0819 19:15:26.768418  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.768427  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:26.768436  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:26.768449  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:26.821526  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:26.821564  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:26.835714  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:26.835763  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:26.907981  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:26.908007  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:26.908023  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:26.991969  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:26.992008  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:29.529743  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:29.544812  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:29.544883  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:29.581455  438716 cri.go:89] found id: ""
	I0819 19:15:29.581486  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.581496  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:29.581503  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:29.581559  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:29.634542  438716 cri.go:89] found id: ""
	I0819 19:15:29.634576  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.634587  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:29.634596  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:29.634663  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:29.670388  438716 cri.go:89] found id: ""
	I0819 19:15:29.670422  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.670439  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:29.670449  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:29.670511  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:29.712267  438716 cri.go:89] found id: ""
	I0819 19:15:29.712293  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.712304  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:29.712313  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:29.712376  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:29.752392  438716 cri.go:89] found id: ""
	I0819 19:15:29.752423  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.752432  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:29.752438  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:29.752500  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:29.791734  438716 cri.go:89] found id: ""
	I0819 19:15:29.791763  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.791772  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:29.791778  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:29.791830  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:29.832882  438716 cri.go:89] found id: ""
	I0819 19:15:29.832910  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.832921  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:29.832929  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:29.832986  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:29.872035  438716 cri.go:89] found id: ""
	I0819 19:15:29.872068  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.872076  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:29.872086  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:29.872098  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:29.926551  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:29.926588  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:29.940500  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:29.940537  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:30.010327  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:30.010348  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:30.010368  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:30.090864  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:30.090910  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:26.902881  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:29.401449  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:27.692753  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:29.693161  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:32.193256  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:30.421066  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:32.918642  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:32.636291  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:32.649264  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:32.649334  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:32.683746  438716 cri.go:89] found id: ""
	I0819 19:15:32.683774  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.683785  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:32.683794  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:32.683867  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:32.723805  438716 cri.go:89] found id: ""
	I0819 19:15:32.723838  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.723850  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:32.723858  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:32.723917  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:32.758119  438716 cri.go:89] found id: ""
	I0819 19:15:32.758148  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.758157  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:32.758164  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:32.758215  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:32.792726  438716 cri.go:89] found id: ""
	I0819 19:15:32.792754  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.792768  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:32.792775  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:32.792823  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:32.829180  438716 cri.go:89] found id: ""
	I0819 19:15:32.829208  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.829217  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:32.829224  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:32.829274  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:32.869045  438716 cri.go:89] found id: ""
	I0819 19:15:32.869081  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.869093  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:32.869102  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:32.869172  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:32.904780  438716 cri.go:89] found id: ""
	I0819 19:15:32.904803  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.904811  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:32.904818  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:32.904870  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:32.940846  438716 cri.go:89] found id: ""
	I0819 19:15:32.940876  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.940886  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:32.940900  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:32.940924  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:33.008569  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:33.008592  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:33.008606  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:33.092605  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:33.092657  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:33.133016  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:33.133045  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:33.188335  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:33.188376  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:31.901719  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:34.401060  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:36.401983  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:34.193690  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:36.694042  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:34.918948  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:37.418186  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:35.704043  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:35.717647  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:35.717708  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:35.752337  438716 cri.go:89] found id: ""
	I0819 19:15:35.752364  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.752372  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:35.752378  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:35.752431  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:35.787233  438716 cri.go:89] found id: ""
	I0819 19:15:35.787261  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.787269  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:35.787275  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:35.787334  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:35.819641  438716 cri.go:89] found id: ""
	I0819 19:15:35.819667  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.819697  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:35.819705  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:35.819775  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:35.856133  438716 cri.go:89] found id: ""
	I0819 19:15:35.856160  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.856169  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:35.856176  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:35.856240  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:35.889390  438716 cri.go:89] found id: ""
	I0819 19:15:35.889422  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.889432  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:35.889438  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:35.889501  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:35.927477  438716 cri.go:89] found id: ""
	I0819 19:15:35.927519  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.927531  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:35.927539  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:35.927600  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:35.961787  438716 cri.go:89] found id: ""
	I0819 19:15:35.961825  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.961837  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:35.961845  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:35.961912  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:35.998350  438716 cri.go:89] found id: ""
	I0819 19:15:35.998384  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.998396  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:35.998407  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:35.998419  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:36.054352  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:36.054394  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:36.078278  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:36.078311  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:36.166388  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:36.166416  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:36.166433  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:36.247222  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:36.247269  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:38.786510  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:38.800306  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:38.800364  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:38.834555  438716 cri.go:89] found id: ""
	I0819 19:15:38.834583  438716 logs.go:276] 0 containers: []
	W0819 19:15:38.834591  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:38.834598  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:38.834648  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:38.869078  438716 cri.go:89] found id: ""
	I0819 19:15:38.869105  438716 logs.go:276] 0 containers: []
	W0819 19:15:38.869114  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:38.869120  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:38.869174  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:38.903702  438716 cri.go:89] found id: ""
	I0819 19:15:38.903728  438716 logs.go:276] 0 containers: []
	W0819 19:15:38.903736  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:38.903743  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:38.903795  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:38.938326  438716 cri.go:89] found id: ""
	I0819 19:15:38.938352  438716 logs.go:276] 0 containers: []
	W0819 19:15:38.938360  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:38.938367  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:38.938422  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:38.976032  438716 cri.go:89] found id: ""
	I0819 19:15:38.976063  438716 logs.go:276] 0 containers: []
	W0819 19:15:38.976075  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:38.976084  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:38.976149  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:39.009957  438716 cri.go:89] found id: ""
	I0819 19:15:39.009991  438716 logs.go:276] 0 containers: []
	W0819 19:15:39.010002  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:39.010011  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:39.010077  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:39.046381  438716 cri.go:89] found id: ""
	I0819 19:15:39.046408  438716 logs.go:276] 0 containers: []
	W0819 19:15:39.046416  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:39.046422  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:39.046474  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:39.083022  438716 cri.go:89] found id: ""
	I0819 19:15:39.083050  438716 logs.go:276] 0 containers: []
	W0819 19:15:39.083058  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:39.083067  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:39.083079  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:39.160731  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:39.160768  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:39.204846  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:39.204879  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:39.259248  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:39.259287  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:39.273764  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:39.273796  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:39.344477  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:38.402275  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:40.901494  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:39.194367  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:41.692933  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:39.419291  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:41.919708  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:43.919984  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:41.845258  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:41.861691  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:41.861754  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:41.908235  438716 cri.go:89] found id: ""
	I0819 19:15:41.908269  438716 logs.go:276] 0 containers: []
	W0819 19:15:41.908281  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:41.908289  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:41.908357  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:41.965631  438716 cri.go:89] found id: ""
	I0819 19:15:41.965657  438716 logs.go:276] 0 containers: []
	W0819 19:15:41.965667  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:41.965673  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:41.965732  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:42.004540  438716 cri.go:89] found id: ""
	I0819 19:15:42.004569  438716 logs.go:276] 0 containers: []
	W0819 19:15:42.004578  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:42.004585  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:42.004650  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:42.042189  438716 cri.go:89] found id: ""
	I0819 19:15:42.042215  438716 logs.go:276] 0 containers: []
	W0819 19:15:42.042224  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:42.042231  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:42.042299  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:42.079313  438716 cri.go:89] found id: ""
	I0819 19:15:42.079349  438716 logs.go:276] 0 containers: []
	W0819 19:15:42.079361  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:42.079370  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:42.079450  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:42.116130  438716 cri.go:89] found id: ""
	I0819 19:15:42.116164  438716 logs.go:276] 0 containers: []
	W0819 19:15:42.116176  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:42.116184  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:42.116253  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:42.154886  438716 cri.go:89] found id: ""
	I0819 19:15:42.154919  438716 logs.go:276] 0 containers: []
	W0819 19:15:42.154928  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:42.154935  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:42.154987  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:42.191204  438716 cri.go:89] found id: ""
	I0819 19:15:42.191237  438716 logs.go:276] 0 containers: []
	W0819 19:15:42.191248  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:42.191258  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:42.191275  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:42.244395  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:42.244434  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:42.258029  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:42.258066  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:42.323461  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:42.323481  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:42.323498  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:42.401932  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:42.401969  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:44.943615  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:44.958243  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:44.958315  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:44.995181  438716 cri.go:89] found id: ""
	I0819 19:15:44.995217  438716 logs.go:276] 0 containers: []
	W0819 19:15:44.995236  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:44.995244  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:44.995309  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:45.030705  438716 cri.go:89] found id: ""
	I0819 19:15:45.030743  438716 logs.go:276] 0 containers: []
	W0819 19:15:45.030752  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:45.030759  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:45.030814  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:45.068186  438716 cri.go:89] found id: ""
	I0819 19:15:45.068215  438716 logs.go:276] 0 containers: []
	W0819 19:15:45.068224  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:45.068231  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:45.068314  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:45.105415  438716 cri.go:89] found id: ""
	I0819 19:15:45.105443  438716 logs.go:276] 0 containers: []
	W0819 19:15:45.105452  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:45.105458  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:45.105517  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:45.143628  438716 cri.go:89] found id: ""
	I0819 19:15:45.143662  438716 logs.go:276] 0 containers: []
	W0819 19:15:45.143694  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:45.143704  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:45.143771  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:45.184896  438716 cri.go:89] found id: ""
	I0819 19:15:45.184922  438716 logs.go:276] 0 containers: []
	W0819 19:15:45.184930  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:45.184937  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:45.185000  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:45.222599  438716 cri.go:89] found id: ""
	I0819 19:15:45.222631  438716 logs.go:276] 0 containers: []
	W0819 19:15:45.222639  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:45.222645  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:45.222700  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:45.260310  438716 cri.go:89] found id: ""
	I0819 19:15:45.260341  438716 logs.go:276] 0 containers: []
	W0819 19:15:45.260352  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:45.260361  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:45.260379  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:45.273687  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:45.273718  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:45.351367  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:45.351390  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:45.351407  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:45.428751  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:45.428787  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:45.468830  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:45.468869  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:42.902576  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:45.402812  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:43.693205  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:46.192804  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:46.419903  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:48.918620  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:48.023654  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:48.037206  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:48.037294  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:48.071647  438716 cri.go:89] found id: ""
	I0819 19:15:48.071686  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.071695  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:48.071704  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:48.071765  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:48.106542  438716 cri.go:89] found id: ""
	I0819 19:15:48.106575  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.106586  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:48.106596  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:48.106662  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:48.151917  438716 cri.go:89] found id: ""
	I0819 19:15:48.151949  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.151959  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:48.151966  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:48.152022  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:48.190095  438716 cri.go:89] found id: ""
	I0819 19:15:48.190125  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.190137  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:48.190146  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:48.190211  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:48.227193  438716 cri.go:89] found id: ""
	I0819 19:15:48.227228  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.227240  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:48.227248  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:48.227317  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:48.261353  438716 cri.go:89] found id: ""
	I0819 19:15:48.261386  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.261396  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:48.261403  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:48.261455  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:48.295749  438716 cri.go:89] found id: ""
	I0819 19:15:48.295782  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.295794  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:48.295803  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:48.295874  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:48.338350  438716 cri.go:89] found id: ""
	I0819 19:15:48.338383  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.338394  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:48.338404  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:48.338420  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:48.420705  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:48.420749  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:48.464114  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:48.464153  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:48.519461  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:48.519505  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:48.534324  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:48.534357  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:48.603580  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:47.900813  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:49.902363  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:48.194425  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:50.693598  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:51.419909  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:53.918494  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
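	The interleaved pod_ready entries come from three other test profiles (runner PIDs 438001, 438245 and 438295), each polling whether its metrics-server pod has reached the Ready condition. A rough manual equivalent of that poll, using one pod name taken from the log (a sketch, not the test's own code):

	    $ kubectl -n kube-system get pod metrics-server-6867b74b74-kxcwh \
	        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # prints False while the pod is unready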
	I0819 19:15:51.104343  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:51.117552  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:51.117629  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:51.150630  438716 cri.go:89] found id: ""
	I0819 19:15:51.150665  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.150677  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:51.150691  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:51.150765  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:51.184316  438716 cri.go:89] found id: ""
	I0819 19:15:51.184346  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.184356  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:51.184362  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:51.184410  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:51.221252  438716 cri.go:89] found id: ""
	I0819 19:15:51.221277  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.221286  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:51.221292  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:51.221349  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:51.255727  438716 cri.go:89] found id: ""
	I0819 19:15:51.255755  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.255763  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:51.255769  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:51.255823  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:51.290615  438716 cri.go:89] found id: ""
	I0819 19:15:51.290651  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.290660  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:51.290667  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:51.290721  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:51.326895  438716 cri.go:89] found id: ""
	I0819 19:15:51.326922  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.326930  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:51.326937  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:51.326987  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:51.365516  438716 cri.go:89] found id: ""
	I0819 19:15:51.365547  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.365558  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:51.365566  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:51.365632  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:51.399002  438716 cri.go:89] found id: ""
	I0819 19:15:51.399030  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.399038  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:51.399048  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:51.399059  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:51.453481  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:51.453524  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:51.467246  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:51.467277  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:51.548547  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:51.548578  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:51.548595  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:51.635627  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:51.635670  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:54.175003  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:54.190462  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:54.190537  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:54.232140  438716 cri.go:89] found id: ""
	I0819 19:15:54.232168  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.232178  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:54.232186  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:54.232254  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:54.267700  438716 cri.go:89] found id: ""
	I0819 19:15:54.267732  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.267742  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:54.267748  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:54.267807  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:54.306272  438716 cri.go:89] found id: ""
	I0819 19:15:54.306300  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.306308  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:54.306315  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:54.306368  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:54.341503  438716 cri.go:89] found id: ""
	I0819 19:15:54.341536  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.341549  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:54.341556  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:54.341609  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:54.375535  438716 cri.go:89] found id: ""
	I0819 19:15:54.375570  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.375582  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:54.375591  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:54.375661  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:54.409611  438716 cri.go:89] found id: ""
	I0819 19:15:54.409641  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.409653  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:54.409662  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:54.409731  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:54.444318  438716 cri.go:89] found id: ""
	I0819 19:15:54.444346  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.444358  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:54.444366  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:54.444425  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:54.480746  438716 cri.go:89] found id: ""
	I0819 19:15:54.480777  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.480789  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:54.480802  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:54.480817  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:54.534209  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:54.534245  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:54.549557  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:54.549598  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:54.625086  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:54.625111  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:54.625136  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:54.705549  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:54.705589  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:52.401150  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:54.402049  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:56.402545  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:52.693826  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:54.694875  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:57.193741  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:56.418166  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:58.418955  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:57.257440  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:57.276724  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:57.276812  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:57.319032  438716 cri.go:89] found id: ""
	I0819 19:15:57.319062  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.319073  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:57.319081  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:57.319163  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:57.357093  438716 cri.go:89] found id: ""
	I0819 19:15:57.357129  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.357140  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:57.357152  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:57.357222  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:57.393978  438716 cri.go:89] found id: ""
	I0819 19:15:57.394013  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.394025  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:57.394033  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:57.394102  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:57.428731  438716 cri.go:89] found id: ""
	I0819 19:15:57.428760  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.428768  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:57.428775  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:57.428824  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:57.467772  438716 cri.go:89] found id: ""
	I0819 19:15:57.467810  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.467822  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:57.467832  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:57.467904  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:57.502398  438716 cri.go:89] found id: ""
	I0819 19:15:57.502434  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.502444  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:57.502450  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:57.502503  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:57.536729  438716 cri.go:89] found id: ""
	I0819 19:15:57.536760  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.536771  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:57.536779  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:57.536845  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:57.574738  438716 cri.go:89] found id: ""
	I0819 19:15:57.574762  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.574770  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:57.574780  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:57.574793  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:57.630063  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:57.630113  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:57.643083  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:57.643111  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:57.725081  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:57.725104  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:57.725118  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:57.805065  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:57.805105  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:00.344557  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:00.357940  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:00.358005  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:00.399319  438716 cri.go:89] found id: ""
	I0819 19:16:00.399355  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.399368  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:00.399377  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:00.399446  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:00.444223  438716 cri.go:89] found id: ""
	I0819 19:16:00.444254  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.444264  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:00.444271  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:00.444323  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:00.479903  438716 cri.go:89] found id: ""
	I0819 19:16:00.479932  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.479942  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:00.479948  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:00.480003  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:00.515923  438716 cri.go:89] found id: ""
	I0819 19:16:00.515954  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.515966  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:00.515974  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:00.516043  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:58.901349  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:00.902114  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:59.194660  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:01.693174  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:00.419210  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:02.918814  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:00.551319  438716 cri.go:89] found id: ""
	I0819 19:16:00.551348  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.551360  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:00.551370  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:00.551434  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:00.587847  438716 cri.go:89] found id: ""
	I0819 19:16:00.587882  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.587892  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:00.587901  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:00.587976  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:00.624769  438716 cri.go:89] found id: ""
	I0819 19:16:00.624800  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.624812  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:00.624820  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:00.624894  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:00.659300  438716 cri.go:89] found id: ""
	I0819 19:16:00.659330  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.659342  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:00.659355  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:00.659371  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:00.739073  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:00.739113  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:00.779087  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:00.779116  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:00.831864  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:00.831914  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:00.845832  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:00.845863  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:00.920622  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
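	Before each retry the runner also gathers the same four log sources over SSH. Collected manually on the node, the equivalent commands (taken from the entries above, with the crictl fallback simplified) are:

	    $ sudo journalctl -u kubelet -n 400
	    $ sudo journalctl -u crio -n 400
	    $ sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    $ sudo crictl ps -a || sudo docker ps -a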
	I0819 19:16:03.420751  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:03.434599  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:03.434664  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:03.469288  438716 cri.go:89] found id: ""
	I0819 19:16:03.469326  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.469349  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:03.469372  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:03.469445  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:03.507885  438716 cri.go:89] found id: ""
	I0819 19:16:03.507911  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.507927  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:03.507934  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:03.507987  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:03.543805  438716 cri.go:89] found id: ""
	I0819 19:16:03.543837  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.543847  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:03.543854  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:03.543928  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:03.584060  438716 cri.go:89] found id: ""
	I0819 19:16:03.584093  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.584105  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:03.584114  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:03.584202  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:03.619724  438716 cri.go:89] found id: ""
	I0819 19:16:03.619758  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.619769  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:03.619776  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:03.619854  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:03.657180  438716 cri.go:89] found id: ""
	I0819 19:16:03.657213  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.657225  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:03.657234  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:03.657303  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:03.695099  438716 cri.go:89] found id: ""
	I0819 19:16:03.695125  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.695134  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:03.695139  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:03.695193  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:03.730263  438716 cri.go:89] found id: ""
	I0819 19:16:03.730291  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.730302  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:03.730314  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:03.730331  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:03.780776  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:03.780816  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:03.795381  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:03.795419  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:03.869995  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:03.870016  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:03.870029  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:03.949654  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:03.949691  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:03.402500  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:05.902412  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:03.694220  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:06.193280  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:04.919284  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:07.418061  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:06.493589  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:06.506758  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:06.506834  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:06.545325  438716 cri.go:89] found id: ""
	I0819 19:16:06.545357  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.545370  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:06.545378  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:06.545443  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:06.581708  438716 cri.go:89] found id: ""
	I0819 19:16:06.581741  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.581753  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:06.581761  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:06.581828  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:06.626543  438716 cri.go:89] found id: ""
	I0819 19:16:06.626588  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.626600  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:06.626609  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:06.626676  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:06.662466  438716 cri.go:89] found id: ""
	I0819 19:16:06.662499  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.662509  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:06.662518  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:06.662585  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:06.701584  438716 cri.go:89] found id: ""
	I0819 19:16:06.701619  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.701628  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:06.701635  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:06.701688  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:06.736245  438716 cri.go:89] found id: ""
	I0819 19:16:06.736280  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.736292  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:06.736300  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:06.736392  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:06.774411  438716 cri.go:89] found id: ""
	I0819 19:16:06.774439  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.774447  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:06.774454  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:06.774510  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:06.809560  438716 cri.go:89] found id: ""
	I0819 19:16:06.809597  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.809609  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:06.809624  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:06.809648  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:06.884841  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:06.884862  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:06.884878  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:06.971467  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:06.971507  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:07.010737  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:07.010767  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:07.063807  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:07.063846  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:09.578451  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:09.591643  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:09.591737  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:09.625607  438716 cri.go:89] found id: ""
	I0819 19:16:09.625639  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.625650  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:09.625659  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:09.625727  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:09.669145  438716 cri.go:89] found id: ""
	I0819 19:16:09.669177  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.669185  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:09.669191  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:09.669254  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:09.707035  438716 cri.go:89] found id: ""
	I0819 19:16:09.707064  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.707073  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:09.707080  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:09.707142  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:09.742089  438716 cri.go:89] found id: ""
	I0819 19:16:09.742116  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.742125  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:09.742132  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:09.742193  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:09.782736  438716 cri.go:89] found id: ""
	I0819 19:16:09.782774  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.782785  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:09.782794  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:09.782860  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:09.818003  438716 cri.go:89] found id: ""
	I0819 19:16:09.818031  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.818040  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:09.818047  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:09.818110  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:09.852716  438716 cri.go:89] found id: ""
	I0819 19:16:09.852748  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.852757  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:09.852764  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:09.852828  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:09.887176  438716 cri.go:89] found id: ""
	I0819 19:16:09.887206  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.887218  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:09.887230  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:09.887247  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:09.901547  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:09.901573  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:09.969153  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
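	The recurring "connection refused" on localhost:8443 means nothing is listening on the apiserver's secure port on the node. A quick way to confirm that directly (a sketch, not part of the recorded run):

	    $ sudo ss -ltnp | grep 8443 || echo "nothing listening on :8443"
	    $ curl -sk https://localhost:8443/healthz; echo    # refused while the apiserver is down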
	I0819 19:16:09.969190  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:09.969205  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:10.053777  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:10.053820  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:10.100888  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:10.100916  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:08.401650  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:10.402279  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:08.194305  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:10.693097  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:09.418856  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:11.918836  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:12.655112  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:12.667824  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:12.667897  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:12.702337  438716 cri.go:89] found id: ""
	I0819 19:16:12.702364  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.702373  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:12.702379  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:12.702432  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:12.736628  438716 cri.go:89] found id: ""
	I0819 19:16:12.736655  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.736663  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:12.736669  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:12.736720  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:12.773598  438716 cri.go:89] found id: ""
	I0819 19:16:12.773628  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.773636  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:12.773643  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:12.773695  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:12.806584  438716 cri.go:89] found id: ""
	I0819 19:16:12.806620  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.806632  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:12.806640  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:12.806723  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:12.840535  438716 cri.go:89] found id: ""
	I0819 19:16:12.840561  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.840569  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:12.840575  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:12.840639  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:12.877680  438716 cri.go:89] found id: ""
	I0819 19:16:12.877712  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.877721  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:12.877728  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:12.877779  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:12.912226  438716 cri.go:89] found id: ""
	I0819 19:16:12.912253  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.912264  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:12.912272  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:12.912342  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:12.953463  438716 cri.go:89] found id: ""
	I0819 19:16:12.953493  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.953504  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:12.953524  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:12.953542  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:13.007648  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:13.007691  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:13.022452  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:13.022494  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:13.092411  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:13.092439  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:13.092455  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:13.168711  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:13.168750  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:12.903478  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:15.402551  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:12.693162  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:14.698051  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:17.193988  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:14.417821  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:16.418541  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:18.918478  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:15.711501  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:15.724841  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:15.724921  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:15.760120  438716 cri.go:89] found id: ""
	I0819 19:16:15.760149  438716 logs.go:276] 0 containers: []
	W0819 19:16:15.760158  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:15.760166  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:15.760234  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:15.794959  438716 cri.go:89] found id: ""
	I0819 19:16:15.794988  438716 logs.go:276] 0 containers: []
	W0819 19:16:15.794996  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:15.795002  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:15.795054  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:15.842776  438716 cri.go:89] found id: ""
	I0819 19:16:15.842804  438716 logs.go:276] 0 containers: []
	W0819 19:16:15.842814  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:15.842820  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:15.842874  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:15.882134  438716 cri.go:89] found id: ""
	I0819 19:16:15.882167  438716 logs.go:276] 0 containers: []
	W0819 19:16:15.882178  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:15.882187  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:15.882251  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:15.919296  438716 cri.go:89] found id: ""
	I0819 19:16:15.919325  438716 logs.go:276] 0 containers: []
	W0819 19:16:15.919336  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:15.919345  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:15.919409  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:15.956401  438716 cri.go:89] found id: ""
	I0819 19:16:15.956429  438716 logs.go:276] 0 containers: []
	W0819 19:16:15.956437  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:15.956444  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:15.956507  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:15.994271  438716 cri.go:89] found id: ""
	I0819 19:16:15.994304  438716 logs.go:276] 0 containers: []
	W0819 19:16:15.994314  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:15.994320  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:15.994378  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:16.033685  438716 cri.go:89] found id: ""
	I0819 19:16:16.033714  438716 logs.go:276] 0 containers: []
	W0819 19:16:16.033724  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:16.033736  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:16.033754  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:16.083929  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:16.083964  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:16.107309  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:16.107342  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:16.193657  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:16.193681  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:16.193697  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:16.276974  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:16.277016  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:18.818532  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:18.831586  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:18.831655  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:18.866663  438716 cri.go:89] found id: ""
	I0819 19:16:18.866689  438716 logs.go:276] 0 containers: []
	W0819 19:16:18.866700  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:18.866709  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:18.866769  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:18.900711  438716 cri.go:89] found id: ""
	I0819 19:16:18.900746  438716 logs.go:276] 0 containers: []
	W0819 19:16:18.900757  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:18.900765  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:18.900849  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:18.935156  438716 cri.go:89] found id: ""
	I0819 19:16:18.935179  438716 logs.go:276] 0 containers: []
	W0819 19:16:18.935186  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:18.935193  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:18.935246  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:18.973853  438716 cri.go:89] found id: ""
	I0819 19:16:18.973889  438716 logs.go:276] 0 containers: []
	W0819 19:16:18.973902  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:18.973911  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:18.973978  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:19.014212  438716 cri.go:89] found id: ""
	I0819 19:16:19.014241  438716 logs.go:276] 0 containers: []
	W0819 19:16:19.014250  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:19.014255  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:19.014317  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:19.056089  438716 cri.go:89] found id: ""
	I0819 19:16:19.056125  438716 logs.go:276] 0 containers: []
	W0819 19:16:19.056137  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:19.056146  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:19.056211  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:19.091372  438716 cri.go:89] found id: ""
	I0819 19:16:19.091399  438716 logs.go:276] 0 containers: []
	W0819 19:16:19.091411  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:19.091420  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:19.091478  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:19.129737  438716 cri.go:89] found id: ""
	I0819 19:16:19.129767  438716 logs.go:276] 0 containers: []
	W0819 19:16:19.129777  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:19.129787  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:19.129800  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:19.207325  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:19.207360  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:19.247780  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:19.247816  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:19.302496  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:19.302543  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:19.317706  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:19.317739  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:19.395029  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
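	The repeated "failed describe nodes" entries above all reduce to the same symptom: no kube-apiserver container exists on the node, so every request to localhost:8443 is refused. A minimal sketch of reproducing that check by hand, assuming SSH access to the affected minikube node and the binary/kubeconfig paths shown in the log above:
	
	# same container check the log runs; an empty result means no apiserver container is present
	sudo crictl ps -a --quiet --name=kube-apiserver
	
	# with no apiserver running, this is expected to fail with
	# "The connection to the server localhost:8443 was refused"
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig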
	I0819 19:16:17.901762  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:19.901818  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:19.195079  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:21.693863  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:21.418534  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:23.420217  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:21.895538  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:21.910595  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:21.910658  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:21.948363  438716 cri.go:89] found id: ""
	I0819 19:16:21.948398  438716 logs.go:276] 0 containers: []
	W0819 19:16:21.948410  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:21.948419  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:21.948492  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:21.983391  438716 cri.go:89] found id: ""
	I0819 19:16:21.983428  438716 logs.go:276] 0 containers: []
	W0819 19:16:21.983440  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:21.983449  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:21.983520  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:22.022383  438716 cri.go:89] found id: ""
	I0819 19:16:22.022415  438716 logs.go:276] 0 containers: []
	W0819 19:16:22.022427  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:22.022436  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:22.022493  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:22.060676  438716 cri.go:89] found id: ""
	I0819 19:16:22.060707  438716 logs.go:276] 0 containers: []
	W0819 19:16:22.060716  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:22.060725  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:22.060778  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:22.095188  438716 cri.go:89] found id: ""
	I0819 19:16:22.095218  438716 logs.go:276] 0 containers: []
	W0819 19:16:22.095227  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:22.095234  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:22.095300  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:22.131164  438716 cri.go:89] found id: ""
	I0819 19:16:22.131192  438716 logs.go:276] 0 containers: []
	W0819 19:16:22.131200  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:22.131209  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:22.131275  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:22.166539  438716 cri.go:89] found id: ""
	I0819 19:16:22.166566  438716 logs.go:276] 0 containers: []
	W0819 19:16:22.166573  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:22.166580  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:22.166643  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:22.205604  438716 cri.go:89] found id: ""
	I0819 19:16:22.205631  438716 logs.go:276] 0 containers: []
	W0819 19:16:22.205640  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:22.205649  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:22.205662  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:22.265650  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:22.265689  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:22.280401  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:22.280443  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:22.356818  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:22.356851  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:22.356872  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:22.437678  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:22.437719  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:24.979655  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:24.993462  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:24.993526  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:25.029955  438716 cri.go:89] found id: ""
	I0819 19:16:25.029983  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.029992  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:25.029999  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:25.030049  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:25.068478  438716 cri.go:89] found id: ""
	I0819 19:16:25.068507  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.068518  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:25.068527  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:25.068594  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:25.105209  438716 cri.go:89] found id: ""
	I0819 19:16:25.105238  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.105247  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:25.105256  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:25.105327  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:25.143166  438716 cri.go:89] found id: ""
	I0819 19:16:25.143203  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.143218  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:25.143225  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:25.143279  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:25.177993  438716 cri.go:89] found id: ""
	I0819 19:16:25.178023  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.178035  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:25.178044  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:25.178129  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:25.216473  438716 cri.go:89] found id: ""
	I0819 19:16:25.216501  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.216523  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:25.216540  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:25.216603  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:25.251454  438716 cri.go:89] found id: ""
	I0819 19:16:25.251486  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.251495  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:25.251501  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:25.251555  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:25.287145  438716 cri.go:89] found id: ""
	I0819 19:16:25.287179  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.287188  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:25.287198  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:25.287210  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:25.371571  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:25.371619  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:25.418247  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:25.418277  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:25.472209  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:25.472248  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:25.486286  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:25.486315  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 19:16:21.902887  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:23.904358  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:26.403026  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:24.193797  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:26.194535  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:25.919371  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:28.418267  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	W0819 19:16:25.554470  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:28.055382  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:28.068750  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:28.068827  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:28.101856  438716 cri.go:89] found id: ""
	I0819 19:16:28.101891  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.101903  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:28.101912  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:28.101977  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:28.136402  438716 cri.go:89] found id: ""
	I0819 19:16:28.136437  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.136449  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:28.136460  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:28.136528  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:28.171766  438716 cri.go:89] found id: ""
	I0819 19:16:28.171795  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.171803  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:28.171809  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:28.171864  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:28.206228  438716 cri.go:89] found id: ""
	I0819 19:16:28.206256  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.206264  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:28.206272  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:28.206337  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:28.248877  438716 cri.go:89] found id: ""
	I0819 19:16:28.248912  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.248923  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:28.248931  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:28.249002  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:28.290160  438716 cri.go:89] found id: ""
	I0819 19:16:28.290201  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.290212  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:28.290221  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:28.290287  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:28.340413  438716 cri.go:89] found id: ""
	I0819 19:16:28.340445  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.340454  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:28.340461  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:28.340513  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:28.385486  438716 cri.go:89] found id: ""
	I0819 19:16:28.385513  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.385521  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:28.385532  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:28.385544  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:28.441987  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:28.442029  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:28.456509  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:28.456538  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:28.527941  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:28.527976  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:28.527993  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:28.612696  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:28.612738  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:28.901312  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:30.901640  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:28.693578  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:30.693686  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:30.418811  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:32.919696  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:31.154773  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:31.168718  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:31.168789  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:31.205365  438716 cri.go:89] found id: ""
	I0819 19:16:31.205399  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.205411  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:31.205419  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:31.205496  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:31.238829  438716 cri.go:89] found id: ""
	I0819 19:16:31.238871  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.238879  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:31.238886  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:31.238936  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:31.273229  438716 cri.go:89] found id: ""
	I0819 19:16:31.273259  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.273304  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:31.273313  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:31.273377  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:31.309559  438716 cri.go:89] found id: ""
	I0819 19:16:31.309601  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.309613  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:31.309622  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:31.309689  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:31.344939  438716 cri.go:89] found id: ""
	I0819 19:16:31.344971  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.344981  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:31.344987  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:31.345043  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:31.382423  438716 cri.go:89] found id: ""
	I0819 19:16:31.382455  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.382468  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:31.382474  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:31.382525  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:31.420148  438716 cri.go:89] found id: ""
	I0819 19:16:31.420174  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.420184  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:31.420192  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:31.420262  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:31.455691  438716 cri.go:89] found id: ""
	I0819 19:16:31.455720  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.455730  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:31.455740  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:31.455753  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:31.509501  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:31.509549  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:31.523650  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:31.523693  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:31.591535  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:31.591557  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:31.591574  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:31.674038  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:31.674077  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:34.216506  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:34.232782  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:34.232875  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:34.286103  438716 cri.go:89] found id: ""
	I0819 19:16:34.286136  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.286147  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:34.286156  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:34.286221  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:34.324193  438716 cri.go:89] found id: ""
	I0819 19:16:34.324220  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.324229  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:34.324235  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:34.324292  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:34.382777  438716 cri.go:89] found id: ""
	I0819 19:16:34.382804  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.382814  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:34.382822  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:34.382887  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:34.420714  438716 cri.go:89] found id: ""
	I0819 19:16:34.420743  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.420753  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:34.420771  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:34.420840  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:34.455338  438716 cri.go:89] found id: ""
	I0819 19:16:34.455369  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.455381  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:34.455391  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:34.455467  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:34.489528  438716 cri.go:89] found id: ""
	I0819 19:16:34.489566  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.489575  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:34.489581  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:34.489634  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:34.523830  438716 cri.go:89] found id: ""
	I0819 19:16:34.523857  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.523866  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:34.523873  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:34.523940  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:34.559023  438716 cri.go:89] found id: ""
	I0819 19:16:34.559052  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.559063  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:34.559077  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:34.559092  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:34.639116  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:34.639159  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:34.675990  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:34.676017  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:34.730900  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:34.730935  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:34.744938  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:34.744964  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:34.816267  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:32.902138  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:35.401865  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:32.696537  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:35.192648  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:35.687633  438245 pod_ready.go:82] duration metric: took 4m0.000667446s for pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace to be "Ready" ...
	E0819 19:16:35.687688  438245 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0819 19:16:35.687715  438245 pod_ready.go:39] duration metric: took 4m13.552784118s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:16:35.687770  438245 kubeadm.go:597] duration metric: took 4m20.936149722s to restartPrimaryControlPlane
	W0819 19:16:35.687875  438245 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 19:16:35.687929  438245 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 19:16:35.419327  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:37.420007  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:37.317314  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:37.331915  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:37.331982  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:37.370233  438716 cri.go:89] found id: ""
	I0819 19:16:37.370261  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.370269  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:37.370276  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:37.370343  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:37.409042  438716 cri.go:89] found id: ""
	I0819 19:16:37.409071  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.409082  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:37.409090  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:37.409161  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:37.445903  438716 cri.go:89] found id: ""
	I0819 19:16:37.445932  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.445941  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:37.445948  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:37.445999  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:37.484275  438716 cri.go:89] found id: ""
	I0819 19:16:37.484318  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.484328  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:37.484334  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:37.484393  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:37.528131  438716 cri.go:89] found id: ""
	I0819 19:16:37.528161  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.528174  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:37.528180  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:37.528243  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:37.563374  438716 cri.go:89] found id: ""
	I0819 19:16:37.563406  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.563414  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:37.563421  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:37.563473  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:37.597234  438716 cri.go:89] found id: ""
	I0819 19:16:37.597260  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.597267  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:37.597274  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:37.597329  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:37.634809  438716 cri.go:89] found id: ""
	I0819 19:16:37.634845  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.634854  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:37.634864  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:37.634879  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:37.704354  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:37.704380  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:37.704396  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:37.788606  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:37.788646  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:37.830486  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:37.830513  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:37.890642  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:37.890681  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:40.405473  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:40.420019  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:40.420094  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:40.458558  438716 cri.go:89] found id: ""
	I0819 19:16:40.458586  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.458598  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:40.458606  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:40.458671  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:40.500353  438716 cri.go:89] found id: ""
	I0819 19:16:40.500379  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.500388  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:40.500394  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:40.500445  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:37.901881  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:39.902097  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:39.918877  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:41.919112  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:43.920092  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:40.534281  438716 cri.go:89] found id: ""
	I0819 19:16:40.534307  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.534316  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:40.534322  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:40.534379  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:40.569537  438716 cri.go:89] found id: ""
	I0819 19:16:40.569568  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.569578  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:40.569587  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:40.569654  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:40.603066  438716 cri.go:89] found id: ""
	I0819 19:16:40.603097  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.603110  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:40.603118  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:40.603171  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:40.637598  438716 cri.go:89] found id: ""
	I0819 19:16:40.637628  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.637637  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:40.637643  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:40.637704  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:40.673583  438716 cri.go:89] found id: ""
	I0819 19:16:40.673616  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.673629  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:40.673637  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:40.673692  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:40.708324  438716 cri.go:89] found id: ""
	I0819 19:16:40.708354  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.708363  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:40.708373  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:40.708387  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:40.789743  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:40.789782  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:40.830849  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:40.830884  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:40.882662  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:40.882700  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:40.896843  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:40.896869  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:40.969491  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
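	The repeated "connection refused" on localhost:8443 above means no apiserver container is running on this node yet, so every "describe nodes" attempt fails the same way until the control plane is rebuilt. A minimal sketch of reproducing that probe by hand from the host (the profile name is a placeholder, not taken from this log):
	
	    minikube -p <profile> ssh -- "curl -sk https://localhost:8443/healthz || echo 'apiserver not reachable'"
	
	While the apiserver is up, /healthz typically returns "ok"; while it is down, the curl fails exactly as the kubectl calls in the log do.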
	I0819 19:16:43.470579  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:43.483791  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:43.483876  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:43.523764  438716 cri.go:89] found id: ""
	I0819 19:16:43.523797  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.523809  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:43.523817  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:43.523882  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:43.557925  438716 cri.go:89] found id: ""
	I0819 19:16:43.557953  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.557960  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:43.557966  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:43.558017  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:43.591324  438716 cri.go:89] found id: ""
	I0819 19:16:43.591355  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.591364  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:43.591370  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:43.591421  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:43.625798  438716 cri.go:89] found id: ""
	I0819 19:16:43.625826  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.625834  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:43.625840  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:43.625898  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:43.659787  438716 cri.go:89] found id: ""
	I0819 19:16:43.659815  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.659823  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:43.659830  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:43.659882  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:43.692982  438716 cri.go:89] found id: ""
	I0819 19:16:43.693008  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.693017  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:43.693024  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:43.693075  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:43.726059  438716 cri.go:89] found id: ""
	I0819 19:16:43.726092  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.726104  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:43.726113  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:43.726187  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:43.760906  438716 cri.go:89] found id: ""
	I0819 19:16:43.760947  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.760958  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:43.760971  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:43.760994  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:43.812249  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:43.812285  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:43.826538  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:43.826566  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:43.894904  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:43.894926  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:43.894941  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:43.975746  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:43.975796  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:41.902398  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:43.902728  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:46.401834  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:46.419345  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:48.918688  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:46.515329  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:46.529088  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:46.529170  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:46.564525  438716 cri.go:89] found id: ""
	I0819 19:16:46.564557  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.564570  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:46.564578  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:46.564647  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:46.598457  438716 cri.go:89] found id: ""
	I0819 19:16:46.598485  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.598494  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:46.598499  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:46.598549  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:46.631767  438716 cri.go:89] found id: ""
	I0819 19:16:46.631798  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.631807  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:46.631814  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:46.631867  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:46.664978  438716 cri.go:89] found id: ""
	I0819 19:16:46.665013  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.665026  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:46.665034  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:46.665094  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:46.701024  438716 cri.go:89] found id: ""
	I0819 19:16:46.701052  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.701061  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:46.701067  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:46.701132  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:46.735834  438716 cri.go:89] found id: ""
	I0819 19:16:46.735874  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.735886  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:46.735894  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:46.735978  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:46.773392  438716 cri.go:89] found id: ""
	I0819 19:16:46.773426  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.773437  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:46.773445  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:46.773498  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:46.819800  438716 cri.go:89] found id: ""
	I0819 19:16:46.819829  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.819841  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:46.819869  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:46.819889  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:46.860633  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:46.860669  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:46.911895  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:46.911936  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:46.927388  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:46.927422  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:46.998601  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:46.998628  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:46.998645  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:49.585303  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:49.598962  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:49.599032  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:49.631891  438716 cri.go:89] found id: ""
	I0819 19:16:49.631920  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.631931  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:49.631940  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:49.631998  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:49.671731  438716 cri.go:89] found id: ""
	I0819 19:16:49.671761  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.671777  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:49.671786  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:49.671846  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:49.707517  438716 cri.go:89] found id: ""
	I0819 19:16:49.707556  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.707568  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:49.707578  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:49.707651  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:49.744255  438716 cri.go:89] found id: ""
	I0819 19:16:49.744289  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.744299  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:49.744305  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:49.744357  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:49.779224  438716 cri.go:89] found id: ""
	I0819 19:16:49.779252  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.779259  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:49.779266  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:49.779322  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:49.815641  438716 cri.go:89] found id: ""
	I0819 19:16:49.815689  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.815701  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:49.815711  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:49.815769  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:49.851861  438716 cri.go:89] found id: ""
	I0819 19:16:49.851894  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.851906  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:49.851915  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:49.851984  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:49.888140  438716 cri.go:89] found id: ""
	I0819 19:16:49.888173  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.888186  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:49.888199  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:49.888215  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:49.940389  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:49.940430  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:49.954519  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:49.954553  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:50.028462  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:50.028486  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:50.028502  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:50.108319  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:50.108362  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:48.901902  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:50.902702  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:50.919079  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:52.919271  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:52.647146  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:52.660468  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:52.660558  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:52.697665  438716 cri.go:89] found id: ""
	I0819 19:16:52.697703  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.697719  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:52.697727  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:52.697786  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:52.739169  438716 cri.go:89] found id: ""
	I0819 19:16:52.739203  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.739214  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:52.739222  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:52.739289  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:52.776580  438716 cri.go:89] found id: ""
	I0819 19:16:52.776610  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.776619  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:52.776630  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:52.776683  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:52.813443  438716 cri.go:89] found id: ""
	I0819 19:16:52.813475  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.813488  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:52.813497  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:52.813557  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:52.848035  438716 cri.go:89] found id: ""
	I0819 19:16:52.848064  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.848075  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:52.848082  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:52.848150  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:52.881814  438716 cri.go:89] found id: ""
	I0819 19:16:52.881841  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.881858  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:52.881867  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:52.881930  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:52.922179  438716 cri.go:89] found id: ""
	I0819 19:16:52.922202  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.922210  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:52.922216  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:52.922277  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:52.958110  438716 cri.go:89] found id: ""
	I0819 19:16:52.958136  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.958144  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:52.958153  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:52.958167  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:53.008553  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:53.008592  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:53.022826  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:53.022860  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:53.094940  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:53.094967  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:53.094982  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:53.173877  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:53.173920  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:53.403382  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:55.905504  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:55.419297  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:55.419331  438295 pod_ready.go:82] duration metric: took 4m0.007107243s for pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace to be "Ready" ...
	E0819 19:16:55.419345  438295 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0819 19:16:55.419355  438295 pod_ready.go:39] duration metric: took 4m4.316528467s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:16:55.419408  438295 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:16:55.419449  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:55.419499  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:55.466648  438295 cri.go:89] found id: "d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a"
	I0819 19:16:55.466679  438295 cri.go:89] found id: ""
	I0819 19:16:55.466690  438295 logs.go:276] 1 containers: [d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a]
	I0819 19:16:55.466758  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:16:55.471085  438295 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:55.471164  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:55.509883  438295 cri.go:89] found id: "a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672"
	I0819 19:16:55.509910  438295 cri.go:89] found id: ""
	I0819 19:16:55.509921  438295 logs.go:276] 1 containers: [a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672]
	I0819 19:16:55.509984  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:16:55.516866  438295 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:55.516954  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:55.560957  438295 cri.go:89] found id: "a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f"
	I0819 19:16:55.560988  438295 cri.go:89] found id: ""
	I0819 19:16:55.560999  438295 logs.go:276] 1 containers: [a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f]
	I0819 19:16:55.561065  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:16:55.565592  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:55.565662  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:55.610872  438295 cri.go:89] found id: "c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22"
	I0819 19:16:55.610905  438295 cri.go:89] found id: ""
	I0819 19:16:55.610914  438295 logs.go:276] 1 containers: [c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22]
	I0819 19:16:55.610976  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:16:55.615411  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:55.615486  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:55.652759  438295 cri.go:89] found id: "3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338"
	I0819 19:16:55.652792  438295 cri.go:89] found id: ""
	I0819 19:16:55.652807  438295 logs.go:276] 1 containers: [3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338]
	I0819 19:16:55.652873  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:16:55.657124  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:55.657190  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:55.699063  438295 cri.go:89] found id: "6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071"
	I0819 19:16:55.699085  438295 cri.go:89] found id: ""
	I0819 19:16:55.699093  438295 logs.go:276] 1 containers: [6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071]
	I0819 19:16:55.699145  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:16:55.703224  438295 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:55.703292  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:55.753166  438295 cri.go:89] found id: ""
	I0819 19:16:55.753198  438295 logs.go:276] 0 containers: []
	W0819 19:16:55.753210  438295 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:55.753218  438295 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 19:16:55.753286  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 19:16:55.803518  438295 cri.go:89] found id: "902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8"
	I0819 19:16:55.803551  438295 cri.go:89] found id: "44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6"
	I0819 19:16:55.803558  438295 cri.go:89] found id: ""
	I0819 19:16:55.803568  438295 logs.go:276] 2 containers: [902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8 44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6]
	I0819 19:16:55.803637  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:16:55.808063  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:16:55.812708  438295 logs.go:123] Gathering logs for container status ...
	I0819 19:16:55.812737  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:55.861697  438295 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:55.861736  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 19:16:55.911203  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.671901     936 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:16:55.911420  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672098     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	W0819 19:16:55.911603  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.672624     936 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:16:55.911834  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672667     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	I0819 19:16:55.949585  438295 logs.go:123] Gathering logs for kube-scheduler [c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22] ...
	I0819 19:16:55.949663  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22"
	I0819 19:16:55.995063  438295 logs.go:123] Gathering logs for kube-controller-manager [6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071] ...
	I0819 19:16:55.995100  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071"
	I0819 19:16:56.062320  438295 logs.go:123] Gathering logs for storage-provisioner [902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8] ...
	I0819 19:16:56.062376  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8"
	I0819 19:16:56.100112  438295 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:56.100152  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:56.589439  438295 logs.go:123] Gathering logs for kube-proxy [3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338] ...
	I0819 19:16:56.589486  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338"
	I0819 19:16:56.632096  438295 logs.go:123] Gathering logs for storage-provisioner [44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6] ...
	I0819 19:16:56.632132  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6"
	I0819 19:16:56.670952  438295 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:56.670984  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:56.685246  438295 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:56.685279  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 19:16:56.826418  438295 logs.go:123] Gathering logs for kube-apiserver [d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a] ...
	I0819 19:16:56.826456  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a"
	I0819 19:16:56.876901  438295 logs.go:123] Gathering logs for etcd [a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672] ...
	I0819 19:16:56.876944  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672"
	I0819 19:16:56.920390  438295 logs.go:123] Gathering logs for coredns [a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f] ...
	I0819 19:16:56.920423  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f"
	I0819 19:16:56.961691  438295 out.go:358] Setting ErrFile to fd 2...
	I0819 19:16:56.961718  438295 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 19:16:56.961793  438295 out.go:270] X Problems detected in kubelet:
	W0819 19:16:56.961805  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.671901     936 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:16:56.961824  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672098     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	W0819 19:16:56.961839  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.672624     936 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:16:56.961853  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672667     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	I0819 19:16:56.961884  438295 out.go:358] Setting ErrFile to fd 2...
	I0819 19:16:56.961893  438295 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:16:55.716096  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:55.734732  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:55.734817  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:55.780484  438716 cri.go:89] found id: ""
	I0819 19:16:55.780514  438716 logs.go:276] 0 containers: []
	W0819 19:16:55.780525  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:55.780534  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:55.780607  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:55.821755  438716 cri.go:89] found id: ""
	I0819 19:16:55.821778  438716 logs.go:276] 0 containers: []
	W0819 19:16:55.821786  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:55.821792  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:55.821855  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:55.861032  438716 cri.go:89] found id: ""
	I0819 19:16:55.861066  438716 logs.go:276] 0 containers: []
	W0819 19:16:55.861077  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:55.861086  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:55.861159  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:55.909978  438716 cri.go:89] found id: ""
	I0819 19:16:55.910004  438716 logs.go:276] 0 containers: []
	W0819 19:16:55.910015  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:55.910024  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:55.910087  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:55.956603  438716 cri.go:89] found id: ""
	I0819 19:16:55.956634  438716 logs.go:276] 0 containers: []
	W0819 19:16:55.956645  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:55.956653  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:55.956722  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:55.999176  438716 cri.go:89] found id: ""
	I0819 19:16:55.999203  438716 logs.go:276] 0 containers: []
	W0819 19:16:55.999216  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:55.999225  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:55.999286  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:56.035141  438716 cri.go:89] found id: ""
	I0819 19:16:56.035172  438716 logs.go:276] 0 containers: []
	W0819 19:16:56.035183  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:56.035192  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:56.035255  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:56.076152  438716 cri.go:89] found id: ""
	I0819 19:16:56.076185  438716 logs.go:276] 0 containers: []
	W0819 19:16:56.076197  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:56.076209  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:56.076226  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:56.136624  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:56.136671  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:56.151867  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:56.151902  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:56.231650  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:56.231696  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:56.231713  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:56.307203  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:56.307247  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:58.848295  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:58.861984  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:58.862172  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:58.900089  438716 cri.go:89] found id: ""
	I0819 19:16:58.900114  438716 logs.go:276] 0 containers: []
	W0819 19:16:58.900124  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:58.900132  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:58.900203  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:58.932528  438716 cri.go:89] found id: ""
	I0819 19:16:58.932551  438716 logs.go:276] 0 containers: []
	W0819 19:16:58.932559  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:58.932565  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:58.932618  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:58.967255  438716 cri.go:89] found id: ""
	I0819 19:16:58.967283  438716 logs.go:276] 0 containers: []
	W0819 19:16:58.967291  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:58.967298  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:58.967349  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:59.000887  438716 cri.go:89] found id: ""
	I0819 19:16:59.000923  438716 logs.go:276] 0 containers: []
	W0819 19:16:59.000934  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:59.000942  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:59.001009  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:59.041386  438716 cri.go:89] found id: ""
	I0819 19:16:59.041417  438716 logs.go:276] 0 containers: []
	W0819 19:16:59.041428  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:59.041436  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:59.041499  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:59.080036  438716 cri.go:89] found id: ""
	I0819 19:16:59.080078  438716 logs.go:276] 0 containers: []
	W0819 19:16:59.080090  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:59.080099  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:59.080168  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:59.113946  438716 cri.go:89] found id: ""
	I0819 19:16:59.113982  438716 logs.go:276] 0 containers: []
	W0819 19:16:59.113995  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:59.114004  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:59.114066  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:59.155413  438716 cri.go:89] found id: ""
	I0819 19:16:59.155437  438716 logs.go:276] 0 containers: []
	W0819 19:16:59.155446  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:59.155456  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:59.155477  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:59.223795  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:59.223815  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:59.223828  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:59.304516  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:59.304554  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:59.344975  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:59.345005  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:59.397751  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:59.397789  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:58.402453  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:00.901494  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:02.043611  438245 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.355651212s)
	I0819 19:17:02.043735  438245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:17:02.066981  438245 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 19:17:02.083179  438245 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:17:02.100807  438245 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:17:02.100829  438245 kubeadm.go:157] found existing configuration files:
	
	I0819 19:17:02.100877  438245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0819 19:17:02.116462  438245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:17:02.116534  438245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:17:02.127313  438245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0819 19:17:02.147096  438245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:17:02.147170  438245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:17:02.159262  438245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0819 19:17:02.168825  438245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:17:02.168918  438245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:17:02.179354  438245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0819 19:17:02.188982  438245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:17:02.189051  438245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 19:17:02.199291  438245 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
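	Before running "kubeadm init", minikube checks each existing kubeconfig under /etc/kubernetes for the expected control-plane URL and deletes any that do not match, which is what the grep/rm pairs above show. A minimal shell sketch of that cleanup (illustrative only, not minikube's actual implementation; 8444 is the port this profile uses):
	
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q "https://control-plane.minikube.internal:8444" "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	    done
	
	In this run every grep exits non-zero because the files are missing, so the rm -f calls are harmless and kubeadm regenerates all four kubeconfigs during init.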
	I0819 19:17:01.914433  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:17:01.927468  438716 kubeadm.go:597] duration metric: took 4m3.453401239s to restartPrimaryControlPlane
	W0819 19:17:01.927564  438716 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 19:17:01.927600  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 19:17:02.647971  438716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:17:02.665946  438716 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 19:17:02.676665  438716 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:17:02.686818  438716 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:17:02.686840  438716 kubeadm.go:157] found existing configuration files:
	
	I0819 19:17:02.686885  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 19:17:02.697160  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:17:02.697228  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:17:02.707774  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 19:17:02.717251  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:17:02.717310  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:17:02.727481  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 19:17:02.738085  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:17:02.738141  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:17:02.749286  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 19:17:02.759965  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:17:02.760025  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 19:17:02.770753  438716 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 19:17:02.835857  438716 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 19:17:02.835940  438716 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 19:17:02.983775  438716 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 19:17:02.983974  438716 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 19:17:02.984149  438716 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 19:17:03.173404  438716 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 19:17:03.175412  438716 out.go:235]   - Generating certificates and keys ...
	I0819 19:17:03.175520  438716 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 19:17:03.175659  438716 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 19:17:03.175805  438716 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 19:17:03.175913  438716 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 19:17:03.176021  438716 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 19:17:03.176125  438716 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 19:17:03.176626  438716 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 19:17:03.177624  438716 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 19:17:03.178399  438716 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 19:17:03.179325  438716 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 19:17:03.179599  438716 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 19:17:03.179702  438716 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 19:17:03.416467  438716 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 19:17:03.505378  438716 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 19:17:03.588959  438716 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 19:17:03.680602  438716 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 19:17:03.697717  438716 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 19:17:03.700436  438716 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 19:17:03.700579  438716 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 19:17:03.858804  438716 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 19:17:03.861395  438716 out.go:235]   - Booting up control plane ...
	I0819 19:17:03.861520  438716 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 19:17:03.877387  438716 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 19:17:03.878611  438716 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 19:17:03.882842  438716 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 19:17:03.887436  438716 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 19:17:02.902839  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:05.402376  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:02.248409  438245 kubeadm.go:310] W0819 19:17:02.217617    2563 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 19:17:02.250447  438245 kubeadm.go:310] W0819 19:17:02.219827    2563 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 19:17:02.377127  438245 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 19:17:06.962848  438295 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:17:06.984774  438295 api_server.go:72] duration metric: took 4m23.117653428s to wait for apiserver process to appear ...
	I0819 19:17:06.984811  438295 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:17:06.984865  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:17:06.984939  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:17:07.025158  438295 cri.go:89] found id: "d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a"
	I0819 19:17:07.025201  438295 cri.go:89] found id: ""
	I0819 19:17:07.025213  438295 logs.go:276] 1 containers: [d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a]
	I0819 19:17:07.025287  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:07.032365  438295 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:17:07.032446  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:17:07.073368  438295 cri.go:89] found id: "a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672"
	I0819 19:17:07.073394  438295 cri.go:89] found id: ""
	I0819 19:17:07.073403  438295 logs.go:276] 1 containers: [a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672]
	I0819 19:17:07.073463  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:07.078781  438295 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:17:07.078891  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:17:07.123263  438295 cri.go:89] found id: "a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f"
	I0819 19:17:07.123293  438295 cri.go:89] found id: ""
	I0819 19:17:07.123303  438295 logs.go:276] 1 containers: [a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f]
	I0819 19:17:07.123365  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:07.128485  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:17:07.128579  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:17:07.167105  438295 cri.go:89] found id: "c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22"
	I0819 19:17:07.167137  438295 cri.go:89] found id: ""
	I0819 19:17:07.167148  438295 logs.go:276] 1 containers: [c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22]
	I0819 19:17:07.167215  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:07.171571  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:17:07.171641  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:17:07.215524  438295 cri.go:89] found id: "3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338"
	I0819 19:17:07.215547  438295 cri.go:89] found id: ""
	I0819 19:17:07.215555  438295 logs.go:276] 1 containers: [3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338]
	I0819 19:17:07.215621  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:07.221604  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:17:07.221676  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:17:07.263106  438295 cri.go:89] found id: "6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071"
	I0819 19:17:07.263140  438295 cri.go:89] found id: ""
	I0819 19:17:07.263149  438295 logs.go:276] 1 containers: [6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071]
	I0819 19:17:07.263209  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:07.267703  438295 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:17:07.267770  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:17:07.316006  438295 cri.go:89] found id: ""
	I0819 19:17:07.316042  438295 logs.go:276] 0 containers: []
	W0819 19:17:07.316054  438295 logs.go:278] No container was found matching "kindnet"
	I0819 19:17:07.316062  438295 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 19:17:07.316132  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 19:17:07.361100  438295 cri.go:89] found id: "902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8"
	I0819 19:17:07.361123  438295 cri.go:89] found id: "44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6"
	I0819 19:17:07.361126  438295 cri.go:89] found id: ""
	I0819 19:17:07.361133  438295 logs.go:276] 2 containers: [902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8 44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6]
	I0819 19:17:07.361190  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:07.366949  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:07.372724  438295 logs.go:123] Gathering logs for kubelet ...
	I0819 19:17:07.372748  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 19:17:07.413540  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.671901     936 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:17:07.413722  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672098     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	W0819 19:17:07.413858  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.672624     936 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:17:07.414017  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672667     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	I0819 19:17:07.452061  438295 logs.go:123] Gathering logs for coredns [a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f] ...
	I0819 19:17:07.452104  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f"
	I0819 19:17:07.490598  438295 logs.go:123] Gathering logs for kube-scheduler [c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22] ...
	I0819 19:17:07.490636  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22"
	I0819 19:17:07.530454  438295 logs.go:123] Gathering logs for kube-proxy [3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338] ...
	I0819 19:17:07.530486  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338"
	I0819 19:17:07.581488  438295 logs.go:123] Gathering logs for storage-provisioner [902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8] ...
	I0819 19:17:07.581528  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8"
	I0819 19:17:07.621752  438295 logs.go:123] Gathering logs for storage-provisioner [44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6] ...
	I0819 19:17:07.621787  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6"
	I0819 19:17:07.661330  438295 logs.go:123] Gathering logs for container status ...
	I0819 19:17:07.661365  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:17:07.709227  438295 logs.go:123] Gathering logs for dmesg ...
	I0819 19:17:07.709261  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:17:07.724634  438295 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:17:07.724670  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 19:17:07.850212  438295 logs.go:123] Gathering logs for kube-apiserver [d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a] ...
	I0819 19:17:07.850247  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a"
	I0819 19:17:07.894464  438295 logs.go:123] Gathering logs for etcd [a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672] ...
	I0819 19:17:07.894507  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672"
	I0819 19:17:07.943807  438295 logs.go:123] Gathering logs for kube-controller-manager [6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071] ...
	I0819 19:17:07.943841  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071"
	I0819 19:17:08.007428  438295 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:17:08.007463  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:17:08.487397  438295 out.go:358] Setting ErrFile to fd 2...
	I0819 19:17:08.487435  438295 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 19:17:08.487518  438295 out.go:270] X Problems detected in kubelet:
	W0819 19:17:08.487534  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.671901     936 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:17:08.487546  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672098     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	W0819 19:17:08.487560  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.672624     936 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:17:08.487574  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672667     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	I0819 19:17:08.487584  438295 out.go:358] Setting ErrFile to fd 2...
	I0819 19:17:08.487598  438295 out.go:392] TERM=,COLORTERM=, which probably does not support color
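	The log-gathering pass above runs a fixed set of node-side commands (journalctl for the kubelet, crictl for per-container logs, dmesg, and kubectl describe nodes). The same triage can be repeated by hand when a run like this needs a closer look; a minimal sketch, assuming SSH access to the guest and using the profile name from this run purely as an example:
	  minikube -p embed-certs-024748 ssh
	  sudo journalctl -u kubelet -n 400
	  sudo crictl ps -a
	  sudo crictl logs --tail 400 <container-id>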
	I0819 19:17:10.237580  438245 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 19:17:10.237675  438245 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 19:17:10.237792  438245 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 19:17:10.237934  438245 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 19:17:10.238088  438245 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 19:17:10.238194  438245 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 19:17:10.239873  438245 out.go:235]   - Generating certificates and keys ...
	I0819 19:17:10.239957  438245 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 19:17:10.240051  438245 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 19:17:10.240187  438245 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 19:17:10.240294  438245 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 19:17:10.240410  438245 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 19:17:10.240495  438245 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 19:17:10.240598  438245 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 19:17:10.240680  438245 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 19:17:10.240747  438245 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 19:17:10.240843  438245 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 19:17:10.240886  438245 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 19:17:10.240958  438245 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 19:17:10.241024  438245 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 19:17:10.241094  438245 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 19:17:10.241159  438245 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 19:17:10.241248  438245 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 19:17:10.241328  438245 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 19:17:10.241431  438245 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 19:17:10.241535  438245 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 19:17:10.243764  438245 out.go:235]   - Booting up control plane ...
	I0819 19:17:10.243859  438245 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 19:17:10.243934  438245 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 19:17:10.243994  438245 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 19:17:10.244131  438245 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 19:17:10.244263  438245 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 19:17:10.244301  438245 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 19:17:10.244458  438245 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 19:17:10.244611  438245 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 19:17:10.244685  438245 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.412341ms
	I0819 19:17:10.244770  438245 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 19:17:10.244850  438245 kubeadm.go:310] [api-check] The API server is healthy after 5.002047877s
	I0819 19:17:10.244953  438245 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 19:17:10.245093  438245 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 19:17:10.245199  438245 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 19:17:10.245400  438245 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-982795 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 19:17:10.245465  438245 kubeadm.go:310] [bootstrap-token] Using token: trsfx5.kx2phd1605yhia2w
	I0819 19:17:10.247722  438245 out.go:235]   - Configuring RBAC rules ...
	I0819 19:17:10.247861  438245 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 19:17:10.247955  438245 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 19:17:10.248144  438245 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 19:17:10.248264  438245 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 19:17:10.248379  438245 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 19:17:10.248468  438245 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 19:17:10.248567  438245 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 19:17:10.248612  438245 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 19:17:10.248654  438245 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 19:17:10.248660  438245 kubeadm.go:310] 
	I0819 19:17:10.248708  438245 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 19:17:10.248713  438245 kubeadm.go:310] 
	I0819 19:17:10.248779  438245 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 19:17:10.248786  438245 kubeadm.go:310] 
	I0819 19:17:10.248806  438245 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 19:17:10.248866  438245 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 19:17:10.248910  438245 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 19:17:10.248916  438245 kubeadm.go:310] 
	I0819 19:17:10.248966  438245 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 19:17:10.248972  438245 kubeadm.go:310] 
	I0819 19:17:10.249014  438245 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 19:17:10.249024  438245 kubeadm.go:310] 
	I0819 19:17:10.249069  438245 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 19:17:10.249136  438245 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 19:17:10.249209  438245 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 19:17:10.249221  438245 kubeadm.go:310] 
	I0819 19:17:10.249319  438245 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 19:17:10.249386  438245 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 19:17:10.249392  438245 kubeadm.go:310] 
	I0819 19:17:10.249464  438245 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token trsfx5.kx2phd1605yhia2w \
	I0819 19:17:10.249553  438245 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3fcbd90565c5acbc36a47b2db682cb22dce9b172c9bf3af21e506ebb67608039 \
	I0819 19:17:10.249575  438245 kubeadm.go:310] 	--control-plane 
	I0819 19:17:10.249581  438245 kubeadm.go:310] 
	I0819 19:17:10.249658  438245 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 19:17:10.249664  438245 kubeadm.go:310] 
	I0819 19:17:10.249734  438245 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token trsfx5.kx2phd1605yhia2w \
	I0819 19:17:10.249833  438245 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3fcbd90565c5acbc36a47b2db682cb22dce9b172c9bf3af21e506ebb67608039 
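	The join command printed above pins the cluster CA with --discovery-token-ca-cert-hash. If that hash ever needs to be recomputed (for instance after the init output has scrolled away), the standard kubeadm recipe derives it from the CA certificate; a minimal sketch, assuming the certificateDir reported earlier in this run (/var/lib/minikube/certs):
	  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'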
	I0819 19:17:10.249849  438245 cni.go:84] Creating CNI manager for ""
	I0819 19:17:10.249857  438245 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:17:10.252133  438245 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 19:17:07.403590  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:09.901861  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:10.253419  438245 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 19:17:10.264266  438245 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
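	The 496-byte conflist copied above is what backs the bridge CNI configuration announced a few lines earlier; the log does not show its contents. A minimal sketch of a bridge-plugin conflist of this general shape (field values, including the pod subnet, are illustrative and not the exact bytes minikube writes):
	  sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
	  {
	    "cniVersion": "0.3.1",
	    "name": "bridge",
	    "plugins": [
	      {
	        "type": "bridge",
	        "bridge": "bridge",
	        "isDefaultGateway": true,
	        "ipMasq": true,
	        "hairpinMode": true,
	        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	      },
	      { "type": "portmap", "capabilities": { "portMappings": true } }
	    ]
	  }
	  EOF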
	I0819 19:17:10.289509  438245 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 19:17:10.289661  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-982795 minikube.k8s.io/updated_at=2024_08_19T19_17_10_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411 minikube.k8s.io/name=default-k8s-diff-port-982795 minikube.k8s.io/primary=true
	I0819 19:17:10.289663  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:10.322738  438245 ops.go:34] apiserver oom_adj: -16
	I0819 19:17:10.519946  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:11.020736  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:11.520925  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:12.020276  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:12.520277  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:13.020787  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:13.520048  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:14.020893  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:14.520869  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:14.642214  438245 kubeadm.go:1113] duration metric: took 4.352638211s to wait for elevateKubeSystemPrivileges
	I0819 19:17:14.642251  438245 kubeadm.go:394] duration metric: took 4m59.943476935s to StartCluster
	I0819 19:17:14.642295  438245 settings.go:142] acquiring lock: {Name:mk396fcf49a1d0e69583cf37ff3c819e37118163 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:17:14.642382  438245 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 19:17:14.644103  438245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/kubeconfig: {Name:mk8e7b4e1bb7da665111d2acd83eb48882c66853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:17:14.644408  438245 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.48 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 19:17:14.644550  438245 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 19:17:14.644641  438245 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-982795"
	I0819 19:17:14.644665  438245 config.go:182] Loaded profile config "default-k8s-diff-port-982795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:17:14.644687  438245 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-982795"
	W0819 19:17:14.644701  438245 addons.go:243] addon storage-provisioner should already be in state true
	I0819 19:17:14.644712  438245 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-982795"
	I0819 19:17:14.644735  438245 host.go:66] Checking if "default-k8s-diff-port-982795" exists ...
	I0819 19:17:14.644757  438245 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-982795"
	W0819 19:17:14.644770  438245 addons.go:243] addon metrics-server should already be in state true
	I0819 19:17:14.644678  438245 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-982795"
	I0819 19:17:14.644852  438245 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-982795"
	I0819 19:17:14.644797  438245 host.go:66] Checking if "default-k8s-diff-port-982795" exists ...
	I0819 19:17:14.645125  438245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:17:14.645176  438245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:17:14.645272  438245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:17:14.645291  438245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:17:14.645355  438245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:17:14.645401  438245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:17:14.646083  438245 out.go:177] * Verifying Kubernetes components...
	I0819 19:17:14.647579  438245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:17:14.662756  438245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42581
	I0819 19:17:14.663407  438245 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:17:14.664088  438245 main.go:141] libmachine: Using API Version  1
	I0819 19:17:14.664117  438245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:17:14.664528  438245 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:17:14.665189  438245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:17:14.665222  438245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:17:14.665665  438245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43637
	I0819 19:17:14.665842  438245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44021
	I0819 19:17:14.666204  438245 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:17:14.666321  438245 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:17:14.666761  438245 main.go:141] libmachine: Using API Version  1
	I0819 19:17:14.666783  438245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:17:14.666955  438245 main.go:141] libmachine: Using API Version  1
	I0819 19:17:14.666979  438245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:17:14.667173  438245 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:17:14.667363  438245 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:17:14.667592  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetState
	I0819 19:17:14.667786  438245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:17:14.667818  438245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:17:14.671231  438245 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-982795"
	W0819 19:17:14.671249  438245 addons.go:243] addon default-storageclass should already be in state true
	I0819 19:17:14.671273  438245 host.go:66] Checking if "default-k8s-diff-port-982795" exists ...
	I0819 19:17:14.671507  438245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:17:14.671533  438245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:17:14.682996  438245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36593
	I0819 19:17:14.683560  438245 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:17:14.684268  438245 main.go:141] libmachine: Using API Version  1
	I0819 19:17:14.684292  438245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:17:14.684686  438245 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:17:14.684899  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetState
	I0819 19:17:14.686943  438245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44459
	I0819 19:17:14.687384  438245 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:17:14.687309  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:17:14.687874  438245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46587
	I0819 19:17:14.687965  438245 main.go:141] libmachine: Using API Version  1
	I0819 19:17:14.687980  438245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:17:14.688367  438245 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:17:14.688420  438245 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:17:14.688623  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetState
	I0819 19:17:14.689039  438245 main.go:141] libmachine: Using API Version  1
	I0819 19:17:14.689362  438245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:17:14.689690  438245 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:17:14.690179  438245 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:17:14.690626  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:17:14.690789  438245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:17:14.690823  438245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:17:14.690938  438245 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:17:14.690958  438245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 19:17:14.690979  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:17:14.692114  438245 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 19:17:11.902284  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:13.903205  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:16.402298  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:14.693147  438245 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 19:17:14.693163  438245 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 19:17:14.693182  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:17:14.694601  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:17:14.695302  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:17:14.695333  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:17:14.695541  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:17:14.695760  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:17:14.696133  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:17:14.696303  438245 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa Username:docker}
	I0819 19:17:14.696554  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:17:14.696979  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:17:14.697003  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:17:14.697110  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:17:14.697274  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:17:14.697445  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:17:14.697578  438245 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa Username:docker}
	I0819 19:17:14.708592  438245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38807
	I0819 19:17:14.709140  438245 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:17:14.709716  438245 main.go:141] libmachine: Using API Version  1
	I0819 19:17:14.709737  438245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:17:14.710049  438245 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:17:14.710269  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetState
	I0819 19:17:14.711887  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:17:14.712147  438245 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 19:17:14.712162  438245 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 19:17:14.712179  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:17:14.715593  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:17:14.716040  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:17:14.716062  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:17:14.716384  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:17:14.716561  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:17:14.716710  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:17:14.716938  438245 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa Username:docker}
	I0819 19:17:14.874857  438245 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:17:14.903798  438245 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-982795" to be "Ready" ...
	I0819 19:17:14.919842  438245 node_ready.go:49] node "default-k8s-diff-port-982795" has status "Ready":"True"
	I0819 19:17:14.919866  438245 node_ready.go:38] duration metric: took 16.039402ms for node "default-k8s-diff-port-982795" to be "Ready" ...
	I0819 19:17:14.919877  438245 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:17:14.932785  438245 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-845gx" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:15.019664  438245 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 19:17:15.019718  438245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 19:17:15.030317  438245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 19:17:15.056177  438245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:17:15.074202  438245 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 19:17:15.074235  438245 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 19:17:15.127037  438245 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 19:17:15.127071  438245 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 19:17:15.217951  438245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 19:17:15.351034  438245 main.go:141] libmachine: Making call to close driver server
	I0819 19:17:15.351067  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .Close
	I0819 19:17:15.351398  438245 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:17:15.351417  438245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:17:15.351429  438245 main.go:141] libmachine: Making call to close driver server
	I0819 19:17:15.351441  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .Close
	I0819 19:17:15.351678  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | Closing plugin on server side
	I0819 19:17:15.351728  438245 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:17:15.351750  438245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:17:15.357999  438245 main.go:141] libmachine: Making call to close driver server
	I0819 19:17:15.358023  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .Close
	I0819 19:17:15.358291  438245 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:17:15.358316  438245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:17:16.196638  438245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.140417152s)
	I0819 19:17:16.196694  438245 main.go:141] libmachine: Making call to close driver server
	I0819 19:17:16.196707  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .Close
	I0819 19:17:16.197022  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | Closing plugin on server side
	I0819 19:17:16.197112  438245 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:17:16.197137  438245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:17:16.197157  438245 main.go:141] libmachine: Making call to close driver server
	I0819 19:17:16.197167  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .Close
	I0819 19:17:16.197449  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | Closing plugin on server side
	I0819 19:17:16.197493  438245 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:17:16.197505  438245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:17:16.638069  438245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.42006496s)
	I0819 19:17:16.638141  438245 main.go:141] libmachine: Making call to close driver server
	I0819 19:17:16.638159  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .Close
	I0819 19:17:16.638488  438245 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:17:16.638518  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | Closing plugin on server side
	I0819 19:17:16.638529  438245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:17:16.638564  438245 main.go:141] libmachine: Making call to close driver server
	I0819 19:17:16.638574  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .Close
	I0819 19:17:16.638861  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | Closing plugin on server side
	I0819 19:17:16.638896  438245 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:17:16.638904  438245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:17:16.638915  438245 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-982795"
	I0819 19:17:16.641476  438245 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0819 19:17:16.642733  438245 addons.go:510] duration metric: took 1.998196502s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0819 19:17:16.954631  438245 pod_ready.go:103] pod "coredns-6f6b679f8f-845gx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:18.489333  438295 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0819 19:17:18.494609  438295 api_server.go:279] https://192.168.72.96:8443/healthz returned 200:
	ok
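	The healthz probe that returned 200/ok above hits the apiserver endpoint directly; in a default configuration /healthz is typically readable even without credentials, so the same check can be reproduced outside the test harness. A minimal sketch, reusing the address from this run (-k skips TLS verification and is for illustration only):
	  curl -k https://192.168.72.96:8443/healthz
	  # or, with cluster credentials:
	  kubectl get --raw /healthz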
	I0819 19:17:18.495587  438295 api_server.go:141] control plane version: v1.31.0
	I0819 19:17:18.495613  438295 api_server.go:131] duration metric: took 11.510793296s to wait for apiserver health ...
	I0819 19:17:18.495624  438295 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:17:18.495656  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:17:18.495735  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:17:18.540446  438295 cri.go:89] found id: "d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a"
	I0819 19:17:18.540477  438295 cri.go:89] found id: ""
	I0819 19:17:18.540487  438295 logs.go:276] 1 containers: [d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a]
	I0819 19:17:18.540555  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:18.551443  438295 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:17:18.551527  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:17:18.592388  438295 cri.go:89] found id: "a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672"
	I0819 19:17:18.592416  438295 cri.go:89] found id: ""
	I0819 19:17:18.592427  438295 logs.go:276] 1 containers: [a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672]
	I0819 19:17:18.592495  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:18.597534  438295 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:17:18.597615  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:17:18.637782  438295 cri.go:89] found id: "a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f"
	I0819 19:17:18.637804  438295 cri.go:89] found id: ""
	I0819 19:17:18.637812  438295 logs.go:276] 1 containers: [a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f]
	I0819 19:17:18.637861  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:18.642557  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:17:18.642618  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:17:18.679573  438295 cri.go:89] found id: "c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22"
	I0819 19:17:18.679597  438295 cri.go:89] found id: ""
	I0819 19:17:18.679605  438295 logs.go:276] 1 containers: [c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22]
	I0819 19:17:18.679657  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:18.684160  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:17:18.684230  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:17:18.726848  438295 cri.go:89] found id: "3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338"
	I0819 19:17:18.726881  438295 cri.go:89] found id: ""
	I0819 19:17:18.726889  438295 logs.go:276] 1 containers: [3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338]
	I0819 19:17:18.726943  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:18.731422  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:17:18.731484  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:17:18.773623  438295 cri.go:89] found id: "6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071"
	I0819 19:17:18.773649  438295 cri.go:89] found id: ""
	I0819 19:17:18.773658  438295 logs.go:276] 1 containers: [6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071]
	I0819 19:17:18.773709  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:18.779609  438295 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:17:18.779687  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:17:18.822876  438295 cri.go:89] found id: ""
	I0819 19:17:18.822911  438295 logs.go:276] 0 containers: []
	W0819 19:17:18.822922  438295 logs.go:278] No container was found matching "kindnet"
	I0819 19:17:18.822931  438295 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 19:17:18.822998  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 19:17:18.868653  438295 cri.go:89] found id: "902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8"
	I0819 19:17:18.868685  438295 cri.go:89] found id: "44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6"
	I0819 19:17:18.868691  438295 cri.go:89] found id: ""
	I0819 19:17:18.868701  438295 logs.go:276] 2 containers: [902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8 44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6]
	I0819 19:17:18.868776  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:18.873136  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:18.877397  438295 logs.go:123] Gathering logs for kube-proxy [3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338] ...
	I0819 19:17:18.877425  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338"
	I0819 19:17:18.918085  438295 logs.go:123] Gathering logs for kube-controller-manager [6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071] ...
	I0819 19:17:18.918118  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071"
	I0819 19:17:18.973344  438295 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:17:18.973378  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:17:18.901539  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:20.902550  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:19.440295  438245 pod_ready.go:103] pod "coredns-6f6b679f8f-845gx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:21.939652  438245 pod_ready.go:103] pod "coredns-6f6b679f8f-845gx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:19.443625  438295 logs.go:123] Gathering logs for container status ...
	I0819 19:17:19.443689  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:17:19.492650  438295 logs.go:123] Gathering logs for dmesg ...
	I0819 19:17:19.492696  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:17:19.507957  438295 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:17:19.507996  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 19:17:19.617295  438295 logs.go:123] Gathering logs for coredns [a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f] ...
	I0819 19:17:19.617341  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f"
	I0819 19:17:19.669869  438295 logs.go:123] Gathering logs for kube-scheduler [c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22] ...
	I0819 19:17:19.669930  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22"
	I0819 19:17:19.706649  438295 logs.go:123] Gathering logs for storage-provisioner [44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6] ...
	I0819 19:17:19.706681  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6"
	I0819 19:17:19.746742  438295 logs.go:123] Gathering logs for kubelet ...
	I0819 19:17:19.746780  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 19:17:19.796224  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.671901     936 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:17:19.796442  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672098     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	W0819 19:17:19.796622  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.672624     936 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:17:19.796845  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672667     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	I0819 19:17:19.836283  438295 logs.go:123] Gathering logs for kube-apiserver [d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a] ...
	I0819 19:17:19.836328  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a"
	I0819 19:17:19.889829  438295 logs.go:123] Gathering logs for etcd [a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672] ...
	I0819 19:17:19.889875  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672"
	I0819 19:17:19.938361  438295 logs.go:123] Gathering logs for storage-provisioner [902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8] ...
	I0819 19:17:19.938397  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8"
	I0819 19:17:19.978525  438295 out.go:358] Setting ErrFile to fd 2...
	I0819 19:17:19.978557  438295 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 19:17:19.978628  438295 out.go:270] X Problems detected in kubelet:
	W0819 19:17:19.978642  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.671901     936 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:17:19.978656  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672098     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	W0819 19:17:19.978669  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.672624     936 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:17:19.978680  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672667     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	I0819 19:17:19.978690  438295 out.go:358] Setting ErrFile to fd 2...
	I0819 19:17:19.978699  438295 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:17:23.941399  438245 pod_ready.go:93] pod "coredns-6f6b679f8f-845gx" in "kube-system" namespace has status "Ready":"True"
	I0819 19:17:23.941426  438245 pod_ready.go:82] duration metric: took 9.00859927s for pod "coredns-6f6b679f8f-845gx" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.941438  438245 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-tlxtt" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.946827  438245 pod_ready.go:93] pod "coredns-6f6b679f8f-tlxtt" in "kube-system" namespace has status "Ready":"True"
	I0819 19:17:23.946848  438245 pod_ready.go:82] duration metric: took 5.40058ms for pod "coredns-6f6b679f8f-tlxtt" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.946859  438245 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.956158  438245 pod_ready.go:93] pod "etcd-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"True"
	I0819 19:17:23.956181  438245 pod_ready.go:82] duration metric: took 9.312871ms for pod "etcd-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.956193  438245 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.962573  438245 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"True"
	I0819 19:17:23.962595  438245 pod_ready.go:82] duration metric: took 6.3934ms for pod "kube-apiserver-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.962607  438245 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.968186  438245 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"True"
	I0819 19:17:23.968206  438245 pod_ready.go:82] duration metric: took 5.591464ms for pod "kube-controller-manager-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.968214  438245 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2v4hk" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:24.337409  438245 pod_ready.go:93] pod "kube-proxy-2v4hk" in "kube-system" namespace has status "Ready":"True"
	I0819 19:17:24.337443  438245 pod_ready.go:82] duration metric: took 369.220318ms for pod "kube-proxy-2v4hk" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:24.337460  438245 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:24.737326  438245 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"True"
	I0819 19:17:24.737362  438245 pod_ready.go:82] duration metric: took 399.891804ms for pod "kube-scheduler-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:24.737375  438245 pod_ready.go:39] duration metric: took 9.817484404s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:17:24.737396  438245 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:17:24.737467  438245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:17:24.753681  438245 api_server.go:72] duration metric: took 10.109231411s to wait for apiserver process to appear ...
	I0819 19:17:24.753711  438245 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:17:24.753734  438245 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8444/healthz ...
	I0819 19:17:24.757976  438245 api_server.go:279] https://192.168.61.48:8444/healthz returned 200:
	ok
	I0819 19:17:24.758875  438245 api_server.go:141] control plane version: v1.31.0
	I0819 19:17:24.758899  438245 api_server.go:131] duration metric: took 5.179486ms to wait for apiserver health ...
	I0819 19:17:24.758908  438245 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:17:24.944008  438245 system_pods.go:59] 9 kube-system pods found
	I0819 19:17:24.944053  438245 system_pods.go:61] "coredns-6f6b679f8f-845gx" [95155dd2-d46c-4445-b735-26eae16aaff9] Running
	I0819 19:17:24.944058  438245 system_pods.go:61] "coredns-6f6b679f8f-tlxtt" [150ac4be-bef1-4f0a-ab16-f085284686cb] Running
	I0819 19:17:24.944062  438245 system_pods.go:61] "etcd-default-k8s-diff-port-982795" [eb29f445-6242-4b60-a8d5-7c684df17926] Running
	I0819 19:17:24.944066  438245 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-982795" [2add6270-bf14-43e7-834b-3e629f46efa3] Running
	I0819 19:17:24.944070  438245 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-982795" [6b636d4b-0efa-4cef-b0d4-d4539ddc5c90] Running
	I0819 19:17:24.944073  438245 system_pods.go:61] "kube-proxy-2v4hk" [042d5d54-6557-4d8e-8f4e-2d56e95882ce] Running
	I0819 19:17:24.944076  438245 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-982795" [6eff3815-26b3-4e95-a754-2dc65fd29126] Running
	I0819 19:17:24.944082  438245 system_pods.go:61] "metrics-server-6867b74b74-2dp5r" [04e0ce68-d9a2-426a-a0e9-47f6f7867efd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:17:24.944086  438245 system_pods.go:61] "storage-provisioner" [23fcea86-977e-4eb1-9e5a-23d6bdfb09c0] Running
	I0819 19:17:24.944094  438245 system_pods.go:74] duration metric: took 185.180015ms to wait for pod list to return data ...
	I0819 19:17:24.944104  438245 default_sa.go:34] waiting for default service account to be created ...
	I0819 19:17:25.137108  438245 default_sa.go:45] found service account: "default"
	I0819 19:17:25.137147  438245 default_sa.go:55] duration metric: took 193.033434ms for default service account to be created ...
	I0819 19:17:25.137160  438245 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 19:17:25.340115  438245 system_pods.go:86] 9 kube-system pods found
	I0819 19:17:25.340146  438245 system_pods.go:89] "coredns-6f6b679f8f-845gx" [95155dd2-d46c-4445-b735-26eae16aaff9] Running
	I0819 19:17:25.340155  438245 system_pods.go:89] "coredns-6f6b679f8f-tlxtt" [150ac4be-bef1-4f0a-ab16-f085284686cb] Running
	I0819 19:17:25.340161  438245 system_pods.go:89] "etcd-default-k8s-diff-port-982795" [eb29f445-6242-4b60-a8d5-7c684df17926] Running
	I0819 19:17:25.340167  438245 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-982795" [2add6270-bf14-43e7-834b-3e629f46efa3] Running
	I0819 19:17:25.340173  438245 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-982795" [6b636d4b-0efa-4cef-b0d4-d4539ddc5c90] Running
	I0819 19:17:25.340177  438245 system_pods.go:89] "kube-proxy-2v4hk" [042d5d54-6557-4d8e-8f4e-2d56e95882ce] Running
	I0819 19:17:25.340182  438245 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-982795" [6eff3815-26b3-4e95-a754-2dc65fd29126] Running
	I0819 19:17:25.340192  438245 system_pods.go:89] "metrics-server-6867b74b74-2dp5r" [04e0ce68-d9a2-426a-a0e9-47f6f7867efd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:17:25.340198  438245 system_pods.go:89] "storage-provisioner" [23fcea86-977e-4eb1-9e5a-23d6bdfb09c0] Running
	I0819 19:17:25.340211  438245 system_pods.go:126] duration metric: took 203.044324ms to wait for k8s-apps to be running ...
	I0819 19:17:25.340224  438245 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 19:17:25.340278  438245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:17:25.355190  438245 system_svc.go:56] duration metric: took 14.954269ms WaitForService to wait for kubelet
	I0819 19:17:25.355223  438245 kubeadm.go:582] duration metric: took 10.710777567s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 19:17:25.355252  438245 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:17:25.537425  438245 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:17:25.537459  438245 node_conditions.go:123] node cpu capacity is 2
	I0819 19:17:25.537472  438245 node_conditions.go:105] duration metric: took 182.213218ms to run NodePressure ...
	I0819 19:17:25.537491  438245 start.go:241] waiting for startup goroutines ...
	I0819 19:17:25.537501  438245 start.go:246] waiting for cluster config update ...
	I0819 19:17:25.537516  438245 start.go:255] writing updated cluster config ...
	I0819 19:17:25.537851  438245 ssh_runner.go:195] Run: rm -f paused
	I0819 19:17:25.589212  438245 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 19:17:25.591352  438245 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-982795" cluster and "default" namespace by default
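	The readiness checks recorded above for the "default-k8s-diff-port-982795" cluster (CoreDNS, the static control-plane pods, kube-proxy, then the apiserver healthz probe) can be approximated from a workstation with plain kubectl. This is only a rough, hand-run equivalent of what the harness does internally, using the context name from the "Done!" line and the label selectors from the pod_ready entries above:

	# Sketch only: approximate re-check of the readiness gates logged above.
	CTX=default-k8s-diff-port-982795
	kubectl --context "$CTX" -n kube-system get pods -o wide
	# Wait on the same label groups the harness polls (see the pod_ready entries above):
	for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
	           component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
	  kubectl --context "$CTX" -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=6m0s
	done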
	I0819 19:17:22.902846  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:25.401911  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:29.988042  438295 system_pods.go:59] 8 kube-system pods found
	I0819 19:17:29.988074  438295 system_pods.go:61] "coredns-6f6b679f8f-7ww4z" [bbde00d4-6027-4d8d-b51e-bd68915da166] Running
	I0819 19:17:29.988080  438295 system_pods.go:61] "etcd-embed-certs-024748" [846ff0f0-5399-43fd-8e7b-1f64997cd291] Running
	I0819 19:17:29.988084  438295 system_pods.go:61] "kube-apiserver-embed-certs-024748" [3ff558d6-e82e-47a0-bb81-15244bee6470] Running
	I0819 19:17:29.988088  438295 system_pods.go:61] "kube-controller-manager-embed-certs-024748" [993b82ba-e8e7-4896-a06b-87c4f08d5985] Running
	I0819 19:17:29.988092  438295 system_pods.go:61] "kube-proxy-bmmbh" [1f77f152-f5f4-40f6-9632-1eaa36b9ea31] Running
	I0819 19:17:29.988095  438295 system_pods.go:61] "kube-scheduler-embed-certs-024748" [34684d4c-2479-45c5-883b-158cf9f974f5] Running
	I0819 19:17:29.988100  438295 system_pods.go:61] "metrics-server-6867b74b74-kxcwh" [15f86629-d916-4fdc-9ecf-9cb1b6c83f85] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:17:29.988104  438295 system_pods.go:61] "storage-provisioner" [7acb6ce1-21b6-4cdd-a5cb-76d694fc0a38] Running
	I0819 19:17:29.988113  438295 system_pods.go:74] duration metric: took 11.492481541s to wait for pod list to return data ...
	I0819 19:17:29.988120  438295 default_sa.go:34] waiting for default service account to be created ...
	I0819 19:17:29.991728  438295 default_sa.go:45] found service account: "default"
	I0819 19:17:29.991755  438295 default_sa.go:55] duration metric: took 3.62838ms for default service account to be created ...
	I0819 19:17:29.991764  438295 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 19:17:29.997212  438295 system_pods.go:86] 8 kube-system pods found
	I0819 19:17:29.997237  438295 system_pods.go:89] "coredns-6f6b679f8f-7ww4z" [bbde00d4-6027-4d8d-b51e-bd68915da166] Running
	I0819 19:17:29.997243  438295 system_pods.go:89] "etcd-embed-certs-024748" [846ff0f0-5399-43fd-8e7b-1f64997cd291] Running
	I0819 19:17:29.997247  438295 system_pods.go:89] "kube-apiserver-embed-certs-024748" [3ff558d6-e82e-47a0-bb81-15244bee6470] Running
	I0819 19:17:29.997252  438295 system_pods.go:89] "kube-controller-manager-embed-certs-024748" [993b82ba-e8e7-4896-a06b-87c4f08d5985] Running
	I0819 19:17:29.997256  438295 system_pods.go:89] "kube-proxy-bmmbh" [1f77f152-f5f4-40f6-9632-1eaa36b9ea31] Running
	I0819 19:17:29.997260  438295 system_pods.go:89] "kube-scheduler-embed-certs-024748" [34684d4c-2479-45c5-883b-158cf9f974f5] Running
	I0819 19:17:29.997267  438295 system_pods.go:89] "metrics-server-6867b74b74-kxcwh" [15f86629-d916-4fdc-9ecf-9cb1b6c83f85] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:17:29.997270  438295 system_pods.go:89] "storage-provisioner" [7acb6ce1-21b6-4cdd-a5cb-76d694fc0a38] Running
	I0819 19:17:29.997277  438295 system_pods.go:126] duration metric: took 5.507363ms to wait for k8s-apps to be running ...
	I0819 19:17:29.997283  438295 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 19:17:29.997329  438295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:17:30.015349  438295 system_svc.go:56] duration metric: took 18.05422ms WaitForService to wait for kubelet
	I0819 19:17:30.015385  438295 kubeadm.go:582] duration metric: took 4m46.148274918s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 19:17:30.015408  438295 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:17:30.019744  438295 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:17:30.019767  438295 node_conditions.go:123] node cpu capacity is 2
	I0819 19:17:30.019779  438295 node_conditions.go:105] duration metric: took 4.364435ms to run NodePressure ...
	I0819 19:17:30.019791  438295 start.go:241] waiting for startup goroutines ...
	I0819 19:17:30.019798  438295 start.go:246] waiting for cluster config update ...
	I0819 19:17:30.019809  438295 start.go:255] writing updated cluster config ...
	I0819 19:17:30.020080  438295 ssh_runner.go:195] Run: rm -f paused
	I0819 19:17:30.071945  438295 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 19:17:30.073912  438295 out.go:177] * Done! kubectl is now configured to use "embed-certs-024748" cluster and "default" namespace by default
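	The repeated "Gathering logs for ..." entries above show minikube's collector resolving each control-plane container ID with crictl and then tailing its logs. A minimal sketch of the same sequence, run by hand on the node (assuming access via "minikube ssh -p embed-certs-024748" and that crictl is on PATH, as the log implies):

	# Sketch only: mirrors the commands recorded in the log above.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager storage-provisioner; do
	  for id in $(sudo crictl ps -a --quiet --name="$name"); do
	    echo "=== $name ($id) ==="
	    sudo /usr/bin/crictl logs --tail 400 "$id"
	  done
	done
	# Host-level logs are gathered the same way as in the report:
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400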
	I0819 19:17:27.901471  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:29.901560  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:32.401214  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:34.402184  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:36.901979  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:38.902132  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:41.401103  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:43.889122  438716 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 19:17:43.889226  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:17:43.889441  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:17:43.402531  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:45.402739  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:48.889647  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:17:48.889896  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:17:47.902033  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:48.402784  438001 pod_ready.go:82] duration metric: took 4m0.007573449s for pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace to be "Ready" ...
	E0819 19:17:48.402807  438001 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0819 19:17:48.402814  438001 pod_ready.go:39] duration metric: took 4m5.043625176s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:17:48.402837  438001 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:17:48.402866  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:17:48.402916  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:17:48.465049  438001 cri.go:89] found id: "cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094"
	I0819 19:17:48.465072  438001 cri.go:89] found id: ""
	I0819 19:17:48.465081  438001 logs.go:276] 1 containers: [cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094]
	I0819 19:17:48.465157  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:48.469640  438001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:17:48.469708  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:17:48.506800  438001 cri.go:89] found id: "27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a"
	I0819 19:17:48.506825  438001 cri.go:89] found id: ""
	I0819 19:17:48.506836  438001 logs.go:276] 1 containers: [27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a]
	I0819 19:17:48.506900  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:48.511810  438001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:17:48.511899  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:17:48.558215  438001 cri.go:89] found id: "6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa"
	I0819 19:17:48.558240  438001 cri.go:89] found id: ""
	I0819 19:17:48.558250  438001 logs.go:276] 1 containers: [6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa]
	I0819 19:17:48.558308  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:48.562785  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:17:48.562844  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:17:48.602715  438001 cri.go:89] found id: "123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f"
	I0819 19:17:48.602738  438001 cri.go:89] found id: ""
	I0819 19:17:48.602748  438001 logs.go:276] 1 containers: [123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f]
	I0819 19:17:48.602815  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:48.607456  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:17:48.607512  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:17:48.648285  438001 cri.go:89] found id: "236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a"
	I0819 19:17:48.648314  438001 cri.go:89] found id: ""
	I0819 19:17:48.648324  438001 logs.go:276] 1 containers: [236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a]
	I0819 19:17:48.648374  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:48.653772  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:17:48.653830  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:17:48.697336  438001 cri.go:89] found id: "390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346"
	I0819 19:17:48.697365  438001 cri.go:89] found id: ""
	I0819 19:17:48.697376  438001 logs.go:276] 1 containers: [390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346]
	I0819 19:17:48.697438  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:48.701661  438001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:17:48.701726  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:17:48.737952  438001 cri.go:89] found id: ""
	I0819 19:17:48.737990  438001 logs.go:276] 0 containers: []
	W0819 19:17:48.738002  438001 logs.go:278] No container was found matching "kindnet"
	I0819 19:17:48.738010  438001 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 19:17:48.738076  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 19:17:48.780047  438001 cri.go:89] found id: "fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd"
	I0819 19:17:48.780076  438001 cri.go:89] found id: "482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6"
	I0819 19:17:48.780082  438001 cri.go:89] found id: ""
	I0819 19:17:48.780092  438001 logs.go:276] 2 containers: [fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd 482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6]
	I0819 19:17:48.780168  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:48.784558  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:48.788803  438001 logs.go:123] Gathering logs for kube-apiserver [cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094] ...
	I0819 19:17:48.788826  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094"
	I0819 19:17:48.843469  438001 logs.go:123] Gathering logs for kube-scheduler [123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f] ...
	I0819 19:17:48.843501  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f"
	I0819 19:17:48.884461  438001 logs.go:123] Gathering logs for kube-proxy [236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a] ...
	I0819 19:17:48.884495  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a"
	I0819 19:17:48.927064  438001 logs.go:123] Gathering logs for storage-provisioner [fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd] ...
	I0819 19:17:48.927093  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd"
	I0819 19:17:48.963812  438001 logs.go:123] Gathering logs for container status ...
	I0819 19:17:48.963845  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:17:49.017381  438001 logs.go:123] Gathering logs for kubelet ...
	I0819 19:17:49.017420  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:17:49.093572  438001 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:17:49.093614  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 19:17:49.236680  438001 logs.go:123] Gathering logs for coredns [6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa] ...
	I0819 19:17:49.236721  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa"
	I0819 19:17:49.274636  438001 logs.go:123] Gathering logs for kube-controller-manager [390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346] ...
	I0819 19:17:49.274677  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346"
	I0819 19:17:49.326208  438001 logs.go:123] Gathering logs for storage-provisioner [482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6] ...
	I0819 19:17:49.326242  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6"
	I0819 19:17:49.363589  438001 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:17:49.363628  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:17:49.841705  438001 logs.go:123] Gathering logs for dmesg ...
	I0819 19:17:49.841757  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:17:49.858466  438001 logs.go:123] Gathering logs for etcd [27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a] ...
	I0819 19:17:49.858504  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a"
	I0819 19:17:52.406197  438001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:17:52.422951  438001 api_server.go:72] duration metric: took 4m16.822246565s to wait for apiserver process to appear ...
	I0819 19:17:52.422981  438001 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:17:52.423019  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:17:52.423075  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:17:52.464305  438001 cri.go:89] found id: "cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094"
	I0819 19:17:52.464327  438001 cri.go:89] found id: ""
	I0819 19:17:52.464335  438001 logs.go:276] 1 containers: [cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094]
	I0819 19:17:52.464387  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:52.468824  438001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:17:52.468904  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:17:52.508907  438001 cri.go:89] found id: "27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a"
	I0819 19:17:52.508929  438001 cri.go:89] found id: ""
	I0819 19:17:52.508937  438001 logs.go:276] 1 containers: [27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a]
	I0819 19:17:52.508998  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:52.513206  438001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:17:52.513281  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:17:52.553908  438001 cri.go:89] found id: "6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa"
	I0819 19:17:52.553940  438001 cri.go:89] found id: ""
	I0819 19:17:52.553948  438001 logs.go:276] 1 containers: [6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa]
	I0819 19:17:52.554007  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:52.558420  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:17:52.558487  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:17:52.598450  438001 cri.go:89] found id: "123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f"
	I0819 19:17:52.598480  438001 cri.go:89] found id: ""
	I0819 19:17:52.598491  438001 logs.go:276] 1 containers: [123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f]
	I0819 19:17:52.598564  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:52.603421  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:17:52.603485  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:17:52.639017  438001 cri.go:89] found id: "236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a"
	I0819 19:17:52.639049  438001 cri.go:89] found id: ""
	I0819 19:17:52.639060  438001 logs.go:276] 1 containers: [236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a]
	I0819 19:17:52.639129  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:52.645313  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:17:52.645392  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:17:52.687266  438001 cri.go:89] found id: "390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346"
	I0819 19:17:52.687296  438001 cri.go:89] found id: ""
	I0819 19:17:52.687305  438001 logs.go:276] 1 containers: [390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346]
	I0819 19:17:52.687369  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:52.691770  438001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:17:52.691830  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:17:52.734067  438001 cri.go:89] found id: ""
	I0819 19:17:52.734098  438001 logs.go:276] 0 containers: []
	W0819 19:17:52.734107  438001 logs.go:278] No container was found matching "kindnet"
	I0819 19:17:52.734113  438001 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 19:17:52.734171  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 19:17:52.781039  438001 cri.go:89] found id: "fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd"
	I0819 19:17:52.781062  438001 cri.go:89] found id: "482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6"
	I0819 19:17:52.781066  438001 cri.go:89] found id: ""
	I0819 19:17:52.781074  438001 logs.go:276] 2 containers: [fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd 482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6]
	I0819 19:17:52.781135  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:52.785730  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:52.789946  438001 logs.go:123] Gathering logs for kube-scheduler [123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f] ...
	I0819 19:17:52.789978  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f"
	I0819 19:17:52.830509  438001 logs.go:123] Gathering logs for kube-controller-manager [390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346] ...
	I0819 19:17:52.830541  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346"
	I0819 19:17:52.892964  438001 logs.go:123] Gathering logs for container status ...
	I0819 19:17:52.893017  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:17:52.947999  438001 logs.go:123] Gathering logs for kubelet ...
	I0819 19:17:52.948028  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:17:53.019377  438001 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:17:53.019423  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 19:17:53.134032  438001 logs.go:123] Gathering logs for kube-apiserver [cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094] ...
	I0819 19:17:53.134069  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094"
	I0819 19:17:53.186159  438001 logs.go:123] Gathering logs for etcd [27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a] ...
	I0819 19:17:53.186193  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a"
	I0819 19:17:53.236918  438001 logs.go:123] Gathering logs for storage-provisioner [482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6] ...
	I0819 19:17:53.236949  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6"
	I0819 19:17:53.275211  438001 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:17:53.275242  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:17:53.710352  438001 logs.go:123] Gathering logs for dmesg ...
	I0819 19:17:53.710396  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:17:53.726691  438001 logs.go:123] Gathering logs for coredns [6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa] ...
	I0819 19:17:53.726731  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa"
	I0819 19:17:53.768322  438001 logs.go:123] Gathering logs for kube-proxy [236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a] ...
	I0819 19:17:53.768361  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a"
	I0819 19:17:53.808546  438001 logs.go:123] Gathering logs for storage-provisioner [fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd] ...
	I0819 19:17:53.808577  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd"
	I0819 19:17:56.362339  438001 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I0819 19:17:56.366636  438001 api_server.go:279] https://192.168.39.106:8443/healthz returned 200:
	ok
	I0819 19:17:56.367838  438001 api_server.go:141] control plane version: v1.31.0
	I0819 19:17:56.367867  438001 api_server.go:131] duration metric: took 3.944877317s to wait for apiserver health ...
	I0819 19:17:56.367891  438001 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:17:56.367925  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:17:56.367991  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:17:56.412151  438001 cri.go:89] found id: "cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094"
	I0819 19:17:56.412179  438001 cri.go:89] found id: ""
	I0819 19:17:56.412187  438001 logs.go:276] 1 containers: [cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094]
	I0819 19:17:56.412247  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:56.416620  438001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:17:56.416795  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:17:56.456888  438001 cri.go:89] found id: "27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a"
	I0819 19:17:56.456918  438001 cri.go:89] found id: ""
	I0819 19:17:56.456927  438001 logs.go:276] 1 containers: [27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a]
	I0819 19:17:56.456984  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:56.461563  438001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:17:56.461667  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:17:56.506990  438001 cri.go:89] found id: "6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa"
	I0819 19:17:56.507018  438001 cri.go:89] found id: ""
	I0819 19:17:56.507028  438001 logs.go:276] 1 containers: [6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa]
	I0819 19:17:56.507099  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:56.511547  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:17:56.511616  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:17:56.551734  438001 cri.go:89] found id: "123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f"
	I0819 19:17:56.551761  438001 cri.go:89] found id: ""
	I0819 19:17:56.551772  438001 logs.go:276] 1 containers: [123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f]
	I0819 19:17:56.551837  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:56.556963  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:17:56.557039  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:17:56.601862  438001 cri.go:89] found id: "236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a"
	I0819 19:17:56.601892  438001 cri.go:89] found id: ""
	I0819 19:17:56.601902  438001 logs.go:276] 1 containers: [236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a]
	I0819 19:17:56.601971  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:56.606618  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:17:56.606706  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:17:56.649476  438001 cri.go:89] found id: "390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346"
	I0819 19:17:56.649501  438001 cri.go:89] found id: ""
	I0819 19:17:56.649510  438001 logs.go:276] 1 containers: [390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346]
	I0819 19:17:56.649561  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:56.654009  438001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:17:56.654071  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:17:56.707479  438001 cri.go:89] found id: ""
	I0819 19:17:56.707506  438001 logs.go:276] 0 containers: []
	W0819 19:17:56.707518  438001 logs.go:278] No container was found matching "kindnet"
	I0819 19:17:56.707527  438001 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 19:17:56.707585  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 19:17:56.749937  438001 cri.go:89] found id: "fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd"
	I0819 19:17:56.749961  438001 cri.go:89] found id: "482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6"
	I0819 19:17:56.749966  438001 cri.go:89] found id: ""
	I0819 19:17:56.749973  438001 logs.go:276] 2 containers: [fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd 482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6]
	I0819 19:17:56.750026  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:56.754791  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:56.758672  438001 logs.go:123] Gathering logs for etcd [27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a] ...
	I0819 19:17:56.758700  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a"
	I0819 19:17:56.811420  438001 logs.go:123] Gathering logs for kube-controller-manager [390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346] ...
	I0819 19:17:56.811461  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346"
	I0819 19:17:56.871550  438001 logs.go:123] Gathering logs for storage-provisioner [482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6] ...
	I0819 19:17:56.871588  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6"
	I0819 19:17:56.918183  438001 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:17:56.918224  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:17:57.297614  438001 logs.go:123] Gathering logs for container status ...
	I0819 19:17:57.297653  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:17:57.339092  438001 logs.go:123] Gathering logs for dmesg ...
	I0819 19:17:57.339127  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:17:57.355787  438001 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:17:57.355820  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 19:17:57.486287  438001 logs.go:123] Gathering logs for kube-apiserver [cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094] ...
	I0819 19:17:57.486328  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094"
	I0819 19:17:57.535864  438001 logs.go:123] Gathering logs for coredns [6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa] ...
	I0819 19:17:57.535903  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa"
	I0819 19:17:57.577211  438001 logs.go:123] Gathering logs for kube-scheduler [123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f] ...
	I0819 19:17:57.577248  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f"
	I0819 19:17:57.615928  438001 logs.go:123] Gathering logs for kube-proxy [236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a] ...
	I0819 19:17:57.615962  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a"
	I0819 19:17:57.655413  438001 logs.go:123] Gathering logs for storage-provisioner [fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd] ...
	I0819 19:17:57.655445  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd"
	I0819 19:17:57.704470  438001 logs.go:123] Gathering logs for kubelet ...
	I0819 19:17:57.704502  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:18:00.281191  438001 system_pods.go:59] 8 kube-system pods found
	I0819 19:18:00.281223  438001 system_pods.go:61] "coredns-6f6b679f8f-22lbt" [c8a5cabd-41d4-41cb-91c1-2db1f3471db3] Running
	I0819 19:18:00.281228  438001 system_pods.go:61] "etcd-no-preload-278232" [36d555a1-33e4-4c6c-b24e-2fee4fd84f2b] Running
	I0819 19:18:00.281232  438001 system_pods.go:61] "kube-apiserver-no-preload-278232" [af7173e5-c4ac-4ece-b8b9-bb81cb6b9bfd] Running
	I0819 19:18:00.281235  438001 system_pods.go:61] "kube-controller-manager-no-preload-278232" [2463d97a-5221-40ce-8fd7-08151165d6f7] Running
	I0819 19:18:00.281238  438001 system_pods.go:61] "kube-proxy-rcf49" [85d5814a-1ba9-46be-ab11-17bf40c0f029] Running
	I0819 19:18:00.281241  438001 system_pods.go:61] "kube-scheduler-no-preload-278232" [3b327704-f70c-4d6f-a774-15427a305472] Running
	I0819 19:18:00.281247  438001 system_pods.go:61] "metrics-server-6867b74b74-vxwrs" [e8b74128-b393-4f0f-90fe-e05f20d54acd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:18:00.281252  438001 system_pods.go:61] "storage-provisioner" [24766475-1a5b-4f1a-9350-3e891b5272cc] Running
	I0819 19:18:00.281260  438001 system_pods.go:74] duration metric: took 3.913361626s to wait for pod list to return data ...
	I0819 19:18:00.281267  438001 default_sa.go:34] waiting for default service account to be created ...
	I0819 19:18:00.283873  438001 default_sa.go:45] found service account: "default"
	I0819 19:18:00.283898  438001 default_sa.go:55] duration metric: took 2.625775ms for default service account to be created ...
	I0819 19:18:00.283907  438001 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 19:18:00.288985  438001 system_pods.go:86] 8 kube-system pods found
	I0819 19:18:00.289012  438001 system_pods.go:89] "coredns-6f6b679f8f-22lbt" [c8a5cabd-41d4-41cb-91c1-2db1f3471db3] Running
	I0819 19:18:00.289018  438001 system_pods.go:89] "etcd-no-preload-278232" [36d555a1-33e4-4c6c-b24e-2fee4fd84f2b] Running
	I0819 19:18:00.289022  438001 system_pods.go:89] "kube-apiserver-no-preload-278232" [af7173e5-c4ac-4ece-b8b9-bb81cb6b9bfd] Running
	I0819 19:18:00.289028  438001 system_pods.go:89] "kube-controller-manager-no-preload-278232" [2463d97a-5221-40ce-8fd7-08151165d6f7] Running
	I0819 19:18:00.289033  438001 system_pods.go:89] "kube-proxy-rcf49" [85d5814a-1ba9-46be-ab11-17bf40c0f029] Running
	I0819 19:18:00.289038  438001 system_pods.go:89] "kube-scheduler-no-preload-278232" [3b327704-f70c-4d6f-a774-15427a305472] Running
	I0819 19:18:00.289047  438001 system_pods.go:89] "metrics-server-6867b74b74-vxwrs" [e8b74128-b393-4f0f-90fe-e05f20d54acd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:18:00.289056  438001 system_pods.go:89] "storage-provisioner" [24766475-1a5b-4f1a-9350-3e891b5272cc] Running
	I0819 19:18:00.289067  438001 system_pods.go:126] duration metric: took 5.154385ms to wait for k8s-apps to be running ...
	I0819 19:18:00.289081  438001 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 19:18:00.289132  438001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:18:00.307128  438001 system_svc.go:56] duration metric: took 18.036826ms WaitForService to wait for kubelet
	I0819 19:18:00.307160  438001 kubeadm.go:582] duration metric: took 4m24.706461383s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 19:18:00.307183  438001 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:18:00.309818  438001 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:18:00.309866  438001 node_conditions.go:123] node cpu capacity is 2
	I0819 19:18:00.309879  438001 node_conditions.go:105] duration metric: took 2.691554ms to run NodePressure ...
	I0819 19:18:00.309892  438001 start.go:241] waiting for startup goroutines ...
	I0819 19:18:00.309901  438001 start.go:246] waiting for cluster config update ...
	I0819 19:18:00.309918  438001 start.go:255] writing updated cluster config ...
	I0819 19:18:00.310268  438001 ssh_runner.go:195] Run: rm -f paused
	I0819 19:18:00.366211  438001 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 19:18:00.368280  438001 out.go:177] * Done! kubectl is now configured to use "no-preload-278232" cluster and "default" namespace by default
	I0819 19:17:58.890611  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:17:58.890832  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:18:18.891960  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:18:18.892243  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:18:58.894609  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:18:58.894854  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:18:58.894869  438716 kubeadm.go:310] 
	I0819 19:18:58.894912  438716 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 19:18:58.894967  438716 kubeadm.go:310] 		timed out waiting for the condition
	I0819 19:18:58.894981  438716 kubeadm.go:310] 
	I0819 19:18:58.895024  438716 kubeadm.go:310] 	This error is likely caused by:
	I0819 19:18:58.895072  438716 kubeadm.go:310] 		- The kubelet is not running
	I0819 19:18:58.895344  438716 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 19:18:58.895388  438716 kubeadm.go:310] 
	I0819 19:18:58.895518  438716 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 19:18:58.895613  438716 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 19:18:58.895668  438716 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 19:18:58.895695  438716 kubeadm.go:310] 
	I0819 19:18:58.895839  438716 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 19:18:58.895959  438716 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 19:18:58.895972  438716 kubeadm.go:310] 
	I0819 19:18:58.896072  438716 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 19:18:58.896154  438716 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 19:18:58.896220  438716 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 19:18:58.896284  438716 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 19:18:58.896314  438716 kubeadm.go:310] 
	I0819 19:18:58.896819  438716 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 19:18:58.896946  438716 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 19:18:58.897028  438716 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0819 19:18:58.897193  438716 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0819 19:18:58.897249  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 19:18:59.361073  438716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:18:59.375791  438716 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:18:59.387650  438716 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:18:59.387697  438716 kubeadm.go:157] found existing configuration files:
	
	I0819 19:18:59.387756  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 19:18:59.397345  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:18:59.397409  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:18:59.408060  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 19:18:59.417658  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:18:59.417731  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:18:59.427765  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 19:18:59.437636  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:18:59.437712  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:18:59.447506  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 19:18:59.457100  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:18:59.457165  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 19:18:59.467185  438716 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 19:18:59.540706  438716 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 19:18:59.541005  438716 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 19:18:59.694109  438716 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 19:18:59.694238  438716 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 19:18:59.694350  438716 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 19:18:59.874268  438716 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 19:18:59.876259  438716 out.go:235]   - Generating certificates and keys ...
	I0819 19:18:59.876362  438716 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 19:18:59.876441  438716 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 19:18:59.876569  438716 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 19:18:59.876654  438716 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 19:18:59.876751  438716 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 19:18:59.876824  438716 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 19:18:59.876900  438716 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 19:18:59.877076  438716 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 19:18:59.877571  438716 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 19:18:59.877997  438716 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 19:18:59.878139  438716 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 19:18:59.878241  438716 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 19:19:00.153380  438716 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 19:19:00.359863  438716 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 19:19:00.470797  438716 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 19:19:00.590041  438716 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 19:19:00.614332  438716 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 19:19:00.615415  438716 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 19:19:00.615473  438716 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 19:19:00.756167  438716 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 19:19:00.757737  438716 out.go:235]   - Booting up control plane ...
	I0819 19:19:00.757873  438716 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 19:19:00.761484  438716 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 19:19:00.762431  438716 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 19:19:00.763241  438716 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 19:19:00.766155  438716 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 19:19:40.770166  438716 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 19:19:40.770378  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:19:40.770543  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:19:45.771352  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:19:45.771587  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:19:55.772027  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:19:55.772243  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:20:15.773008  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:20:15.773238  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:20:55.771311  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:20:55.771517  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:20:55.771530  438716 kubeadm.go:310] 
	I0819 19:20:55.771578  438716 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 19:20:55.771750  438716 kubeadm.go:310] 		timed out waiting for the condition
	I0819 19:20:55.771784  438716 kubeadm.go:310] 
	I0819 19:20:55.771845  438716 kubeadm.go:310] 	This error is likely caused by:
	I0819 19:20:55.771891  438716 kubeadm.go:310] 		- The kubelet is not running
	I0819 19:20:55.772014  438716 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 19:20:55.772027  438716 kubeadm.go:310] 
	I0819 19:20:55.772125  438716 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 19:20:55.772162  438716 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 19:20:55.772188  438716 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 19:20:55.772196  438716 kubeadm.go:310] 
	I0819 19:20:55.772272  438716 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 19:20:55.772336  438716 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 19:20:55.772343  438716 kubeadm.go:310] 
	I0819 19:20:55.772439  438716 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 19:20:55.772520  438716 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 19:20:55.772581  438716 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 19:20:55.772637  438716 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 19:20:55.772645  438716 kubeadm.go:310] 
	I0819 19:20:55.773758  438716 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 19:20:55.773880  438716 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 19:20:55.773971  438716 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0819 19:20:55.774067  438716 kubeadm.go:394] duration metric: took 7m57.361589371s to StartCluster
	I0819 19:20:55.774157  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:20:55.774243  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:20:55.818428  438716 cri.go:89] found id: ""
	I0819 19:20:55.818460  438716 logs.go:276] 0 containers: []
	W0819 19:20:55.818468  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:20:55.818475  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:20:55.818535  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:20:55.857714  438716 cri.go:89] found id: ""
	I0819 19:20:55.857747  438716 logs.go:276] 0 containers: []
	W0819 19:20:55.857758  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:20:55.857766  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:20:55.857841  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:20:55.891917  438716 cri.go:89] found id: ""
	I0819 19:20:55.891948  438716 logs.go:276] 0 containers: []
	W0819 19:20:55.891967  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:20:55.891976  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:20:55.892046  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:20:55.930608  438716 cri.go:89] found id: ""
	I0819 19:20:55.930643  438716 logs.go:276] 0 containers: []
	W0819 19:20:55.930656  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:20:55.930665  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:20:55.930734  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:20:55.966563  438716 cri.go:89] found id: ""
	I0819 19:20:55.966591  438716 logs.go:276] 0 containers: []
	W0819 19:20:55.966600  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:20:55.966607  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:20:55.966670  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:20:56.010392  438716 cri.go:89] found id: ""
	I0819 19:20:56.010421  438716 logs.go:276] 0 containers: []
	W0819 19:20:56.010430  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:20:56.010436  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:20:56.010491  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:20:56.066940  438716 cri.go:89] found id: ""
	I0819 19:20:56.066973  438716 logs.go:276] 0 containers: []
	W0819 19:20:56.066985  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:20:56.066994  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:20:56.067062  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:20:56.118852  438716 cri.go:89] found id: ""
	I0819 19:20:56.118881  438716 logs.go:276] 0 containers: []
	W0819 19:20:56.118894  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:20:56.118909  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:20:56.118925  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:20:56.158224  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:20:56.158263  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:20:56.211882  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:20:56.211925  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:20:56.228082  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:20:56.228124  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:20:56.307857  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:20:56.307880  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:20:56.307893  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0819 19:20:56.414797  438716 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0819 19:20:56.414885  438716 out.go:270] * 
	W0819 19:20:56.415020  438716 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 19:20:56.415039  438716 out.go:270] * 
	W0819 19:20:56.416031  438716 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 19:20:56.419869  438716 out.go:201] 
	W0819 19:20:56.421262  438716 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 19:20:56.421319  438716 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0819 19:20:56.421351  438716 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0819 19:20:56.422942  438716 out.go:201] 
	
	
	==> CRI-O <==
	Aug 19 19:30:01 old-k8s-version-104669 crio[655]: time="2024-08-19 19:30:01.930400160Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095801930366786,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8b532e7e-f103-484a-b295-8a334034d07d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:30:01 old-k8s-version-104669 crio[655]: time="2024-08-19 19:30:01.930972868Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=96d61b4e-b1bb-4903-8dac-2a2ab08f847d name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:30:01 old-k8s-version-104669 crio[655]: time="2024-08-19 19:30:01.931046523Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=96d61b4e-b1bb-4903-8dac-2a2ab08f847d name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:30:01 old-k8s-version-104669 crio[655]: time="2024-08-19 19:30:01.931120088Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=96d61b4e-b1bb-4903-8dac-2a2ab08f847d name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:30:01 old-k8s-version-104669 crio[655]: time="2024-08-19 19:30:01.965405746Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=01125033-6e8d-4b63-a701-d4f399499613 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:30:01 old-k8s-version-104669 crio[655]: time="2024-08-19 19:30:01.965507633Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=01125033-6e8d-4b63-a701-d4f399499613 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:30:01 old-k8s-version-104669 crio[655]: time="2024-08-19 19:30:01.966510008Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8a36e909-1d8b-45c0-8bd1-fca430bfa294 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:30:01 old-k8s-version-104669 crio[655]: time="2024-08-19 19:30:01.966931573Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095801966908066,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8a36e909-1d8b-45c0-8bd1-fca430bfa294 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:30:01 old-k8s-version-104669 crio[655]: time="2024-08-19 19:30:01.967682228Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5ec1a7db-6adb-4721-a719-531c5adacfc7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:30:01 old-k8s-version-104669 crio[655]: time="2024-08-19 19:30:01.967733649Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5ec1a7db-6adb-4721-a719-531c5adacfc7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:30:01 old-k8s-version-104669 crio[655]: time="2024-08-19 19:30:01.967767087Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5ec1a7db-6adb-4721-a719-531c5adacfc7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:30:02 old-k8s-version-104669 crio[655]: time="2024-08-19 19:30:02.007688274Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1a170d2e-697d-40f6-9553-b99b64e5500c name=/runtime.v1.RuntimeService/Version
	Aug 19 19:30:02 old-k8s-version-104669 crio[655]: time="2024-08-19 19:30:02.007789460Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1a170d2e-697d-40f6-9553-b99b64e5500c name=/runtime.v1.RuntimeService/Version
	Aug 19 19:30:02 old-k8s-version-104669 crio[655]: time="2024-08-19 19:30:02.009221914Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=75e49251-f99d-43e8-ae3e-7e0aa424048b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:30:02 old-k8s-version-104669 crio[655]: time="2024-08-19 19:30:02.009850902Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095802009809382,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=75e49251-f99d-43e8-ae3e-7e0aa424048b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:30:02 old-k8s-version-104669 crio[655]: time="2024-08-19 19:30:02.010667918Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c37e44eb-7516-4f80-afb1-fcb09619d344 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:30:02 old-k8s-version-104669 crio[655]: time="2024-08-19 19:30:02.010738580Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c37e44eb-7516-4f80-afb1-fcb09619d344 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:30:02 old-k8s-version-104669 crio[655]: time="2024-08-19 19:30:02.010774777Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c37e44eb-7516-4f80-afb1-fcb09619d344 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:30:02 old-k8s-version-104669 crio[655]: time="2024-08-19 19:30:02.043847416Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7c4a2522-ed83-412f-9529-69433f8c1e0b name=/runtime.v1.RuntimeService/Version
	Aug 19 19:30:02 old-k8s-version-104669 crio[655]: time="2024-08-19 19:30:02.043938416Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7c4a2522-ed83-412f-9529-69433f8c1e0b name=/runtime.v1.RuntimeService/Version
	Aug 19 19:30:02 old-k8s-version-104669 crio[655]: time="2024-08-19 19:30:02.045359014Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=58f70c0b-190e-431e-a9c2-3f66a6e16c6a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:30:02 old-k8s-version-104669 crio[655]: time="2024-08-19 19:30:02.045842065Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095802045813783,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=58f70c0b-190e-431e-a9c2-3f66a6e16c6a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:30:02 old-k8s-version-104669 crio[655]: time="2024-08-19 19:30:02.046489328Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=43a89f81-13eb-4628-809e-e08362638928 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:30:02 old-k8s-version-104669 crio[655]: time="2024-08-19 19:30:02.046566293Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=43a89f81-13eb-4628-809e-e08362638928 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:30:02 old-k8s-version-104669 crio[655]: time="2024-08-19 19:30:02.046604025Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=43a89f81-13eb-4628-809e-e08362638928 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug19 19:12] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050789] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041369] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.978049] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.658614] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.655874] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.305057] systemd-fstab-generator[577]: Ignoring "noauto" option for root device
	[  +0.056862] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065398] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.183560] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.167037] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.268786] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +6.546314] systemd-fstab-generator[904]: Ignoring "noauto" option for root device
	[  +0.062802] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.070433] systemd-fstab-generator[1029]: Ignoring "noauto" option for root device
	[Aug19 19:13] kauditd_printk_skb: 46 callbacks suppressed
	[Aug19 19:17] systemd-fstab-generator[5079]: Ignoring "noauto" option for root device
	[Aug19 19:18] systemd-fstab-generator[5368]: Ignoring "noauto" option for root device
	[  +0.069874] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:30:02 up 17 min,  0 users,  load average: 0.02, 0.02, 0.04
	Linux old-k8s-version-104669 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 19 19:30:02 old-k8s-version-104669 kubelet[6563]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Aug 19 19:30:02 old-k8s-version-104669 kubelet[6563]: goroutine 162 [select]:
	Aug 19 19:30:02 old-k8s-version-104669 kubelet[6563]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000a95ef0, 0x4f0ac20, 0xc000119c20, 0x1, 0xc0001000c0)
	Aug 19 19:30:02 old-k8s-version-104669 kubelet[6563]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Aug 19 19:30:02 old-k8s-version-104669 kubelet[6563]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0005dcd20, 0xc0001000c0)
	Aug 19 19:30:02 old-k8s-version-104669 kubelet[6563]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Aug 19 19:30:02 old-k8s-version-104669 kubelet[6563]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Aug 19 19:30:02 old-k8s-version-104669 kubelet[6563]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Aug 19 19:30:02 old-k8s-version-104669 kubelet[6563]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000868670, 0xc0006b5d20)
	Aug 19 19:30:02 old-k8s-version-104669 kubelet[6563]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Aug 19 19:30:02 old-k8s-version-104669 kubelet[6563]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Aug 19 19:30:02 old-k8s-version-104669 kubelet[6563]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Aug 19 19:30:02 old-k8s-version-104669 kubelet[6563]: goroutine 167 [syscall]:
	Aug 19 19:30:02 old-k8s-version-104669 kubelet[6563]: syscall.Syscall6(0xe8, 0xc, 0xc000c8fb6c, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0, 0x0, 0x0)
	Aug 19 19:30:02 old-k8s-version-104669 kubelet[6563]:         /usr/local/go/src/syscall/asm_linux_amd64.s:41 +0x5
	Aug 19 19:30:02 old-k8s-version-104669 kubelet[6563]: k8s.io/kubernetes/vendor/golang.org/x/sys/unix.EpollWait(0xc, 0xc000c8fb6c, 0x7, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0)
	Aug 19 19:30:02 old-k8s-version-104669 kubelet[6563]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/sys/unix/zsyscall_linux_amd64.go:76 +0x72
	Aug 19 19:30:02 old-k8s-version-104669 kubelet[6563]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*fdPoller).wait(0xc000a9a460, 0x0, 0x0, 0x0)
	Aug 19 19:30:02 old-k8s-version-104669 kubelet[6563]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify_poller.go:86 +0x91
	Aug 19 19:30:02 old-k8s-version-104669 kubelet[6563]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc0000500f0)
	Aug 19 19:30:02 old-k8s-version-104669 kubelet[6563]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:192 +0x206
	Aug 19 19:30:02 old-k8s-version-104669 kubelet[6563]: created by k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.NewWatcher
	Aug 19 19:30:02 old-k8s-version-104669 kubelet[6563]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:59 +0x1a8
	Aug 19 19:30:02 old-k8s-version-104669 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 19 19:30:02 old-k8s-version-104669 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-104669 -n old-k8s-version-104669
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-104669 -n old-k8s-version-104669: exit status 2 (241.290582ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-104669" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.60s)
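Note on the probe above: minikube status renders a single field of the profile status through a Go template, so --format={{.APIServer}} prints just the apiserver state, and the exit status 2 shown above is how a stopped component is signalled; the "Stopped" result is consistent with the kubelet crash recorded in the journal excerpt. A minimal sketch for reproducing the check by hand, assuming the old-k8s-version-104669 profile has not yet been deleted and its VM is still reachable:

  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-104669
  out/minikube-linux-amd64 -p old-k8s-version-104669 ssh "sudo systemctl status kubelet --no-pager"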

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (442.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-982795 -n default-k8s-diff-port-982795
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-19 19:33:50.660260584 +0000 UTC m=+6577.860187402
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
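The wait at start_stop_delete_test.go:287 is a label-selector poll: it blocks up to 9m0s for a pod labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace to become healthy, and here the context deadline fired first. A rough manual equivalent, assuming the same kubeconfig context, would be:

  kubectl --context default-k8s-diff-port-982795 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
  kubectl --context default-k8s-diff-port-982795 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m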
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-982795 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-982795 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.959µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-982795 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
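The assertion at start_stop_delete_test.go:297 expects the dashboard-metrics-scraper deployment to carry the overridden image registry.k8s.io/echoserver:1.4 (set with --images=MetricsScraper=... per the Audit table below); because the describe call above also hit the context deadline, the assertion ran against empty deployment info. A hedged one-liner to pull just the container image once the apiserver is reachable again:

  kubectl --context default-k8s-diff-port-982795 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'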
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-982795 -n default-k8s-diff-port-982795
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-982795 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-982795 logs -n 25: (1.192435s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p embed-certs-024748            | embed-certs-024748           | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC | 19 Aug 24 19:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-024748                                  | embed-certs-024748           | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-104669        | old-k8s-version-104669       | jenkins | v1.33.1 | 19 Aug 24 19:06 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-278232                  | no-preload-278232            | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-278232                                   | no-preload-278232            | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC | 19 Aug 24 19:18 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-982795       | default-k8s-diff-port-982795 | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-024748                 | embed-certs-024748           | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-982795 | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC | 19 Aug 24 19:17 UTC |
	|         | default-k8s-diff-port-982795                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-024748                                  | embed-certs-024748           | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC | 19 Aug 24 19:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-104669                              | old-k8s-version-104669       | jenkins | v1.33.1 | 19 Aug 24 19:08 UTC | 19 Aug 24 19:08 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-104669             | old-k8s-version-104669       | jenkins | v1.33.1 | 19 Aug 24 19:08 UTC | 19 Aug 24 19:08 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-104669                              | old-k8s-version-104669       | jenkins | v1.33.1 | 19 Aug 24 19:08 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-104669                              | old-k8s-version-104669       | jenkins | v1.33.1 | 19 Aug 24 19:32 UTC | 19 Aug 24 19:32 UTC |
	| start   | -p newest-cni-125279 --memory=2200 --alsologtostderr   | newest-cni-125279            | jenkins | v1.33.1 | 19 Aug 24 19:32 UTC | 19 Aug 24 19:32 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-278232                                   | no-preload-278232            | jenkins | v1.33.1 | 19 Aug 24 19:32 UTC | 19 Aug 24 19:32 UTC |
	| addons  | enable metrics-server -p newest-cni-125279             | newest-cni-125279            | jenkins | v1.33.1 | 19 Aug 24 19:32 UTC | 19 Aug 24 19:32 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-125279                                   | newest-cni-125279            | jenkins | v1.33.1 | 19 Aug 24 19:32 UTC | 19 Aug 24 19:33 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-024748                                  | embed-certs-024748           | jenkins | v1.33.1 | 19 Aug 24 19:33 UTC | 19 Aug 24 19:33 UTC |
	| addons  | enable dashboard -p newest-cni-125279                  | newest-cni-125279            | jenkins | v1.33.1 | 19 Aug 24 19:33 UTC | 19 Aug 24 19:33 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-125279 --memory=2200 --alsologtostderr   | newest-cni-125279            | jenkins | v1.33.1 | 19 Aug 24 19:33 UTC | 19 Aug 24 19:33 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| image   | newest-cni-125279 image list                           | newest-cni-125279            | jenkins | v1.33.1 | 19 Aug 24 19:33 UTC | 19 Aug 24 19:33 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-125279                                   | newest-cni-125279            | jenkins | v1.33.1 | 19 Aug 24 19:33 UTC | 19 Aug 24 19:33 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-125279                                   | newest-cni-125279            | jenkins | v1.33.1 | 19 Aug 24 19:33 UTC | 19 Aug 24 19:33 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-125279                                   | newest-cni-125279            | jenkins | v1.33.1 | 19 Aug 24 19:33 UTC | 19 Aug 24 19:33 UTC |
	| delete  | -p newest-cni-125279                                   | newest-cni-125279            | jenkins | v1.33.1 | 19 Aug 24 19:33 UTC | 19 Aug 24 19:33 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 19:33:06
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 19:33:06.331459  446353 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:33:06.331577  446353 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:33:06.331582  446353 out.go:358] Setting ErrFile to fd 2...
	I0819 19:33:06.331587  446353 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:33:06.331798  446353 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-372744/.minikube/bin
	I0819 19:33:06.332377  446353 out.go:352] Setting JSON to false
	I0819 19:33:06.333335  446353 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":11729,"bootTime":1724084257,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 19:33:06.333401  446353 start.go:139] virtualization: kvm guest
	I0819 19:33:06.335516  446353 out.go:177] * [newest-cni-125279] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 19:33:06.337050  446353 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 19:33:06.337095  446353 notify.go:220] Checking for updates...
	I0819 19:33:06.339213  446353 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 19:33:06.340538  446353 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 19:33:06.341694  446353 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 19:33:06.342889  446353 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 19:33:06.343960  446353 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 19:33:06.345534  446353 config.go:182] Loaded profile config "newest-cni-125279": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:33:06.345973  446353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:33:06.346023  446353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:33:06.361454  446353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38855
	I0819 19:33:06.361864  446353 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:33:06.362552  446353 main.go:141] libmachine: Using API Version  1
	I0819 19:33:06.362579  446353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:33:06.362971  446353 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:33:06.363171  446353 main.go:141] libmachine: (newest-cni-125279) Calling .DriverName
	I0819 19:33:06.363408  446353 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 19:33:06.363793  446353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:33:06.363831  446353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:33:06.379004  446353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44871
	I0819 19:33:06.379537  446353 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:33:06.380039  446353 main.go:141] libmachine: Using API Version  1
	I0819 19:33:06.380066  446353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:33:06.380389  446353 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:33:06.380599  446353 main.go:141] libmachine: (newest-cni-125279) Calling .DriverName
	I0819 19:33:06.416598  446353 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 19:33:06.417868  446353 start.go:297] selected driver: kvm2
	I0819 19:33:06.417904  446353 start.go:901] validating driver "kvm2" against &{Name:newest-cni-125279 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-125279 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.232 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:33:06.418076  446353 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 19:33:06.418798  446353 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 19:33:06.418896  446353 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19468-372744/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 19:33:06.435738  446353 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 19:33:06.436186  446353 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0819 19:33:06.436223  446353 cni.go:84] Creating CNI manager for ""
	I0819 19:33:06.436234  446353 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:33:06.436273  446353 start.go:340] cluster config:
	{Name:newest-cni-125279 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-125279 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.232 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:33:06.436406  446353 iso.go:125] acquiring lock: {Name:mk4c0ac1c3202b1a296739df622960e7a0bd8566 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 19:33:06.438417  446353 out.go:177] * Starting "newest-cni-125279" primary control-plane node in "newest-cni-125279" cluster
	I0819 19:33:06.439889  446353 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 19:33:06.439926  446353 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 19:33:06.439941  446353 cache.go:56] Caching tarball of preloaded images
	I0819 19:33:06.440023  446353 preload.go:172] Found /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 19:33:06.440032  446353 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 19:33:06.440134  446353 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/newest-cni-125279/config.json ...
	I0819 19:33:06.440314  446353 start.go:360] acquireMachinesLock for newest-cni-125279: {Name:mk24ba67a747357e9ce40f1e460d2bb0bc59cc75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 19:33:06.440354  446353 start.go:364] duration metric: took 22.354µs to acquireMachinesLock for "newest-cni-125279"
	I0819 19:33:06.440367  446353 start.go:96] Skipping create...Using existing machine configuration
	I0819 19:33:06.440375  446353 fix.go:54] fixHost starting: 
	I0819 19:33:06.440634  446353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:33:06.440665  446353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:33:06.455597  446353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34851
	I0819 19:33:06.456050  446353 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:33:06.456510  446353 main.go:141] libmachine: Using API Version  1
	I0819 19:33:06.456530  446353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:33:06.456850  446353 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:33:06.457094  446353 main.go:141] libmachine: (newest-cni-125279) Calling .DriverName
	I0819 19:33:06.457271  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetState
	I0819 19:33:06.458746  446353 fix.go:112] recreateIfNeeded on newest-cni-125279: state=Stopped err=<nil>
	I0819 19:33:06.458772  446353 main.go:141] libmachine: (newest-cni-125279) Calling .DriverName
	W0819 19:33:06.458918  446353 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 19:33:06.460828  446353 out.go:177] * Restarting existing kvm2 VM for "newest-cni-125279" ...
	I0819 19:33:06.462288  446353 main.go:141] libmachine: (newest-cni-125279) Calling .Start
	I0819 19:33:06.462470  446353 main.go:141] libmachine: (newest-cni-125279) Ensuring networks are active...
	I0819 19:33:06.463399  446353 main.go:141] libmachine: (newest-cni-125279) Ensuring network default is active
	I0819 19:33:06.463796  446353 main.go:141] libmachine: (newest-cni-125279) Ensuring network mk-newest-cni-125279 is active
	I0819 19:33:06.464257  446353 main.go:141] libmachine: (newest-cni-125279) Getting domain xml...
	I0819 19:33:06.465287  446353 main.go:141] libmachine: (newest-cni-125279) Creating domain...
	I0819 19:33:07.691405  446353 main.go:141] libmachine: (newest-cni-125279) Waiting to get IP...
	I0819 19:33:07.692488  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:07.692920  446353 main.go:141] libmachine: (newest-cni-125279) DBG | unable to find current IP address of domain newest-cni-125279 in network mk-newest-cni-125279
	I0819 19:33:07.692997  446353 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:33:07.692902  446388 retry.go:31] will retry after 192.266962ms: waiting for machine to come up
	I0819 19:33:07.887350  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:07.887901  446353 main.go:141] libmachine: (newest-cni-125279) DBG | unable to find current IP address of domain newest-cni-125279 in network mk-newest-cni-125279
	I0819 19:33:07.887936  446353 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:33:07.887850  446388 retry.go:31] will retry after 271.430056ms: waiting for machine to come up
	I0819 19:33:08.161356  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:08.161875  446353 main.go:141] libmachine: (newest-cni-125279) DBG | unable to find current IP address of domain newest-cni-125279 in network mk-newest-cni-125279
	I0819 19:33:08.161908  446353 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:33:08.161818  446388 retry.go:31] will retry after 380.075179ms: waiting for machine to come up
	I0819 19:33:08.543129  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:08.543644  446353 main.go:141] libmachine: (newest-cni-125279) DBG | unable to find current IP address of domain newest-cni-125279 in network mk-newest-cni-125279
	I0819 19:33:08.543681  446353 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:33:08.543618  446388 retry.go:31] will retry after 406.675611ms: waiting for machine to come up
	I0819 19:33:08.952305  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:08.952727  446353 main.go:141] libmachine: (newest-cni-125279) DBG | unable to find current IP address of domain newest-cni-125279 in network mk-newest-cni-125279
	I0819 19:33:08.952759  446353 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:33:08.952680  446388 retry.go:31] will retry after 645.428847ms: waiting for machine to come up
	I0819 19:33:09.599612  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:09.599986  446353 main.go:141] libmachine: (newest-cni-125279) DBG | unable to find current IP address of domain newest-cni-125279 in network mk-newest-cni-125279
	I0819 19:33:09.600010  446353 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:33:09.599944  446388 retry.go:31] will retry after 583.579765ms: waiting for machine to come up
	I0819 19:33:10.184710  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:10.185132  446353 main.go:141] libmachine: (newest-cni-125279) DBG | unable to find current IP address of domain newest-cni-125279 in network mk-newest-cni-125279
	I0819 19:33:10.185180  446353 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:33:10.185095  446388 retry.go:31] will retry after 952.376866ms: waiting for machine to come up
	I0819 19:33:11.139093  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:11.139496  446353 main.go:141] libmachine: (newest-cni-125279) DBG | unable to find current IP address of domain newest-cni-125279 in network mk-newest-cni-125279
	I0819 19:33:11.139523  446353 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:33:11.139440  446388 retry.go:31] will retry after 1.391753309s: waiting for machine to come up
	I0819 19:33:12.532621  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:12.533165  446353 main.go:141] libmachine: (newest-cni-125279) DBG | unable to find current IP address of domain newest-cni-125279 in network mk-newest-cni-125279
	I0819 19:33:12.533209  446353 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:33:12.533011  446388 retry.go:31] will retry after 1.403352011s: waiting for machine to come up
	I0819 19:33:13.938803  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:13.939286  446353 main.go:141] libmachine: (newest-cni-125279) DBG | unable to find current IP address of domain newest-cni-125279 in network mk-newest-cni-125279
	I0819 19:33:13.939306  446353 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:33:13.939256  446388 retry.go:31] will retry after 1.688857429s: waiting for machine to come up
	I0819 19:33:15.630187  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:15.630730  446353 main.go:141] libmachine: (newest-cni-125279) DBG | unable to find current IP address of domain newest-cni-125279 in network mk-newest-cni-125279
	I0819 19:33:15.630765  446353 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:33:15.630672  446388 retry.go:31] will retry after 2.135772922s: waiting for machine to come up
	I0819 19:33:17.768431  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:17.768942  446353 main.go:141] libmachine: (newest-cni-125279) DBG | unable to find current IP address of domain newest-cni-125279 in network mk-newest-cni-125279
	I0819 19:33:17.768975  446353 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:33:17.768905  446388 retry.go:31] will retry after 3.000805901s: waiting for machine to come up
	I0819 19:33:20.773180  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:20.773674  446353 main.go:141] libmachine: (newest-cni-125279) DBG | unable to find current IP address of domain newest-cni-125279 in network mk-newest-cni-125279
	I0819 19:33:20.773707  446353 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:33:20.773642  446388 retry.go:31] will retry after 3.116101033s: waiting for machine to come up
	I0819 19:33:23.892833  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:23.893378  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has current primary IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:23.893401  446353 main.go:141] libmachine: (newest-cni-125279) Found IP for machine: 192.168.50.232
	I0819 19:33:23.893414  446353 main.go:141] libmachine: (newest-cni-125279) Reserving static IP address...
	I0819 19:33:23.893788  446353 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "newest-cni-125279", mac: "52:54:00:65:45:fc", ip: "192.168.50.232"} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:33:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:33:23.893816  446353 main.go:141] libmachine: (newest-cni-125279) DBG | skip adding static IP to network mk-newest-cni-125279 - found existing host DHCP lease matching {name: "newest-cni-125279", mac: "52:54:00:65:45:fc", ip: "192.168.50.232"}
	I0819 19:33:23.893837  446353 main.go:141] libmachine: (newest-cni-125279) Reserved static IP address: 192.168.50.232
	I0819 19:33:23.893851  446353 main.go:141] libmachine: (newest-cni-125279) Waiting for SSH to be available...
	I0819 19:33:23.893865  446353 main.go:141] libmachine: (newest-cni-125279) DBG | Getting to WaitForSSH function...
	I0819 19:33:23.896452  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:23.896830  446353 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:33:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:33:23.896863  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:23.896927  446353 main.go:141] libmachine: (newest-cni-125279) DBG | Using SSH client type: external
	I0819 19:33:23.896979  446353 main.go:141] libmachine: (newest-cni-125279) DBG | Using SSH private key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/newest-cni-125279/id_rsa (-rw-------)
	I0819 19:33:23.897020  446353 main.go:141] libmachine: (newest-cni-125279) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.232 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19468-372744/.minikube/machines/newest-cni-125279/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 19:33:23.897039  446353 main.go:141] libmachine: (newest-cni-125279) DBG | About to run SSH command:
	I0819 19:33:23.897060  446353 main.go:141] libmachine: (newest-cni-125279) DBG | exit 0
	I0819 19:33:24.019910  446353 main.go:141] libmachine: (newest-cni-125279) DBG | SSH cmd err, output: <nil>: 
	I0819 19:33:24.020301  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetConfigRaw
	I0819 19:33:24.021002  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetIP
	I0819 19:33:24.023819  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:24.024423  446353 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:33:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:33:24.024451  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:24.024761  446353 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/newest-cni-125279/config.json ...
	I0819 19:33:24.025033  446353 machine.go:93] provisionDockerMachine start ...
	I0819 19:33:24.025060  446353 main.go:141] libmachine: (newest-cni-125279) Calling .DriverName
	I0819 19:33:24.025307  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHHostname
	I0819 19:33:24.027583  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:24.028038  446353 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:33:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:33:24.028070  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:24.028203  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHPort
	I0819 19:33:24.028366  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:33:24.028477  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:33:24.028648  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHUsername
	I0819 19:33:24.028811  446353 main.go:141] libmachine: Using SSH client type: native
	I0819 19:33:24.029014  446353 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.232 22 <nil> <nil>}
	I0819 19:33:24.029027  446353 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 19:33:24.136100  446353 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 19:33:24.136134  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetMachineName
	I0819 19:33:24.136421  446353 buildroot.go:166] provisioning hostname "newest-cni-125279"
	I0819 19:33:24.136448  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetMachineName
	I0819 19:33:24.136664  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHHostname
	I0819 19:33:24.139316  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:24.139719  446353 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:33:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:33:24.139744  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:24.139858  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHPort
	I0819 19:33:24.140053  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:33:24.140228  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:33:24.140369  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHUsername
	I0819 19:33:24.140560  446353 main.go:141] libmachine: Using SSH client type: native
	I0819 19:33:24.140817  446353 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.232 22 <nil> <nil>}
	I0819 19:33:24.140835  446353 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-125279 && echo "newest-cni-125279" | sudo tee /etc/hostname
	I0819 19:33:24.262453  446353 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-125279
	
	I0819 19:33:24.262497  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHHostname
	I0819 19:33:24.265342  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:24.265690  446353 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:33:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:33:24.265730  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:24.265954  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHPort
	I0819 19:33:24.266180  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:33:24.266360  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:33:24.266547  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHUsername
	I0819 19:33:24.266706  446353 main.go:141] libmachine: Using SSH client type: native
	I0819 19:33:24.266903  446353 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.232 22 <nil> <nil>}
	I0819 19:33:24.266921  446353 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-125279' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-125279/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-125279' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 19:33:24.381756  446353 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:33:24.381809  446353 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19468-372744/.minikube CaCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19468-372744/.minikube}
	I0819 19:33:24.381864  446353 buildroot.go:174] setting up certificates
	I0819 19:33:24.381881  446353 provision.go:84] configureAuth start
	I0819 19:33:24.381912  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetMachineName
	I0819 19:33:24.382236  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetIP
	I0819 19:33:24.384970  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:24.385310  446353 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:33:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:33:24.385339  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:24.385506  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHHostname
	I0819 19:33:24.387534  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:24.388012  446353 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:33:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:33:24.388039  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:24.388162  446353 provision.go:143] copyHostCerts
	I0819 19:33:24.388226  446353 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem, removing ...
	I0819 19:33:24.388250  446353 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 19:33:24.388330  446353 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem (1123 bytes)
	I0819 19:33:24.388439  446353 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem, removing ...
	I0819 19:33:24.388452  446353 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 19:33:24.388490  446353 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem (1675 bytes)
	I0819 19:33:24.388561  446353 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem, removing ...
	I0819 19:33:24.388571  446353 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 19:33:24.388614  446353 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem (1082 bytes)
	I0819 19:33:24.388680  446353 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem org=jenkins.newest-cni-125279 san=[127.0.0.1 192.168.50.232 localhost minikube newest-cni-125279]
	I0819 19:33:24.670158  446353 provision.go:177] copyRemoteCerts
	I0819 19:33:24.670229  446353 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 19:33:24.670259  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHHostname
	I0819 19:33:24.672906  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:24.673264  446353 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:33:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:33:24.673320  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:24.673452  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHPort
	I0819 19:33:24.673672  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:33:24.673865  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHUsername
	I0819 19:33:24.674023  446353 sshutil.go:53] new ssh client: &{IP:192.168.50.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/newest-cni-125279/id_rsa Username:docker}
	I0819 19:33:24.758308  446353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 19:33:24.782798  446353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0819 19:33:24.806821  446353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 19:33:24.830768  446353 provision.go:87] duration metric: took 448.865885ms to configureAuth
	I0819 19:33:24.830807  446353 buildroot.go:189] setting minikube options for container-runtime
	I0819 19:33:24.831055  446353 config.go:182] Loaded profile config "newest-cni-125279": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:33:24.831143  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHHostname
	I0819 19:33:24.833873  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:24.834300  446353 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:33:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:33:24.834336  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:24.834459  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHPort
	I0819 19:33:24.834670  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:33:24.834793  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:33:24.834897  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHUsername
	I0819 19:33:24.835103  446353 main.go:141] libmachine: Using SSH client type: native
	I0819 19:33:24.835273  446353 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.232 22 <nil> <nil>}
	I0819 19:33:24.835295  446353 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 19:33:25.107771  446353 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 19:33:25.107803  446353 machine.go:96] duration metric: took 1.082752237s to provisionDockerMachine
	I0819 19:33:25.107818  446353 start.go:293] postStartSetup for "newest-cni-125279" (driver="kvm2")
	I0819 19:33:25.107832  446353 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 19:33:25.107850  446353 main.go:141] libmachine: (newest-cni-125279) Calling .DriverName
	I0819 19:33:25.108205  446353 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 19:33:25.108243  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHHostname
	I0819 19:33:25.111439  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:25.111800  446353 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:33:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:33:25.111864  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:25.112087  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHPort
	I0819 19:33:25.112268  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:33:25.112401  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHUsername
	I0819 19:33:25.112562  446353 sshutil.go:53] new ssh client: &{IP:192.168.50.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/newest-cni-125279/id_rsa Username:docker}
	I0819 19:33:25.198159  446353 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 19:33:25.202736  446353 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 19:33:25.202764  446353 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/addons for local assets ...
	I0819 19:33:25.202832  446353 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/files for local assets ...
	I0819 19:33:25.202920  446353 filesync.go:149] local asset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> 3800092.pem in /etc/ssl/certs
	I0819 19:33:25.203058  446353 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 19:33:25.212449  446353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:33:25.236139  446353 start.go:296] duration metric: took 128.306819ms for postStartSetup
	I0819 19:33:25.236180  446353 fix.go:56] duration metric: took 18.79580491s for fixHost
	I0819 19:33:25.236202  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHHostname
	I0819 19:33:25.238868  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:25.239294  446353 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:33:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:33:25.239323  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:25.239487  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHPort
	I0819 19:33:25.239709  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:33:25.239907  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:33:25.240058  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHUsername
	I0819 19:33:25.240220  446353 main.go:141] libmachine: Using SSH client type: native
	I0819 19:33:25.240375  446353 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.232 22 <nil> <nil>}
	I0819 19:33:25.240385  446353 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 19:33:25.352519  446353 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724096005.326018620
	
	I0819 19:33:25.352544  446353 fix.go:216] guest clock: 1724096005.326018620
	I0819 19:33:25.352552  446353 fix.go:229] Guest: 2024-08-19 19:33:25.32601862 +0000 UTC Remote: 2024-08-19 19:33:25.236185125 +0000 UTC m=+18.940579234 (delta=89.833495ms)
	I0819 19:33:25.352605  446353 fix.go:200] guest clock delta is within tolerance: 89.833495ms
	I0819 19:33:25.352615  446353 start.go:83] releasing machines lock for "newest-cni-125279", held for 18.912251844s
	I0819 19:33:25.352643  446353 main.go:141] libmachine: (newest-cni-125279) Calling .DriverName
	I0819 19:33:25.352892  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetIP
	I0819 19:33:25.355240  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:25.355595  446353 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:33:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:33:25.355629  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:25.355748  446353 main.go:141] libmachine: (newest-cni-125279) Calling .DriverName
	I0819 19:33:25.356252  446353 main.go:141] libmachine: (newest-cni-125279) Calling .DriverName
	I0819 19:33:25.356496  446353 main.go:141] libmachine: (newest-cni-125279) Calling .DriverName
	I0819 19:33:25.356581  446353 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 19:33:25.356655  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHHostname
	I0819 19:33:25.356745  446353 ssh_runner.go:195] Run: cat /version.json
	I0819 19:33:25.356771  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHHostname
	I0819 19:33:25.359036  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:25.359203  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:25.359410  446353 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:33:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:33:25.359432  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:25.359572  446353 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:33:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:33:25.359604  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHPort
	I0819 19:33:25.359613  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:25.359858  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHPort
	I0819 19:33:25.359863  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:33:25.360016  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:33:25.360046  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHUsername
	I0819 19:33:25.360196  446353 sshutil.go:53] new ssh client: &{IP:192.168.50.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/newest-cni-125279/id_rsa Username:docker}
	I0819 19:33:25.360264  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHUsername
	I0819 19:33:25.360419  446353 sshutil.go:53] new ssh client: &{IP:192.168.50.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/newest-cni-125279/id_rsa Username:docker}
	I0819 19:33:25.445225  446353 ssh_runner.go:195] Run: systemctl --version
	I0819 19:33:25.470075  446353 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 19:33:25.615233  446353 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 19:33:25.621689  446353 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 19:33:25.621768  446353 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 19:33:25.638825  446353 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 19:33:25.638853  446353 start.go:495] detecting cgroup driver to use...
	I0819 19:33:25.638931  446353 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 19:33:25.656224  446353 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 19:33:25.671271  446353 docker.go:217] disabling cri-docker service (if available) ...
	I0819 19:33:25.671324  446353 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 19:33:25.685421  446353 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 19:33:25.699024  446353 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 19:33:25.813922  446353 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 19:33:25.970934  446353 docker.go:233] disabling docker service ...
	I0819 19:33:25.971011  446353 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 19:33:25.986244  446353 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 19:33:26.000034  446353 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 19:33:26.135782  446353 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 19:33:26.253063  446353 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 19:33:26.267631  446353 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 19:33:26.287278  446353 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 19:33:26.287335  446353 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:33:26.297772  446353 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 19:33:26.297856  446353 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:33:26.308216  446353 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:33:26.318510  446353 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:33:26.329126  446353 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 19:33:26.339867  446353 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:33:26.350868  446353 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:33:26.368871  446353 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
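The sequence of sed edits above configures the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf: the pause image, the cgroup manager, the conmon cgroup, and an unprivileged-port sysctl. Reconstructed from those commands (a sketch, not captured from the VM), the resulting fragment should look roughly like:
	    pause_image = "registry.k8s.io/pause:3.10"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]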
	I0819 19:33:26.379508  446353 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 19:33:26.388864  446353 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 19:33:26.388926  446353 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 19:33:26.402351  446353 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 19:33:26.412070  446353 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:33:26.528254  446353 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 19:33:26.660519  446353 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 19:33:26.660610  446353 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 19:33:26.666514  446353 start.go:563] Will wait 60s for crictl version
	I0819 19:33:26.666583  446353 ssh_runner.go:195] Run: which crictl
	I0819 19:33:26.670348  446353 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 19:33:26.707344  446353 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 19:33:26.707432  446353 ssh_runner.go:195] Run: crio --version
	I0819 19:33:26.734405  446353 ssh_runner.go:195] Run: crio --version
	I0819 19:33:26.764838  446353 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 19:33:26.766245  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetIP
	I0819 19:33:26.769020  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:26.769294  446353 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:33:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:33:26.769318  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:26.769520  446353 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0819 19:33:26.773886  446353 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:33:26.788446  446353 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0819 19:33:26.789644  446353 kubeadm.go:883] updating cluster {Name:newest-cni-125279 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-125279 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.232 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 19:33:26.789767  446353 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 19:33:26.789827  446353 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:33:26.827084  446353 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 19:33:26.827165  446353 ssh_runner.go:195] Run: which lz4
	I0819 19:33:26.831107  446353 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 19:33:26.835631  446353 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 19:33:26.835664  446353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 19:33:28.207500  446353 crio.go:462] duration metric: took 1.376423103s to copy over tarball
	I0819 19:33:28.207572  446353 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 19:33:30.320804  446353 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.113204218s)
	I0819 19:33:30.320831  446353 crio.go:469] duration metric: took 2.113304169s to extract the tarball
	I0819 19:33:30.320839  446353 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 19:33:30.358188  446353 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:33:30.403229  446353 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 19:33:30.403254  446353 cache_images.go:84] Images are preloaded, skipping loading
	I0819 19:33:30.403263  446353 kubeadm.go:934] updating node { 192.168.50.232 8443 v1.31.0 crio true true} ...
	I0819 19:33:30.403378  446353 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-125279 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.232
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:newest-cni-125279 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
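This unit fragment is what minikube writes into the kubelet systemd drop-in, which appears to correspond to the 354-byte /etc/systemd/system/kubelet.service.d/10-kubeadm.conf scp a few lines further down. An illustrative way to view the merged unit on the node, hypothetical and not executed in this run:
	    minikube -p newest-cni-125279 ssh "systemctl cat kubelet"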
	I0819 19:33:30.403446  446353 ssh_runner.go:195] Run: crio config
	I0819 19:33:30.447299  446353 cni.go:84] Creating CNI manager for ""
	I0819 19:33:30.447328  446353 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:33:30.447347  446353 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0819 19:33:30.447378  446353 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.50.232 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-125279 NodeName:newest-cni-125279 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.232"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.232 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 19:33:30.447547  446353 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.232
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-125279"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.232
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.232"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
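The three documents above (kubeadm InitConfiguration and ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) make up the kubeadm.yaml that is written to /var/tmp/minikube/kubeadm.yaml.new below and later consumed by the kubeadm init phase commands. As an illustrative check, hypothetical and not part of this run, the effective kubelet settings could be confirmed on the node with:
	    minikube -p newest-cni-125279 ssh "sudo grep -E 'cgroupDriver|containerRuntimeEndpoint|hairpinMode' /var/lib/kubelet/config.yaml"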
	
	I0819 19:33:30.447640  446353 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 19:33:30.460092  446353 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 19:33:30.460188  446353 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 19:33:30.470265  446353 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0819 19:33:30.486881  446353 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 19:33:30.503462  446353 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2285 bytes)
	I0819 19:33:30.520824  446353 ssh_runner.go:195] Run: grep 192.168.50.232	control-plane.minikube.internal$ /etc/hosts
	I0819 19:33:30.524568  446353 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.232	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:33:30.536972  446353 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:33:30.672612  446353 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:33:30.699195  446353 certs.go:68] Setting up /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/newest-cni-125279 for IP: 192.168.50.232
	I0819 19:33:30.699230  446353 certs.go:194] generating shared ca certs ...
	I0819 19:33:30.699253  446353 certs.go:226] acquiring lock for ca certs: {Name:mk639e03f593e0bccac045f6e9f5ba3b96cc81e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:33:30.699465  446353 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key
	I0819 19:33:30.699543  446353 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key
	I0819 19:33:30.699557  446353 certs.go:256] generating profile certs ...
	I0819 19:33:30.699645  446353 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/newest-cni-125279/client.key
	I0819 19:33:30.699737  446353 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/newest-cni-125279/apiserver.key.84e3bbbc
	I0819 19:33:30.699778  446353 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/newest-cni-125279/proxy-client.key
	I0819 19:33:30.699912  446353 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem (1338 bytes)
	W0819 19:33:30.699960  446353 certs.go:480] ignoring /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009_empty.pem, impossibly tiny 0 bytes
	I0819 19:33:30.699985  446353 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 19:33:30.700017  446353 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem (1082 bytes)
	I0819 19:33:30.700049  446353 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem (1123 bytes)
	I0819 19:33:30.700085  446353 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem (1675 bytes)
	I0819 19:33:30.700143  446353 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:33:30.701008  446353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 19:33:30.746500  446353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 19:33:30.781448  446353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 19:33:30.810834  446353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 19:33:30.839369  446353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/newest-cni-125279/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 19:33:30.876386  446353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/newest-cni-125279/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 19:33:30.900814  446353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/newest-cni-125279/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 19:33:30.924292  446353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/newest-cni-125279/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 19:33:30.947769  446353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 19:33:30.973393  446353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem --> /usr/share/ca-certificates/380009.pem (1338 bytes)
	I0819 19:33:30.999138  446353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /usr/share/ca-certificates/3800092.pem (1708 bytes)
	I0819 19:33:31.022994  446353 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 19:33:31.040958  446353 ssh_runner.go:195] Run: openssl version
	I0819 19:33:31.046702  446353 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 19:33:31.058075  446353 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:33:31.062725  446353 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:33:31.062780  446353 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:33:31.068627  446353 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 19:33:31.079886  446353 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/380009.pem && ln -fs /usr/share/ca-certificates/380009.pem /etc/ssl/certs/380009.pem"
	I0819 19:33:31.090985  446353 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/380009.pem
	I0819 19:33:31.095581  446353 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:56 /usr/share/ca-certificates/380009.pem
	I0819 19:33:31.095643  446353 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/380009.pem
	I0819 19:33:31.101449  446353 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/380009.pem /etc/ssl/certs/51391683.0"
	I0819 19:33:31.113328  446353 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3800092.pem && ln -fs /usr/share/ca-certificates/3800092.pem /etc/ssl/certs/3800092.pem"
	I0819 19:33:31.124294  446353 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3800092.pem
	I0819 19:33:31.128633  446353 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:56 /usr/share/ca-certificates/3800092.pem
	I0819 19:33:31.128693  446353 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3800092.pem
	I0819 19:33:31.134548  446353 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3800092.pem /etc/ssl/certs/3ec20f2e.0"
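Each CA above is exposed to OpenSSL through a symlink named after its subject hash, which is why the link names differ per certificate (b5213941.0, 51391683.0, 3ec20f2e.0). The hash can be reproduced with the same openssl invocation the log runs, for example (values taken from this log, command itself illustrative here):
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    # prints b5213941, matching the /etc/ssl/certs/b5213941.0 symlink created above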
	I0819 19:33:31.145671  446353 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 19:33:31.150629  446353 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 19:33:31.156671  446353 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 19:33:31.162626  446353 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 19:33:31.168613  446353 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 19:33:31.174480  446353 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 19:33:31.180314  446353 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0819 19:33:31.185951  446353 kubeadm.go:392] StartCluster: {Name:newest-cni-125279 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-125279 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.232 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:33:31.186060  446353 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 19:33:31.186130  446353 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:33:31.223720  446353 cri.go:89] found id: ""
	I0819 19:33:31.223793  446353 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 19:33:31.234597  446353 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 19:33:31.234617  446353 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 19:33:31.234662  446353 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 19:33:31.244963  446353 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:33:31.245581  446353 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-125279" does not appear in /home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 19:33:31.245929  446353 kubeconfig.go:62] /home/jenkins/minikube-integration/19468-372744/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-125279" cluster setting kubeconfig missing "newest-cni-125279" context setting]
	I0819 19:33:31.246497  446353 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/kubeconfig: {Name:mk8e7b4e1bb7da665111d2acd83eb48882c66853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:33:31.247852  446353 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 19:33:31.257974  446353 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.232
	I0819 19:33:31.258003  446353 kubeadm.go:1160] stopping kube-system containers ...
	I0819 19:33:31.258014  446353 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 19:33:31.258057  446353 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:33:31.293965  446353 cri.go:89] found id: ""
	I0819 19:33:31.294066  446353 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 19:33:31.311051  446353 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:33:31.321155  446353 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:33:31.321196  446353 kubeadm.go:157] found existing configuration files:
	
	I0819 19:33:31.321242  446353 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 19:33:31.330822  446353 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:33:31.330909  446353 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:33:31.341235  446353 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 19:33:31.350721  446353 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:33:31.350805  446353 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:33:31.360416  446353 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 19:33:31.369675  446353 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:33:31.369724  446353 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:33:31.379458  446353 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 19:33:31.388969  446353 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:33:31.389024  446353 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 19:33:31.398936  446353 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 19:33:31.408675  446353 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:33:31.530842  446353 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:33:32.601046  446353 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.070160553s)
	I0819 19:33:32.601090  446353 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:33:32.797438  446353 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:33:32.861680  446353 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:33:32.976562  446353 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:33:32.976678  446353 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:33:33.476927  446353 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:33:33.977525  446353 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:33:34.477685  446353 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:33:34.976762  446353 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:33:35.017579  446353 api_server.go:72] duration metric: took 2.041034045s to wait for apiserver process to appear ...
	I0819 19:33:35.017610  446353 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:33:35.017662  446353 api_server.go:253] Checking apiserver healthz at https://192.168.50.232:8443/healthz ...
	I0819 19:33:35.018197  446353 api_server.go:269] stopped: https://192.168.50.232:8443/healthz: Get "https://192.168.50.232:8443/healthz": dial tcp 192.168.50.232:8443: connect: connection refused
	I0819 19:33:35.517763  446353 api_server.go:253] Checking apiserver healthz at https://192.168.50.232:8443/healthz ...
	I0819 19:33:38.158338  446353 api_server.go:279] https://192.168.50.232:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 19:33:38.158379  446353 api_server.go:103] status: https://192.168.50.232:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 19:33:38.158413  446353 api_server.go:253] Checking apiserver healthz at https://192.168.50.232:8443/healthz ...
	I0819 19:33:38.173522  446353 api_server.go:279] https://192.168.50.232:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 19:33:38.173554  446353 api_server.go:103] status: https://192.168.50.232:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 19:33:38.517823  446353 api_server.go:253] Checking apiserver healthz at https://192.168.50.232:8443/healthz ...
	I0819 19:33:38.522307  446353 api_server.go:279] https://192.168.50.232:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:33:38.522340  446353 api_server.go:103] status: https://192.168.50.232:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:33:39.017853  446353 api_server.go:253] Checking apiserver healthz at https://192.168.50.232:8443/healthz ...
	I0819 19:33:39.026006  446353 api_server.go:279] https://192.168.50.232:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:33:39.026040  446353 api_server.go:103] status: https://192.168.50.232:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:33:39.518738  446353 api_server.go:253] Checking apiserver healthz at https://192.168.50.232:8443/healthz ...
	I0819 19:33:39.524364  446353 api_server.go:279] https://192.168.50.232:8443/healthz returned 200:
	ok
	I0819 19:33:39.533699  446353 api_server.go:141] control plane version: v1.31.0
	I0819 19:33:39.533729  446353 api_server.go:131] duration metric: took 4.51611103s to wait for apiserver health ...
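The 403 responses earlier in this health-check loop are expected while anonymous access to /healthz has not yet been granted, and the 500s show the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks still pending; once they complete, the endpoint returns 200. A verbose check using the admin kubeconfig, hypothetical and not part of this run, would be:
	    kubectl --kubeconfig /etc/kubernetes/admin.conf get --raw '/healthz?verbose'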
	I0819 19:33:39.533739  446353 cni.go:84] Creating CNI manager for ""
	I0819 19:33:39.533746  446353 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:33:39.535496  446353 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 19:33:39.536873  446353 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 19:33:39.561747  446353 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 19:33:39.601978  446353 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:33:39.620170  446353 system_pods.go:59] 8 kube-system pods found
	I0819 19:33:39.620206  446353 system_pods.go:61] "coredns-6f6b679f8f-dcvb8" [3d0efe89-70c3-43c2-9504-c29339089833] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 19:33:39.620214  446353 system_pods.go:61] "etcd-newest-cni-125279" [b094a90f-a524-48fe-9401-3865e147c3a9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 19:33:39.620223  446353 system_pods.go:61] "kube-apiserver-newest-cni-125279" [88785c36-87aa-48da-b7e3-75ddcf969dfa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 19:33:39.620229  446353 system_pods.go:61] "kube-controller-manager-newest-cni-125279" [9444e4cc-f675-4201-9ed3-8a69fa70a3cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 19:33:39.620236  446353 system_pods.go:61] "kube-proxy-df7d9" [4e056f03-fc39-4070-8192-1ec53669bc43] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0819 19:33:39.620241  446353 system_pods.go:61] "kube-scheduler-newest-cni-125279" [0fa163ed-419b-4a4e-81d2-194ff57e91d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 19:33:39.620246  446353 system_pods.go:61] "metrics-server-6867b74b74-7p5bz" [23e8059a-a4ed-47e7-978e-c9d3e692a3bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:33:39.620251  446353 system_pods.go:61] "storage-provisioner" [d97409ec-ee3e-40a0-9054-e0fce384047a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0819 19:33:39.620257  446353 system_pods.go:74] duration metric: took 18.258312ms to wait for pod list to return data ...
	I0819 19:33:39.620264  446353 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:33:39.624783  446353 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:33:39.624815  446353 node_conditions.go:123] node cpu capacity is 2
	I0819 19:33:39.624831  446353 node_conditions.go:105] duration metric: took 4.560972ms to run NodePressure ...
	I0819 19:33:39.624854  446353 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:33:39.918736  446353 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 19:33:39.930492  446353 ops.go:34] apiserver oom_adj: -16
	I0819 19:33:39.930523  446353 kubeadm.go:597] duration metric: took 8.695898063s to restartPrimaryControlPlane
	I0819 19:33:39.930537  446353 kubeadm.go:394] duration metric: took 8.744594709s to StartCluster
	I0819 19:33:39.930558  446353 settings.go:142] acquiring lock: {Name:mk396fcf49a1d0e69583cf37ff3c819e37118163 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:33:39.930641  446353 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 19:33:39.931577  446353 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/kubeconfig: {Name:mk8e7b4e1bb7da665111d2acd83eb48882c66853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:33:39.931843  446353 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.232 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 19:33:39.931947  446353 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 19:33:39.932058  446353 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-125279"
	I0819 19:33:39.932093  446353 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-125279"
	I0819 19:33:39.932088  446353 addons.go:69] Setting default-storageclass=true in profile "newest-cni-125279"
	W0819 19:33:39.932104  446353 addons.go:243] addon storage-provisioner should already be in state true
	I0819 19:33:39.932110  446353 config.go:182] Loaded profile config "newest-cni-125279": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:33:39.932130  446353 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-125279"
	I0819 19:33:39.932144  446353 addons.go:69] Setting metrics-server=true in profile "newest-cni-125279"
	I0819 19:33:39.932160  446353 host.go:66] Checking if "newest-cni-125279" exists ...
	I0819 19:33:39.932167  446353 addons.go:234] Setting addon metrics-server=true in "newest-cni-125279"
	W0819 19:33:39.932181  446353 addons.go:243] addon metrics-server should already be in state true
	I0819 19:33:39.932214  446353 host.go:66] Checking if "newest-cni-125279" exists ...
	I0819 19:33:39.932125  446353 addons.go:69] Setting dashboard=true in profile "newest-cni-125279"
	I0819 19:33:39.932271  446353 addons.go:234] Setting addon dashboard=true in "newest-cni-125279"
	W0819 19:33:39.932289  446353 addons.go:243] addon dashboard should already be in state true
	I0819 19:33:39.932322  446353 host.go:66] Checking if "newest-cni-125279" exists ...
	I0819 19:33:39.932541  446353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:33:39.932569  446353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:33:39.932600  446353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:33:39.932645  446353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:33:39.932658  446353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:33:39.932680  446353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:33:39.932607  446353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:33:39.932727  446353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:33:39.934059  446353 out.go:177] * Verifying Kubernetes components...
	I0819 19:33:39.935490  446353 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:33:39.955840  446353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43493
	I0819 19:33:39.955900  446353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37801
	I0819 19:33:39.955840  446353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45087
	I0819 19:33:39.956019  446353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44181
	I0819 19:33:39.956549  446353 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:33:39.956600  446353 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:33:39.956660  446353 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:33:39.956802  446353 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:33:39.957173  446353 main.go:141] libmachine: Using API Version  1
	I0819 19:33:39.957194  446353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:33:39.957308  446353 main.go:141] libmachine: Using API Version  1
	I0819 19:33:39.957319  446353 main.go:141] libmachine: Using API Version  1
	I0819 19:33:39.957320  446353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:33:39.957330  446353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:33:39.957449  446353 main.go:141] libmachine: Using API Version  1
	I0819 19:33:39.957463  446353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:33:39.957613  446353 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:33:39.957793  446353 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:33:39.957868  446353 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:33:39.958020  446353 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:33:39.958239  446353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:33:39.958286  446353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:33:39.958318  446353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:33:39.958346  446353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:33:39.958380  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetState
	I0819 19:33:39.958972  446353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:33:39.959024  446353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:33:39.961973  446353 addons.go:234] Setting addon default-storageclass=true in "newest-cni-125279"
	W0819 19:33:39.961991  446353 addons.go:243] addon default-storageclass should already be in state true
	I0819 19:33:39.962014  446353 host.go:66] Checking if "newest-cni-125279" exists ...
	I0819 19:33:39.962255  446353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:33:39.962292  446353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:33:39.979658  446353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34269
	I0819 19:33:39.980162  446353 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:33:39.980395  446353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44703
	I0819 19:33:39.980864  446353 main.go:141] libmachine: Using API Version  1
	I0819 19:33:39.980887  446353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:33:39.980966  446353 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:33:39.981438  446353 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:33:39.981478  446353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34855
	I0819 19:33:39.981631  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetState
	I0819 19:33:39.981712  446353 main.go:141] libmachine: Using API Version  1
	I0819 19:33:39.981729  446353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:33:39.981891  446353 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:33:39.982356  446353 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:33:39.982408  446353 main.go:141] libmachine: Using API Version  1
	I0819 19:33:39.982422  446353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:33:39.982574  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetState
	I0819 19:33:39.983101  446353 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:33:39.983336  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetState
	I0819 19:33:39.984041  446353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34097
	I0819 19:33:39.984548  446353 main.go:141] libmachine: (newest-cni-125279) Calling .DriverName
	I0819 19:33:39.984597  446353 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:33:39.984617  446353 main.go:141] libmachine: (newest-cni-125279) Calling .DriverName
	I0819 19:33:39.985108  446353 main.go:141] libmachine: Using API Version  1
	I0819 19:33:39.985138  446353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:33:39.985445  446353 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:33:39.985893  446353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:33:39.985910  446353 main.go:141] libmachine: (newest-cni-125279) Calling .DriverName
	I0819 19:33:39.985921  446353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:33:39.986505  446353 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:33:39.986527  446353 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 19:33:39.987392  446353 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0819 19:33:39.987467  446353 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:33:39.987833  446353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 19:33:39.987857  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHHostname
	I0819 19:33:39.988243  446353 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 19:33:39.988266  446353 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 19:33:39.988288  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHHostname
	I0819 19:33:39.990203  446353 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0819 19:33:39.991471  446353 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0819 19:33:39.991486  446353 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0819 19:33:39.991511  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHHostname
	I0819 19:33:39.997153  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:39.997597  446353 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:33:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:33:39.997618  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:39.997899  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHPort
	I0819 19:33:39.997957  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:39.998075  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:33:39.998192  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHUsername
	I0819 19:33:39.998236  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:39.998341  446353 sshutil.go:53] new ssh client: &{IP:192.168.50.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/newest-cni-125279/id_rsa Username:docker}
	I0819 19:33:39.998482  446353 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:33:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:33:39.998505  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:39.998540  446353 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:33:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:33:39.998610  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:39.998757  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHPort
	I0819 19:33:39.998815  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHPort
	I0819 19:33:39.998915  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:33:39.998989  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:33:39.999081  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHUsername
	I0819 19:33:39.999159  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHUsername
	I0819 19:33:39.999201  446353 sshutil.go:53] new ssh client: &{IP:192.168.50.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/newest-cni-125279/id_rsa Username:docker}
	I0819 19:33:39.999435  446353 sshutil.go:53] new ssh client: &{IP:192.168.50.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/newest-cni-125279/id_rsa Username:docker}
	I0819 19:33:40.002169  446353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39299
	I0819 19:33:40.002486  446353 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:33:40.002907  446353 main.go:141] libmachine: Using API Version  1
	I0819 19:33:40.002924  446353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:33:40.003189  446353 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:33:40.003376  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetState
	I0819 19:33:40.004697  446353 main.go:141] libmachine: (newest-cni-125279) Calling .DriverName
	I0819 19:33:40.004918  446353 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 19:33:40.004933  446353 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 19:33:40.004951  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHHostname
	I0819 19:33:40.007263  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:40.007650  446353 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:33:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:33:40.007693  446353 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:33:40.007852  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHPort
	I0819 19:33:40.008028  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:33:40.008168  446353 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHUsername
	I0819 19:33:40.008292  446353 sshutil.go:53] new ssh client: &{IP:192.168.50.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/newest-cni-125279/id_rsa Username:docker}
	I0819 19:33:40.167040  446353 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:33:40.197371  446353 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:33:40.197472  446353 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:33:40.262698  446353 api_server.go:72] duration metric: took 330.815426ms to wait for apiserver process to appear ...
	I0819 19:33:40.262741  446353 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:33:40.262767  446353 api_server.go:253] Checking apiserver healthz at https://192.168.50.232:8443/healthz ...
	I0819 19:33:40.274903  446353 api_server.go:279] https://192.168.50.232:8443/healthz returned 200:
	ok
	I0819 19:33:40.276395  446353 api_server.go:141] control plane version: v1.31.0
	I0819 19:33:40.276429  446353 api_server.go:131] duration metric: took 13.679942ms to wait for apiserver health ...
	I0819 19:33:40.276441  446353 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:33:40.278611  446353 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 19:33:40.278638  446353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 19:33:40.283615  446353 system_pods.go:59] 8 kube-system pods found
	I0819 19:33:40.283655  446353 system_pods.go:61] "coredns-6f6b679f8f-dcvb8" [3d0efe89-70c3-43c2-9504-c29339089833] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 19:33:40.283667  446353 system_pods.go:61] "etcd-newest-cni-125279" [b094a90f-a524-48fe-9401-3865e147c3a9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 19:33:40.283693  446353 system_pods.go:61] "kube-apiserver-newest-cni-125279" [88785c36-87aa-48da-b7e3-75ddcf969dfa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 19:33:40.283746  446353 system_pods.go:61] "kube-controller-manager-newest-cni-125279" [9444e4cc-f675-4201-9ed3-8a69fa70a3cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 19:33:40.283762  446353 system_pods.go:61] "kube-proxy-df7d9" [4e056f03-fc39-4070-8192-1ec53669bc43] Running
	I0819 19:33:40.283772  446353 system_pods.go:61] "kube-scheduler-newest-cni-125279" [0fa163ed-419b-4a4e-81d2-194ff57e91d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 19:33:40.283783  446353 system_pods.go:61] "metrics-server-6867b74b74-7p5bz" [23e8059a-a4ed-47e7-978e-c9d3e692a3bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:33:40.283795  446353 system_pods.go:61] "storage-provisioner" [d97409ec-ee3e-40a0-9054-e0fce384047a] Running
	I0819 19:33:40.283809  446353 system_pods.go:74] duration metric: took 7.358859ms to wait for pod list to return data ...
	I0819 19:33:40.283822  446353 default_sa.go:34] waiting for default service account to be created ...
	I0819 19:33:40.295272  446353 default_sa.go:45] found service account: "default"
	I0819 19:33:40.295304  446353 default_sa.go:55] duration metric: took 11.474533ms for default service account to be created ...
	I0819 19:33:40.295319  446353 kubeadm.go:582] duration metric: took 363.443771ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0819 19:33:40.295341  446353 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:33:40.303658  446353 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:33:40.303702  446353 node_conditions.go:123] node cpu capacity is 2
	I0819 19:33:40.303716  446353 node_conditions.go:105] duration metric: took 8.365868ms to run NodePressure ...
	I0819 19:33:40.303731  446353 start.go:241] waiting for startup goroutines ...
	I0819 19:33:40.308204  446353 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 19:33:40.340111  446353 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 19:33:40.340137  446353 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 19:33:40.398515  446353 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 19:33:40.398542  446353 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 19:33:40.416829  446353 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:33:40.425848  446353 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0819 19:33:40.425881  446353 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0819 19:33:40.439196  446353 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 19:33:40.496713  446353 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0819 19:33:40.496747  446353 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0819 19:33:40.519474  446353 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0819 19:33:40.519510  446353 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0819 19:33:40.573480  446353 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0819 19:33:40.573510  446353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0819 19:33:40.714591  446353 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0819 19:33:40.714620  446353 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0819 19:33:40.745888  446353 main.go:141] libmachine: Making call to close driver server
	I0819 19:33:40.745918  446353 main.go:141] libmachine: (newest-cni-125279) Calling .Close
	I0819 19:33:40.746295  446353 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:33:40.746317  446353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:33:40.746332  446353 main.go:141] libmachine: Making call to close driver server
	I0819 19:33:40.746341  446353 main.go:141] libmachine: (newest-cni-125279) Calling .Close
	I0819 19:33:40.746648  446353 main.go:141] libmachine: (newest-cni-125279) DBG | Closing plugin on server side
	I0819 19:33:40.746698  446353 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:33:40.746710  446353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:33:40.753061  446353 main.go:141] libmachine: Making call to close driver server
	I0819 19:33:40.753082  446353 main.go:141] libmachine: (newest-cni-125279) Calling .Close
	I0819 19:33:40.753368  446353 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:33:40.753390  446353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:33:40.753410  446353 main.go:141] libmachine: (newest-cni-125279) DBG | Closing plugin on server side
	I0819 19:33:40.789647  446353 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0819 19:33:40.789681  446353 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0819 19:33:40.823465  446353 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0819 19:33:40.823510  446353 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0819 19:33:40.853452  446353 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0819 19:33:40.853478  446353 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0819 19:33:40.921343  446353 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0819 19:33:40.921372  446353 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0819 19:33:40.961389  446353 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0819 19:33:42.067411  446353 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.650531105s)
	I0819 19:33:42.067483  446353 main.go:141] libmachine: Making call to close driver server
	I0819 19:33:42.067496  446353 main.go:141] libmachine: (newest-cni-125279) Calling .Close
	I0819 19:33:42.067498  446353 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.628262159s)
	I0819 19:33:42.067535  446353 main.go:141] libmachine: Making call to close driver server
	I0819 19:33:42.067551  446353 main.go:141] libmachine: (newest-cni-125279) Calling .Close
	I0819 19:33:42.067824  446353 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:33:42.067879  446353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:33:42.067906  446353 main.go:141] libmachine: Making call to close driver server
	I0819 19:33:42.067919  446353 main.go:141] libmachine: (newest-cni-125279) Calling .Close
	I0819 19:33:42.067926  446353 main.go:141] libmachine: (newest-cni-125279) DBG | Closing plugin on server side
	I0819 19:33:42.067937  446353 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:33:42.067955  446353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:33:42.067963  446353 main.go:141] libmachine: Making call to close driver server
	I0819 19:33:42.067974  446353 main.go:141] libmachine: (newest-cni-125279) Calling .Close
	I0819 19:33:42.068236  446353 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:33:42.068251  446353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:33:42.068292  446353 main.go:141] libmachine: (newest-cni-125279) DBG | Closing plugin on server side
	I0819 19:33:42.068323  446353 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:33:42.068340  446353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:33:42.068355  446353 addons.go:475] Verifying addon metrics-server=true in "newest-cni-125279"
	I0819 19:33:42.571474  446353 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.610026033s)
	I0819 19:33:42.571548  446353 main.go:141] libmachine: Making call to close driver server
	I0819 19:33:42.571564  446353 main.go:141] libmachine: (newest-cni-125279) Calling .Close
	I0819 19:33:42.572034  446353 main.go:141] libmachine: (newest-cni-125279) DBG | Closing plugin on server side
	I0819 19:33:42.572044  446353 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:33:42.572061  446353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:33:42.572071  446353 main.go:141] libmachine: Making call to close driver server
	I0819 19:33:42.572086  446353 main.go:141] libmachine: (newest-cni-125279) Calling .Close
	I0819 19:33:42.572369  446353 main.go:141] libmachine: (newest-cni-125279) DBG | Closing plugin on server side
	I0819 19:33:42.572399  446353 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:33:42.572409  446353 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:33:42.574078  446353 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-125279 addons enable metrics-server
	
	I0819 19:33:42.575756  446353 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0819 19:33:42.577483  446353 addons.go:510] duration metric: took 2.645566434s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0819 19:33:42.577519  446353 start.go:246] waiting for cluster config update ...
	I0819 19:33:42.577530  446353 start.go:255] writing updated cluster config ...
	I0819 19:33:42.577759  446353 ssh_runner.go:195] Run: rm -f paused
	I0819 19:33:42.632438  446353 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 19:33:42.634203  446353 out.go:177] * Done! kubectl is now configured to use "newest-cni-125279" cluster and "default" namespace by default
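The restart above was gated on the apiserver's /healthz endpoint: the [+]/[-] component lines earlier in the log are its verbose output, and minikube keeps polling until the endpoint returns 200. The same verbose report can be fetched by hand once the profile is up; a minimal sketch, assuming the kubeconfig context matches the profile name from this log:

    # Fetch the verbose health report minikube was polling during the restart.
    kubectl --context newest-cni-125279 get --raw='/healthz?verbose'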
	
	
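The CRI-O excerpt that follows is the runtime-side view of a different profile (default-k8s-diff-port-982795): each Version, ImageFsInfo and ListContainers entry is a CRI RPC arriving over the crio socket, and the ListContainers responses enumerate every container, running or exited, with its pod metadata. The equivalent view from a shell, assuming crictl is present on the node as it is in minikube's guest image:

    # List all containers (running and exited) over the same CRI socket used by the RPCs below.
    minikube -p default-k8s-diff-port-982795 ssh -- sudo crictl ps -a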
	==> CRI-O <==
	Aug 19 19:33:51 default-k8s-diff-port-982795 crio[730]: time="2024-08-19 19:33:51.272776240Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0cbafeb9-6ca5-40b9-9305-700d563dbee8 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:33:51 default-k8s-diff-port-982795 crio[730]: time="2024-08-19 19:33:51.276646134Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=08736de3-80cf-4a43-bae5-74e07f3d9515 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:33:51 default-k8s-diff-port-982795 crio[730]: time="2024-08-19 19:33:51.277116685Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096031277073015,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=08736de3-80cf-4a43-bae5-74e07f3d9515 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:33:51 default-k8s-diff-port-982795 crio[730]: time="2024-08-19 19:33:51.277726881Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0f169adf-9ee9-4ad8-9dd5-cfe0a03cc10a name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:33:51 default-k8s-diff-port-982795 crio[730]: time="2024-08-19 19:33:51.277778450Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0f169adf-9ee9-4ad8-9dd5-cfe0a03cc10a name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:33:51 default-k8s-diff-port-982795 crio[730]: time="2024-08-19 19:33:51.277964776Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:969ba38e33a57295fe0ae35077eb098948d9ad14a5eadeb75d70d2df5295289a,PodSandboxId:9fc5843fbb153651155598b90a297dc31af3ac7da5cf76946dc7bafad7908fda,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724095036707990631,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23fcea86-977e-4eb1-9e5a-23d6bdfb09c0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b9401ae3bfc5674655e1d13bb7496bd41ccca20b8d278eab6f128e796427c95,PodSandboxId:4d054d1fbff16b77f9957a639a00f5eafaf828414580bbd6e0930987680b80d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724095036066638960,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-tlxtt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 150ac4be-bef1-4f0a-ab16-f085284686cb,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74c639aa1e86b4636654569e0285a63b33a3e00dc9fe1f174401d1f5b786fa6f,PodSandboxId:43809f9e43e622bfc096fb062f9c820f2a9a4133e01e9834ac65acb2d5baa7c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724095036114813559,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-845gx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 95155dd2-d46c-4445-b735-26eae16aaff9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd4382f412f381b51dabe019ff226d5c821e8a8ff170e0870e36423dfba1070,PodSandboxId:33766eb0695bf1448d52e8380ba516bb7cbfb3623c2af674317df9995a9a5c26,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1724095035417699307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2v4hk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 042d5d54-6557-4d8e-8f4e-2d56e95882ce,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ad4e1a87c8dd92fb96e88c93c58df7a4370d5c0378707ef83c3589ab5634291,PodSandboxId:aa68f2ac4de4e6586a17408820a3721534fbf7afa250e286b33d905c8a9e553b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724095024442132437,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-982795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20a29fea437a40035b4f2101b4f2c4a4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d64840f8fd90aeb0fa49f228ebece3532df4e5564f7e40e26bb86d9aaf6dcfb5,PodSandboxId:f89efc21d3dcdf6467aeda97c1fac841e1763c22eb314cd0e76f0a71c72347aa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724095024382648303,Labels:map[string]string{io.kub
ernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-982795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2cf4c315e3b88d41e6dc986691274f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:494eae14eb51724902d43998d3f810f2370cd662ade302b988430c49a2785885,PodSandboxId:e7a2601c52192cf94a970f9f4944af034bf847d70d4801df7da08e0c655f46b9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724095024336594430,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-982795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a253c1469ea0e730b1065f9d733602a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2f82cdbdd75559bad4071795545a9340bffc837ab2b039e2b7b11e799cd2c1d,PodSandboxId:285a74dbebcdb93622c6bc3448972534eb4080f3defc38febd351dc052763d9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724095024346517617,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-982795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b56b6e9c850523092949a2b7ecd02a24,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30d8daf89a4b14fad81c80fd2205c514469407210646e4b9675cfa492e267324,PodSandboxId:b92379252bea8c871f830691bc4109470cd1db1d675e62e2fa3efc30511bc314,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724094737429475109,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-982795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2cf4c315e3b88d41e6dc986691274f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0f169adf-9ee9-4ad8-9dd5-cfe0a03cc10a name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:33:51 default-k8s-diff-port-982795 crio[730]: time="2024-08-19 19:33:51.315967488Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c80f06f7-e9f3-4dfd-a4a5-01b19bc61042 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:33:51 default-k8s-diff-port-982795 crio[730]: time="2024-08-19 19:33:51.316041152Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c80f06f7-e9f3-4dfd-a4a5-01b19bc61042 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:33:51 default-k8s-diff-port-982795 crio[730]: time="2024-08-19 19:33:51.317263176Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f83cd93b-e77e-43fb-b1fd-22d6a096fb94 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:33:51 default-k8s-diff-port-982795 crio[730]: time="2024-08-19 19:33:51.317784998Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096031317761625,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f83cd93b-e77e-43fb-b1fd-22d6a096fb94 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:33:51 default-k8s-diff-port-982795 crio[730]: time="2024-08-19 19:33:51.318163767Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8f0cc145-350f-4e05-b186-b845d22b9846 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:33:51 default-k8s-diff-port-982795 crio[730]: time="2024-08-19 19:33:51.318209987Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8f0cc145-350f-4e05-b186-b845d22b9846 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:33:51 default-k8s-diff-port-982795 crio[730]: time="2024-08-19 19:33:51.318497835Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:969ba38e33a57295fe0ae35077eb098948d9ad14a5eadeb75d70d2df5295289a,PodSandboxId:9fc5843fbb153651155598b90a297dc31af3ac7da5cf76946dc7bafad7908fda,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724095036707990631,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23fcea86-977e-4eb1-9e5a-23d6bdfb09c0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b9401ae3bfc5674655e1d13bb7496bd41ccca20b8d278eab6f128e796427c95,PodSandboxId:4d054d1fbff16b77f9957a639a00f5eafaf828414580bbd6e0930987680b80d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724095036066638960,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-tlxtt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 150ac4be-bef1-4f0a-ab16-f085284686cb,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74c639aa1e86b4636654569e0285a63b33a3e00dc9fe1f174401d1f5b786fa6f,PodSandboxId:43809f9e43e622bfc096fb062f9c820f2a9a4133e01e9834ac65acb2d5baa7c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724095036114813559,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-845gx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 95155dd2-d46c-4445-b735-26eae16aaff9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd4382f412f381b51dabe019ff226d5c821e8a8ff170e0870e36423dfba1070,PodSandboxId:33766eb0695bf1448d52e8380ba516bb7cbfb3623c2af674317df9995a9a5c26,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1724095035417699307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2v4hk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 042d5d54-6557-4d8e-8f4e-2d56e95882ce,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ad4e1a87c8dd92fb96e88c93c58df7a4370d5c0378707ef83c3589ab5634291,PodSandboxId:aa68f2ac4de4e6586a17408820a3721534fbf7afa250e286b33d905c8a9e553b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724095024442132437,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-982795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20a29fea437a40035b4f2101b4f2c4a4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d64840f8fd90aeb0fa49f228ebece3532df4e5564f7e40e26bb86d9aaf6dcfb5,PodSandboxId:f89efc21d3dcdf6467aeda97c1fac841e1763c22eb314cd0e76f0a71c72347aa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724095024382648303,Labels:map[string]string{io.kub
ernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-982795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2cf4c315e3b88d41e6dc986691274f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:494eae14eb51724902d43998d3f810f2370cd662ade302b988430c49a2785885,PodSandboxId:e7a2601c52192cf94a970f9f4944af034bf847d70d4801df7da08e0c655f46b9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724095024336594430,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-982795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a253c1469ea0e730b1065f9d733602a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2f82cdbdd75559bad4071795545a9340bffc837ab2b039e2b7b11e799cd2c1d,PodSandboxId:285a74dbebcdb93622c6bc3448972534eb4080f3defc38febd351dc052763d9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724095024346517617,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-982795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b56b6e9c850523092949a2b7ecd02a24,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30d8daf89a4b14fad81c80fd2205c514469407210646e4b9675cfa492e267324,PodSandboxId:b92379252bea8c871f830691bc4109470cd1db1d675e62e2fa3efc30511bc314,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724094737429475109,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-982795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2cf4c315e3b88d41e6dc986691274f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8f0cc145-350f-4e05-b186-b845d22b9846 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:33:51 default-k8s-diff-port-982795 crio[730]: time="2024-08-19 19:33:51.350457543Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ddabaedb-291a-445f-b67f-fae2ee4ec704 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:33:51 default-k8s-diff-port-982795 crio[730]: time="2024-08-19 19:33:51.350549943Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ddabaedb-291a-445f-b67f-fae2ee4ec704 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:33:51 default-k8s-diff-port-982795 crio[730]: time="2024-08-19 19:33:51.351765966Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=726e853b-10c9-4f6e-8f69-835099b026be name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:33:51 default-k8s-diff-port-982795 crio[730]: time="2024-08-19 19:33:51.352135617Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096031352115479,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=726e853b-10c9-4f6e-8f69-835099b026be name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:33:51 default-k8s-diff-port-982795 crio[730]: time="2024-08-19 19:33:51.352782612Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6aefcd81-e222-4168-8a36-30a4d1d9bd31 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:33:51 default-k8s-diff-port-982795 crio[730]: time="2024-08-19 19:33:51.352837776Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6aefcd81-e222-4168-8a36-30a4d1d9bd31 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:33:51 default-k8s-diff-port-982795 crio[730]: time="2024-08-19 19:33:51.353031374Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:969ba38e33a57295fe0ae35077eb098948d9ad14a5eadeb75d70d2df5295289a,PodSandboxId:9fc5843fbb153651155598b90a297dc31af3ac7da5cf76946dc7bafad7908fda,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724095036707990631,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23fcea86-977e-4eb1-9e5a-23d6bdfb09c0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b9401ae3bfc5674655e1d13bb7496bd41ccca20b8d278eab6f128e796427c95,PodSandboxId:4d054d1fbff16b77f9957a639a00f5eafaf828414580bbd6e0930987680b80d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724095036066638960,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-tlxtt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 150ac4be-bef1-4f0a-ab16-f085284686cb,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74c639aa1e86b4636654569e0285a63b33a3e00dc9fe1f174401d1f5b786fa6f,PodSandboxId:43809f9e43e622bfc096fb062f9c820f2a9a4133e01e9834ac65acb2d5baa7c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724095036114813559,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-845gx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 95155dd2-d46c-4445-b735-26eae16aaff9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd4382f412f381b51dabe019ff226d5c821e8a8ff170e0870e36423dfba1070,PodSandboxId:33766eb0695bf1448d52e8380ba516bb7cbfb3623c2af674317df9995a9a5c26,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1724095035417699307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2v4hk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 042d5d54-6557-4d8e-8f4e-2d56e95882ce,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ad4e1a87c8dd92fb96e88c93c58df7a4370d5c0378707ef83c3589ab5634291,PodSandboxId:aa68f2ac4de4e6586a17408820a3721534fbf7afa250e286b33d905c8a9e553b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724095024442132437,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-982795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20a29fea437a40035b4f2101b4f2c4a4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d64840f8fd90aeb0fa49f228ebece3532df4e5564f7e40e26bb86d9aaf6dcfb5,PodSandboxId:f89efc21d3dcdf6467aeda97c1fac841e1763c22eb314cd0e76f0a71c72347aa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724095024382648303,Labels:map[string]string{io.kub
ernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-982795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2cf4c315e3b88d41e6dc986691274f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:494eae14eb51724902d43998d3f810f2370cd662ade302b988430c49a2785885,PodSandboxId:e7a2601c52192cf94a970f9f4944af034bf847d70d4801df7da08e0c655f46b9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724095024336594430,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-982795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a253c1469ea0e730b1065f9d733602a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2f82cdbdd75559bad4071795545a9340bffc837ab2b039e2b7b11e799cd2c1d,PodSandboxId:285a74dbebcdb93622c6bc3448972534eb4080f3defc38febd351dc052763d9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724095024346517617,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-982795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b56b6e9c850523092949a2b7ecd02a24,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30d8daf89a4b14fad81c80fd2205c514469407210646e4b9675cfa492e267324,PodSandboxId:b92379252bea8c871f830691bc4109470cd1db1d675e62e2fa3efc30511bc314,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724094737429475109,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-982795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2cf4c315e3b88d41e6dc986691274f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6aefcd81-e222-4168-8a36-30a4d1d9bd31 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:33:51 default-k8s-diff-port-982795 crio[730]: time="2024-08-19 19:33:51.353787567Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=04705d82-0113-46af-a6a0-ac2894f5e6ce name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 19 19:33:51 default-k8s-diff-port-982795 crio[730]: time="2024-08-19 19:33:51.354036661Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:54bd2f9e0706a0202311210e995a3c31fbc96ed96c50d210ccda70e76c06a6b9,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-2dp5r,Uid:04e0ce68-d9a2-426a-a0e9-47f6f7867efd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724095036626620511,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-2dp5r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04e0ce68-d9a2-426a-a0e9-47f6f7867efd,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T19:17:16.319826418Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9fc5843fbb153651155598b90a297dc31af3ac7da5cf76946dc7bafad7908fda,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:23fcea86-977e-4eb1-9e5a-23d6
bdfb09c0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724095036491483125,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23fcea86-977e-4eb1-9e5a-23d6bdfb09c0,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provision
er\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-19T19:17:16.180511863Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4d054d1fbff16b77f9957a639a00f5eafaf828414580bbd6e0930987680b80d5,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-tlxtt,Uid:150ac4be-bef1-4f0a-ab16-f085284686cb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724095035267790356,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-tlxtt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 150ac4be-bef1-4f0a-ab16-f085284686cb,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T19:17:14.923725415Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:43809f9e43e622bfc096fb062f9c820f2a9a4133e01e9834ac65acb2d5baa7c8,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-845gx,Uid:95155dd2
-d46c-4445-b735-26eae16aaff9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724095035240037425,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-845gx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95155dd2-d46c-4445-b735-26eae16aaff9,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T19:17:14.896266679Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:33766eb0695bf1448d52e8380ba516bb7cbfb3623c2af674317df9995a9a5c26,Metadata:&PodSandboxMetadata{Name:kube-proxy-2v4hk,Uid:042d5d54-6557-4d8e-8f4e-2d56e95882ce,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724095035112872713,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-2v4hk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 042d5d54-6557-4d8e-8f4e-2d56e95882ce,k8s-app: kube-pro
xy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T19:17:14.804468307Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:285a74dbebcdb93622c6bc3448972534eb4080f3defc38febd351dc052763d9b,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-982795,Uid:b56b6e9c850523092949a2b7ecd02a24,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724095024157814062,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-982795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b56b6e9c850523092949a2b7ecd02a24,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b56b6e9c850523092949a2b7ecd02a24,kubernetes.io/config.seen: 2024-08-19T19:17:03.701265551Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f89efc21d3dcdf6467aeda97c1fac841e1763c22eb314cd0e76f0a71c72347aa,Metadata:&PodSandb
oxMetadata{Name:kube-apiserver-default-k8s-diff-port-982795,Uid:2b2cf4c315e3b88d41e6dc986691274f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724095024151784947,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-982795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2cf4c315e3b88d41e6dc986691274f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.48:8444,kubernetes.io/config.hash: 2b2cf4c315e3b88d41e6dc986691274f,kubernetes.io/config.seen: 2024-08-19T19:17:03.701269407Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:aa68f2ac4de4e6586a17408820a3721534fbf7afa250e286b33d905c8a9e553b,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-982795,Uid:20a29fea437a40035b4f2101b4f2c4a4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724095024145052968,Labels:map[strin
g]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-982795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20a29fea437a40035b4f2101b4f2c4a4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.48:2379,kubernetes.io/config.hash: 20a29fea437a40035b4f2101b4f2c4a4,kubernetes.io/config.seen: 2024-08-19T19:17:03.701267688Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e7a2601c52192cf94a970f9f4944af034bf847d70d4801df7da08e0c655f46b9,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-982795,Uid:7a253c1469ea0e730b1065f9d733602a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724095024144842551,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-982795,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 7a253c1469ea0e730b1065f9d733602a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7a253c1469ea0e730b1065f9d733602a,kubernetes.io/config.seen: 2024-08-19T19:17:03.701257325Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b92379252bea8c871f830691bc4109470cd1db1d675e62e2fa3efc30511bc314,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-982795,Uid:2b2cf4c315e3b88d41e6dc986691274f,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724094737159800235,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-982795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2cf4c315e3b88d41e6dc986691274f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.48:8444,kubernetes.io/config.hash: 2b2cf4c315e3b88d41e6dc986691274f,kubernetes.io/config.seen
: 2024-08-19T19:12:16.682050203Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=04705d82-0113-46af-a6a0-ac2894f5e6ce name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 19 19:33:51 default-k8s-diff-port-982795 crio[730]: time="2024-08-19 19:33:51.355451825Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9bac09f9-422a-4767-92e2-de963a45746d name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:33:51 default-k8s-diff-port-982795 crio[730]: time="2024-08-19 19:33:51.355527470Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9bac09f9-422a-4767-92e2-de963a45746d name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:33:51 default-k8s-diff-port-982795 crio[730]: time="2024-08-19 19:33:51.355705303Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:969ba38e33a57295fe0ae35077eb098948d9ad14a5eadeb75d70d2df5295289a,PodSandboxId:9fc5843fbb153651155598b90a297dc31af3ac7da5cf76946dc7bafad7908fda,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724095036707990631,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23fcea86-977e-4eb1-9e5a-23d6bdfb09c0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b9401ae3bfc5674655e1d13bb7496bd41ccca20b8d278eab6f128e796427c95,PodSandboxId:4d054d1fbff16b77f9957a639a00f5eafaf828414580bbd6e0930987680b80d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724095036066638960,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-tlxtt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 150ac4be-bef1-4f0a-ab16-f085284686cb,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74c639aa1e86b4636654569e0285a63b33a3e00dc9fe1f174401d1f5b786fa6f,PodSandboxId:43809f9e43e622bfc096fb062f9c820f2a9a4133e01e9834ac65acb2d5baa7c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724095036114813559,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-845gx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 95155dd2-d46c-4445-b735-26eae16aaff9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd4382f412f381b51dabe019ff226d5c821e8a8ff170e0870e36423dfba1070,PodSandboxId:33766eb0695bf1448d52e8380ba516bb7cbfb3623c2af674317df9995a9a5c26,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1724095035417699307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2v4hk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 042d5d54-6557-4d8e-8f4e-2d56e95882ce,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ad4e1a87c8dd92fb96e88c93c58df7a4370d5c0378707ef83c3589ab5634291,PodSandboxId:aa68f2ac4de4e6586a17408820a3721534fbf7afa250e286b33d905c8a9e553b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724095024442132437,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-982795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20a29fea437a40035b4f2101b4f2c4a4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d64840f8fd90aeb0fa49f228ebece3532df4e5564f7e40e26bb86d9aaf6dcfb5,PodSandboxId:f89efc21d3dcdf6467aeda97c1fac841e1763c22eb314cd0e76f0a71c72347aa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724095024382648303,Labels:map[string]string{io.kub
ernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-982795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2cf4c315e3b88d41e6dc986691274f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:494eae14eb51724902d43998d3f810f2370cd662ade302b988430c49a2785885,PodSandboxId:e7a2601c52192cf94a970f9f4944af034bf847d70d4801df7da08e0c655f46b9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724095024336594430,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-982795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a253c1469ea0e730b1065f9d733602a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2f82cdbdd75559bad4071795545a9340bffc837ab2b039e2b7b11e799cd2c1d,PodSandboxId:285a74dbebcdb93622c6bc3448972534eb4080f3defc38febd351dc052763d9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724095024346517617,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-982795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b56b6e9c850523092949a2b7ecd02a24,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30d8daf89a4b14fad81c80fd2205c514469407210646e4b9675cfa492e267324,PodSandboxId:b92379252bea8c871f830691bc4109470cd1db1d675e62e2fa3efc30511bc314,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724094737429475109,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-982795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2cf4c315e3b88d41e6dc986691274f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9bac09f9-422a-4767-92e2-de963a45746d name=/runtime.v1.RuntimeService/ListContainers
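The ListContainers, ListPodSandbox, Version and ImageFsInfo requests logged above are the standard CRI RPCs that the kubelet (and this log collector) issue against CRI-O. As a rough sketch, the same queries can usually be reproduced by hand on the node with crictl pointed at the crio socket recorded in the node annotations (unix:///var/run/crio/crio.sock); the profile name matches this cluster, but the exact crictl binary available inside the VM is an assumption:

  $ out/minikube-linux-amd64 -p default-k8s-diff-port-982795 ssh
  $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a        # ListContainers
  $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock pods         # ListPodSandbox
  $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version      # Version
  $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo  # ImageFsInfo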
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	969ba38e33a57       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   9fc5843fbb153       storage-provisioner
	74c639aa1e86b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   43809f9e43e62       coredns-6f6b679f8f-845gx
	8b9401ae3bfc5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   4d054d1fbff16       coredns-6f6b679f8f-tlxtt
	5fd4382f412f3       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   16 minutes ago      Running             kube-proxy                0                   33766eb0695bf       kube-proxy-2v4hk
	0ad4e1a87c8dd       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   16 minutes ago      Running             etcd                      2                   aa68f2ac4de4e       etcd-default-k8s-diff-port-982795
	d64840f8fd90a       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   16 minutes ago      Running             kube-apiserver            2                   f89efc21d3dcd       kube-apiserver-default-k8s-diff-port-982795
	a2f82cdbdd755       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   16 minutes ago      Running             kube-scheduler            2                   285a74dbebcdb       kube-scheduler-default-k8s-diff-port-982795
	494eae14eb517       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   16 minutes ago      Running             kube-controller-manager   2                   e7a2601c52192       kube-controller-manager-default-k8s-diff-port-982795
	30d8daf89a4b1       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   21 minutes ago      Exited              kube-apiserver            1                   b92379252bea8       kube-apiserver-default-k8s-diff-port-982795
	
	
	==> coredns [74c639aa1e86b4636654569e0285a63b33a3e00dc9fe1f174401d1f5b786fa6f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [8b9401ae3bfc5674655e1d13bb7496bd41ccca20b8d278eab6f128e796427c95] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
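Both coredns-6f6b679f8f replicas above report the same configuration SHA512, i.e. they loaded an identical Corefile. A minimal way to inspect that configuration and the live DNS pods, assuming the kubeconfig context carries the profile name as minikube sets it by default:

  $ kubectl --context default-k8s-diff-port-982795 -n kube-system get configmap coredns -o yaml
  $ kubectl --context default-k8s-diff-port-982795 -n kube-system get pods -l k8s-app=kube-dns
  $ kubectl --context default-k8s-diff-port-982795 -n kube-system logs -l k8s-app=kube-dns --tail=20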
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-982795
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-982795
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411
	                    minikube.k8s.io/name=default-k8s-diff-port-982795
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T19_17_10_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 19:17:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-982795
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 19:33:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 19:32:38 +0000   Mon, 19 Aug 2024 19:17:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 19:32:38 +0000   Mon, 19 Aug 2024 19:17:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 19:32:38 +0000   Mon, 19 Aug 2024 19:17:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 19:32:38 +0000   Mon, 19 Aug 2024 19:17:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.48
	  Hostname:    default-k8s-diff-port-982795
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5fe42ac5581841238013e0b5a8d735d5
	  System UUID:                5fe42ac5-5818-4123-8013-e0b5a8d735d5
	  Boot ID:                    0ef2d057-cbc7-4e03-9f12-efbb79dcf255
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-845gx                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-6f6b679f8f-tlxtt                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-default-k8s-diff-port-982795                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-default-k8s-diff-port-982795             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-982795    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-2v4hk                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-default-k8s-diff-port-982795             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-6867b74b74-2dp5r                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 16m   kube-proxy       
	  Normal  Starting                 16m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m   kubelet          Node default-k8s-diff-port-982795 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m   kubelet          Node default-k8s-diff-port-982795 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m   kubelet          Node default-k8s-diff-port-982795 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m   node-controller  Node default-k8s-diff-port-982795 event: Registered Node default-k8s-diff-port-982795 in Controller
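The node description above is standard describe output; the 47% CPU figure under Allocated resources is simply the sum of pod CPU requests (950m) over the node's 2 allocatable CPUs. It can be regenerated against this profile (context name assumed to match the profile name, as minikube sets it by default) with:

  $ kubectl --context default-k8s-diff-port-982795 describe node default-k8s-diff-port-982795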
	
	
	==> dmesg <==
	[  +0.050691] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040093] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.788295] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.554258] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[Aug19 19:12] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.614118] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.059749] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065134] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +0.166021] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +0.130130] systemd-fstab-generator[684]: Ignoring "noauto" option for root device
	[  +0.311198] systemd-fstab-generator[713]: Ignoring "noauto" option for root device
	[  +4.251324] systemd-fstab-generator[811]: Ignoring "noauto" option for root device
	[  +0.070295] kauditd_printk_skb: 148 callbacks suppressed
	[  +2.297766] systemd-fstab-generator[931]: Ignoring "noauto" option for root device
	[  +4.589476] kauditd_printk_skb: 79 callbacks suppressed
	[  +6.948777] kauditd_printk_skb: 85 callbacks suppressed
	[Aug19 19:17] systemd-fstab-generator[2590]: Ignoring "noauto" option for root device
	[  +0.062731] kauditd_printk_skb: 8 callbacks suppressed
	[  +6.012148] systemd-fstab-generator[2912]: Ignoring "noauto" option for root device
	[  +0.076737] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.338268] systemd-fstab-generator[3044]: Ignoring "noauto" option for root device
	[  +0.127902] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.851934] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [0ad4e1a87c8dd92fb96e88c93c58df7a4370d5c0378707ef83c3589ab5634291] <==
	{"level":"info","ts":"2024-08-19T19:32:38.362049Z","caller":"traceutil/trace.go:171","msg":"trace[319001936] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1199; }","duration":"136.768573ms","start":"2024-08-19T19:32:38.225267Z","end":"2024-08-19T19:32:38.362036Z","steps":["trace[319001936] 'range keys from in-memory index tree'  (duration: 136.45407ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T19:32:38.362139Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"167.296568ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T19:32:38.362207Z","caller":"traceutil/trace.go:171","msg":"trace[1639950217] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1199; }","duration":"167.398553ms","start":"2024-08-19T19:32:38.194797Z","end":"2024-08-19T19:32:38.362196Z","steps":["trace[1639950217] 'range keys from in-memory index tree'  (duration: 167.22916ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T19:32:38.756649Z","caller":"traceutil/trace.go:171","msg":"trace[1031387135] linearizableReadLoop","detail":"{readStateIndex:1399; appliedIndex:1398; }","duration":"166.587684ms","start":"2024-08-19T19:32:38.590047Z","end":"2024-08-19T19:32:38.756634Z","steps":["trace[1031387135] 'read index received'  (duration: 166.402017ms)","trace[1031387135] 'applied index is now lower than readState.Index'  (duration: 185.122µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T19:32:38.756784Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"166.715609ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings/\" range_end:\"/registry/rolebindings0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-19T19:32:38.756823Z","caller":"traceutil/trace.go:171","msg":"trace[162057534] range","detail":"{range_begin:/registry/rolebindings/; range_end:/registry/rolebindings0; response_count:0; response_revision:1200; }","duration":"166.77376ms","start":"2024-08-19T19:32:38.590042Z","end":"2024-08-19T19:32:38.756816Z","steps":["trace[162057534] 'agreement among raft nodes before linearized reading'  (duration: 166.662711ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T19:32:38.757005Z","caller":"traceutil/trace.go:171","msg":"trace[163167999] transaction","detail":"{read_only:false; response_revision:1200; number_of_response:1; }","duration":"371.782072ms","start":"2024-08-19T19:32:38.385216Z","end":"2024-08-19T19:32:38.756998Z","steps":["trace[163167999] 'process raft request'  (duration: 371.286889ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T19:32:38.757755Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T19:32:38.385199Z","time spent":"371.843836ms","remote":"127.0.0.1:52006","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5738,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/default-k8s-diff-port-982795\" mod_revision:952 > success:<request_put:<key:\"/registry/minions/default-k8s-diff-port-982795\" value_size:5684 >> failure:<request_range:<key:\"/registry/minions/default-k8s-diff-port-982795\" > >"}
	{"level":"warn","ts":"2024-08-19T19:32:39.008194Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.980443ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T19:32:39.008417Z","caller":"traceutil/trace.go:171","msg":"trace[383919272] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1200; }","duration":"122.207875ms","start":"2024-08-19T19:32:38.886194Z","end":"2024-08-19T19:32:39.008402Z","steps":["trace[383919272] 'range keys from in-memory index tree'  (duration: 121.933797ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T19:32:39.008469Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"170.544392ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.61.48\" ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2024-08-19T19:32:39.008539Z","caller":"traceutil/trace.go:171","msg":"trace[1918395405] range","detail":"{range_begin:/registry/masterleases/192.168.61.48; range_end:; response_count:1; response_revision:1200; }","duration":"170.62547ms","start":"2024-08-19T19:32:38.837902Z","end":"2024-08-19T19:32:39.008528Z","steps":["trace[1918395405] 'range keys from in-memory index tree'  (duration: 170.088132ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T19:33:34.393448Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.861416ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11661668178472108265 > lease_revoke:<id:21d6916c124e808c>","response":"size:28"}
	{"level":"info","ts":"2024-08-19T19:33:34.393813Z","caller":"traceutil/trace.go:171","msg":"trace[1062071139] linearizableReadLoop","detail":"{readStateIndex:1454; appliedIndex:1453; }","duration":"277.981319ms","start":"2024-08-19T19:33:34.115817Z","end":"2024-08-19T19:33:34.393798Z","steps":["trace[1062071139] 'read index received'  (duration: 145.457673ms)","trace[1062071139] 'applied index is now lower than readState.Index'  (duration: 132.522323ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T19:33:34.394551Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"204.687864ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T19:33:34.394621Z","caller":"traceutil/trace.go:171","msg":"trace[930045489] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1243; }","duration":"204.754103ms","start":"2024-08-19T19:33:34.189857Z","end":"2024-08-19T19:33:34.394611Z","steps":["trace[930045489] 'agreement among raft nodes before linearized reading'  (duration: 204.670575ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T19:33:34.394381Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"278.541649ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/statefulsets/\" range_end:\"/registry/statefulsets0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T19:33:34.395042Z","caller":"traceutil/trace.go:171","msg":"trace[122111402] range","detail":"{range_begin:/registry/statefulsets/; range_end:/registry/statefulsets0; response_count:0; response_revision:1243; }","duration":"279.218695ms","start":"2024-08-19T19:33:34.115812Z","end":"2024-08-19T19:33:34.395031Z","steps":["trace[122111402] 'agreement among raft nodes before linearized reading'  (duration: 278.454643ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T19:33:34.395282Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"170.599565ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T19:33:34.395407Z","caller":"traceutil/trace.go:171","msg":"trace[1645142038] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1243; }","duration":"170.72588ms","start":"2024-08-19T19:33:34.224669Z","end":"2024-08-19T19:33:34.395395Z","steps":["trace[1645142038] 'agreement among raft nodes before linearized reading'  (duration: 170.586541ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T19:33:34.707696Z","caller":"traceutil/trace.go:171","msg":"trace[949922685] linearizableReadLoop","detail":"{readStateIndex:1455; appliedIndex:1454; }","duration":"133.696486ms","start":"2024-08-19T19:33:34.573982Z","end":"2024-08-19T19:33:34.707678Z","steps":["trace[949922685] 'read index received'  (duration: 133.509148ms)","trace[949922685] 'applied index is now lower than readState.Index'  (duration: 186.78µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T19:33:34.707818Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.819108ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-2dp5r\" ","response":"range_response_count:1 size:4353"}
	{"level":"info","ts":"2024-08-19T19:33:34.707840Z","caller":"traceutil/trace.go:171","msg":"trace[1836131628] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-6867b74b74-2dp5r; range_end:; response_count:1; response_revision:1244; }","duration":"133.854877ms","start":"2024-08-19T19:33:34.573978Z","end":"2024-08-19T19:33:34.707833Z","steps":["trace[1836131628] 'agreement among raft nodes before linearized reading'  (duration: 133.7789ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T19:33:34.708028Z","caller":"traceutil/trace.go:171","msg":"trace[858300491] transaction","detail":"{read_only:false; response_revision:1244; number_of_response:1; }","duration":"308.267689ms","start":"2024-08-19T19:33:34.399739Z","end":"2024-08-19T19:33:34.708006Z","steps":["trace[858300491] 'process raft request'  (duration: 307.792687ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T19:33:34.708195Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T19:33:34.399716Z","time spent":"308.373713ms","remote":"127.0.0.1:52002","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1243 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	
	==> kernel <==
	 19:33:51 up 21 min,  0 users,  load average: 0.13, 0.15, 0.11
	Linux default-k8s-diff-port-982795 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [30d8daf89a4b14fad81c80fd2205c514469407210646e4b9675cfa492e267324] <==
	W0819 19:16:57.273761       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:16:57.291939       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:16:57.327531       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:16:57.359369       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:16:57.420464       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:16:57.440951       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:16:57.473158       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:16:57.513524       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:16:57.515031       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:16:57.566601       1 logging.go:55] [core] [Channel #2 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:16:57.635403       1 logging.go:55] [core] [Channel #9 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:16:57.660378       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:16:57.719013       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:16:57.783024       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:16:57.833956       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:16:57.874545       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:16:57.947765       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:16:58.083189       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:16:58.160783       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:16:58.167264       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:16:58.312596       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:16:58.325773       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:16:58.460880       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:17:01.656387       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 19:17:01.978055       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [d64840f8fd90aeb0fa49f228ebece3532df4e5564f7e40e26bb86d9aaf6dcfb5] <==
	I0819 19:30:08.068129       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 19:30:08.068209       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0819 19:32:07.066897       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 19:32:07.067003       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0819 19:32:08.069008       1 handler_proxy.go:99] no RequestInfo found in the context
	W0819 19:32:08.069091       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 19:32:08.069158       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0819 19:32:08.069192       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0819 19:32:08.070358       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 19:32:08.070403       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0819 19:33:08.070608       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 19:33:08.070663       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0819 19:33:08.070811       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 19:33:08.070947       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0819 19:33:08.071811       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 19:33:08.072925       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [494eae14eb51724902d43998d3f810f2370cd662ade302b988430c49a2785885] <==
	I0819 19:28:44.586327       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="136.342µs"
	I0819 19:28:44.622581       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 19:29:14.153086       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:29:14.630700       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 19:29:44.160116       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:29:44.637925       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 19:30:14.168517       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:30:14.648112       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 19:30:44.175461       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:30:44.657901       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 19:31:14.182788       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:31:14.665834       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 19:31:44.189399       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:31:44.675577       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 19:32:14.197861       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:32:14.684245       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0819 19:32:38.761138       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-982795"
	E0819 19:32:44.203848       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:32:44.692162       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 19:33:14.209697       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:33:14.702252       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0819 19:33:34.723258       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="268µs"
	E0819 19:33:44.216909       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:33:44.711727       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0819 19:33:48.587432       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="158.394µs"
	
	
	==> kube-proxy [5fd4382f412f381b51dabe019ff226d5c821e8a8ff170e0870e36423dfba1070] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 19:17:16.067455       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 19:17:16.115926       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.48"]
	E0819 19:17:16.116011       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 19:17:16.392901       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 19:17:16.396726       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 19:17:16.396879       1 server_linux.go:169] "Using iptables Proxier"
	I0819 19:17:16.405847       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 19:17:16.406223       1 server.go:483] "Version info" version="v1.31.0"
	I0819 19:17:16.406281       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 19:17:16.417002       1 config.go:197] "Starting service config controller"
	I0819 19:17:16.417200       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 19:17:16.417350       1 config.go:104] "Starting endpoint slice config controller"
	I0819 19:17:16.417377       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 19:17:16.420542       1 config.go:326] "Starting node config controller"
	I0819 19:17:16.420654       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 19:17:16.526471       1 shared_informer.go:320] Caches are synced for node config
	I0819 19:17:16.526527       1 shared_informer.go:320] Caches are synced for service config
	I0819 19:17:16.526571       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [a2f82cdbdd75559bad4071795545a9340bffc837ab2b039e2b7b11e799cd2c1d] <==
	E0819 19:17:07.115471       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0819 19:17:07.115651       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 19:17:07.110696       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 19:17:07.115689       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 19:17:07.110885       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 19:17:07.115777       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0819 19:17:07.936235       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 19:17:07.936328       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 19:17:08.049412       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 19:17:08.049642       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0819 19:17:08.106830       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 19:17:08.106884       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 19:17:08.161548       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 19:17:08.161603       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 19:17:08.183931       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 19:17:08.183964       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 19:17:08.204353       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 19:17:08.204415       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0819 19:17:08.240854       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 19:17:08.240922       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 19:17:08.297136       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 19:17:08.297233       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 19:17:08.328797       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 19:17:08.328863       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0819 19:17:09.892179       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 19:32:53 default-k8s-diff-port-982795 kubelet[2919]: E0819 19:32:53.572386    2919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2dp5r" podUID="04e0ce68-d9a2-426a-a0e9-47f6f7867efd"
	Aug 19 19:32:59 default-k8s-diff-port-982795 kubelet[2919]: E0819 19:32:59.862755    2919 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095979862040753,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:32:59 default-k8s-diff-port-982795 kubelet[2919]: E0819 19:32:59.862802    2919 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095979862040753,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:33:05 default-k8s-diff-port-982795 kubelet[2919]: E0819 19:33:05.572224    2919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2dp5r" podUID="04e0ce68-d9a2-426a-a0e9-47f6f7867efd"
	Aug 19 19:33:09 default-k8s-diff-port-982795 kubelet[2919]: E0819 19:33:09.590344    2919 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 19:33:09 default-k8s-diff-port-982795 kubelet[2919]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 19:33:09 default-k8s-diff-port-982795 kubelet[2919]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 19:33:09 default-k8s-diff-port-982795 kubelet[2919]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 19:33:09 default-k8s-diff-port-982795 kubelet[2919]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 19:33:09 default-k8s-diff-port-982795 kubelet[2919]: E0819 19:33:09.865199    2919 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095989864670026,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:33:09 default-k8s-diff-port-982795 kubelet[2919]: E0819 19:33:09.865263    2919 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095989864670026,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:33:19 default-k8s-diff-port-982795 kubelet[2919]: E0819 19:33:19.590445    2919 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Aug 19 19:33:19 default-k8s-diff-port-982795 kubelet[2919]: E0819 19:33:19.590573    2919 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Aug 19 19:33:19 default-k8s-diff-port-982795 kubelet[2919]: E0819 19:33:19.590770    2919 kuberuntime_manager.go:1272] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z4hjm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-2dp5r_kube-system(04e0ce68-d9a2-426a-a0e9-47f6f7867efd): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Aug 19 19:33:19 default-k8s-diff-port-982795 kubelet[2919]: E0819 19:33:19.592127    2919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-2dp5r" podUID="04e0ce68-d9a2-426a-a0e9-47f6f7867efd"
	Aug 19 19:33:19 default-k8s-diff-port-982795 kubelet[2919]: E0819 19:33:19.867263    2919 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095999866703049,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:33:19 default-k8s-diff-port-982795 kubelet[2919]: E0819 19:33:19.867349    2919 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095999866703049,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:33:29 default-k8s-diff-port-982795 kubelet[2919]: E0819 19:33:29.868905    2919 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096009868448858,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:33:29 default-k8s-diff-port-982795 kubelet[2919]: E0819 19:33:29.868949    2919 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096009868448858,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:33:34 default-k8s-diff-port-982795 kubelet[2919]: E0819 19:33:34.572397    2919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2dp5r" podUID="04e0ce68-d9a2-426a-a0e9-47f6f7867efd"
	Aug 19 19:33:39 default-k8s-diff-port-982795 kubelet[2919]: E0819 19:33:39.871106    2919 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096019870136280,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:33:39 default-k8s-diff-port-982795 kubelet[2919]: E0819 19:33:39.871889    2919 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096019870136280,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:33:48 default-k8s-diff-port-982795 kubelet[2919]: E0819 19:33:48.572581    2919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2dp5r" podUID="04e0ce68-d9a2-426a-a0e9-47f6f7867efd"
	Aug 19 19:33:49 default-k8s-diff-port-982795 kubelet[2919]: E0819 19:33:49.873427    2919 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096029873127843,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:33:49 default-k8s-diff-port-982795 kubelet[2919]: E0819 19:33:49.873472    2919 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724096029873127843,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [969ba38e33a57295fe0ae35077eb098948d9ad14a5eadeb75d70d2df5295289a] <==
	I0819 19:17:16.881890       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 19:17:16.904892       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 19:17:16.904949       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 19:17:16.924712       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 19:17:16.924902       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-982795_35cf21ea-e4cf-494e-9cf4-a85c0b6ad5c5!
	I0819 19:17:16.927587       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3ad7ea45-7ee9-466d-bf0b-37c20ee983b7", APIVersion:"v1", ResourceVersion:"396", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-982795_35cf21ea-e4cf-494e-9cf4-a85c0b6ad5c5 became leader
	I0819 19:17:17.025699       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-982795_35cf21ea-e4cf-494e-9cf4-a85c0b6ad5c5!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-982795 -n default-k8s-diff-port-982795
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-982795 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-2dp5r
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-982795 describe pod metrics-server-6867b74b74-2dp5r
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-982795 describe pod metrics-server-6867b74b74-2dp5r: exit status 1 (60.885879ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-2dp5r" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-982795 describe pod metrics-server-6867b74b74-2dp5r: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (442.24s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (388.47s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-024748 -n embed-certs-024748
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-19 19:33:01.178299159 +0000 UTC m=+6528.378225979
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-024748 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-024748 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.963µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-024748 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-024748 -n embed-certs-024748
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-024748 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-024748 logs -n 25: (1.227697305s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-571803                           | enable-default-cni-571803    | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | sudo crio config                                       |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-571803                           | enable-default-cni-571803    | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	| delete  | -p                                                     | disable-driver-mounts-737091 | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | disable-driver-mounts-737091                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-982795 | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:04 UTC |
	|         | default-k8s-diff-port-982795                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-278232             | no-preload-278232            | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC | 19 Aug 24 19:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-278232                                   | no-preload-278232            | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-982795  | default-k8s-diff-port-982795 | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC | 19 Aug 24 19:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-982795 | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC |                     |
	|         | default-k8s-diff-port-982795                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-024748            | embed-certs-024748           | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC | 19 Aug 24 19:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-024748                                  | embed-certs-024748           | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-104669        | old-k8s-version-104669       | jenkins | v1.33.1 | 19 Aug 24 19:06 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-278232                  | no-preload-278232            | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-278232                                   | no-preload-278232            | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC | 19 Aug 24 19:18 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-982795       | default-k8s-diff-port-982795 | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-024748                 | embed-certs-024748           | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-982795 | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC | 19 Aug 24 19:17 UTC |
	|         | default-k8s-diff-port-982795                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-024748                                  | embed-certs-024748           | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC | 19 Aug 24 19:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-104669                              | old-k8s-version-104669       | jenkins | v1.33.1 | 19 Aug 24 19:08 UTC | 19 Aug 24 19:08 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-104669             | old-k8s-version-104669       | jenkins | v1.33.1 | 19 Aug 24 19:08 UTC | 19 Aug 24 19:08 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-104669                              | old-k8s-version-104669       | jenkins | v1.33.1 | 19 Aug 24 19:08 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-104669                              | old-k8s-version-104669       | jenkins | v1.33.1 | 19 Aug 24 19:32 UTC | 19 Aug 24 19:32 UTC |
	| start   | -p newest-cni-125279 --memory=2200 --alsologtostderr   | newest-cni-125279            | jenkins | v1.33.1 | 19 Aug 24 19:32 UTC | 19 Aug 24 19:32 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-278232                                   | no-preload-278232            | jenkins | v1.33.1 | 19 Aug 24 19:32 UTC | 19 Aug 24 19:32 UTC |
	| addons  | enable metrics-server -p newest-cni-125279             | newest-cni-125279            | jenkins | v1.33.1 | 19 Aug 24 19:32 UTC | 19 Aug 24 19:32 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-125279                                   | newest-cni-125279            | jenkins | v1.33.1 | 19 Aug 24 19:32 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 19:32:02
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 19:32:02.317801  445411 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:32:02.317947  445411 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:32:02.317956  445411 out.go:358] Setting ErrFile to fd 2...
	I0819 19:32:02.317960  445411 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:32:02.318140  445411 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-372744/.minikube/bin
	I0819 19:32:02.318721  445411 out.go:352] Setting JSON to false
	I0819 19:32:02.319959  445411 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":11665,"bootTime":1724084257,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 19:32:02.320029  445411 start.go:139] virtualization: kvm guest
	I0819 19:32:02.323301  445411 out.go:177] * [newest-cni-125279] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 19:32:02.324805  445411 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 19:32:02.324907  445411 notify.go:220] Checking for updates...
	I0819 19:32:02.327570  445411 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 19:32:02.328867  445411 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 19:32:02.330030  445411 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 19:32:02.331155  445411 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 19:32:02.332531  445411 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 19:32:02.334284  445411 config.go:182] Loaded profile config "default-k8s-diff-port-982795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:32:02.334382  445411 config.go:182] Loaded profile config "embed-certs-024748": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:32:02.334468  445411 config.go:182] Loaded profile config "no-preload-278232": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:32:02.334557  445411 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 19:32:02.372274  445411 out.go:177] * Using the kvm2 driver based on user configuration
	I0819 19:32:02.373727  445411 start.go:297] selected driver: kvm2
	I0819 19:32:02.373751  445411 start.go:901] validating driver "kvm2" against <nil>
	I0819 19:32:02.373765  445411 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 19:32:02.374812  445411 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 19:32:02.374907  445411 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19468-372744/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 19:32:02.391377  445411 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 19:32:02.391431  445411 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0819 19:32:02.391476  445411 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0819 19:32:02.391744  445411 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0819 19:32:02.391833  445411 cni.go:84] Creating CNI manager for ""
	I0819 19:32:02.391853  445411 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:32:02.391867  445411 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 19:32:02.391941  445411 start.go:340] cluster config:
	{Name:newest-cni-125279 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-125279 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:32:02.392094  445411 iso.go:125] acquiring lock: {Name:mk4c0ac1c3202b1a296739df622960e7a0bd8566 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 19:32:02.394156  445411 out.go:177] * Starting "newest-cni-125279" primary control-plane node in "newest-cni-125279" cluster
	I0819 19:32:02.395383  445411 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 19:32:02.395423  445411 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 19:32:02.395432  445411 cache.go:56] Caching tarball of preloaded images
	I0819 19:32:02.395526  445411 preload.go:172] Found /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 19:32:02.395540  445411 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 19:32:02.395701  445411 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/newest-cni-125279/config.json ...
	I0819 19:32:02.395728  445411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/newest-cni-125279/config.json: {Name:mk56c54824bf8b7ba5a8e97517d1b3bc99bf8d31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:32:02.395952  445411 start.go:360] acquireMachinesLock for newest-cni-125279: {Name:mk24ba67a747357e9ce40f1e460d2bb0bc59cc75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 19:32:02.395997  445411 start.go:364] duration metric: took 24.973µs to acquireMachinesLock for "newest-cni-125279"
	I0819 19:32:02.396022  445411 start.go:93] Provisioning new machine with config: &{Name:newest-cni-125279 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-125279 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 19:32:02.396105  445411 start.go:125] createHost starting for "" (driver="kvm2")
	I0819 19:32:02.397869  445411 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 19:32:02.398005  445411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:32:02.398039  445411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:32:02.413037  445411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36907
	I0819 19:32:02.413486  445411 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:32:02.414025  445411 main.go:141] libmachine: Using API Version  1
	I0819 19:32:02.414046  445411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:32:02.414404  445411 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:32:02.414604  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetMachineName
	I0819 19:32:02.414754  445411 main.go:141] libmachine: (newest-cni-125279) Calling .DriverName
	I0819 19:32:02.414893  445411 start.go:159] libmachine.API.Create for "newest-cni-125279" (driver="kvm2")
	I0819 19:32:02.414922  445411 client.go:168] LocalClient.Create starting
	I0819 19:32:02.414957  445411 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem
	I0819 19:32:02.414993  445411 main.go:141] libmachine: Decoding PEM data...
	I0819 19:32:02.415011  445411 main.go:141] libmachine: Parsing certificate...
	I0819 19:32:02.415074  445411 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem
	I0819 19:32:02.415093  445411 main.go:141] libmachine: Decoding PEM data...
	I0819 19:32:02.415106  445411 main.go:141] libmachine: Parsing certificate...
	I0819 19:32:02.415124  445411 main.go:141] libmachine: Running pre-create checks...
	I0819 19:32:02.415133  445411 main.go:141] libmachine: (newest-cni-125279) Calling .PreCreateCheck
	I0819 19:32:02.415482  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetConfigRaw
	I0819 19:32:02.415858  445411 main.go:141] libmachine: Creating machine...
	I0819 19:32:02.415872  445411 main.go:141] libmachine: (newest-cni-125279) Calling .Create
	I0819 19:32:02.416006  445411 main.go:141] libmachine: (newest-cni-125279) Creating KVM machine...
	I0819 19:32:02.417294  445411 main.go:141] libmachine: (newest-cni-125279) DBG | found existing default KVM network
	I0819 19:32:02.418650  445411 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:32:02.418450  445433 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:c6:65:2e} reservation:<nil>}
	I0819 19:32:02.420057  445411 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:32:02.419936  445433 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a8720}
	I0819 19:32:02.420087  445411 main.go:141] libmachine: (newest-cni-125279) DBG | created network xml: 
	I0819 19:32:02.420097  445411 main.go:141] libmachine: (newest-cni-125279) DBG | <network>
	I0819 19:32:02.420106  445411 main.go:141] libmachine: (newest-cni-125279) DBG |   <name>mk-newest-cni-125279</name>
	I0819 19:32:02.420115  445411 main.go:141] libmachine: (newest-cni-125279) DBG |   <dns enable='no'/>
	I0819 19:32:02.420130  445411 main.go:141] libmachine: (newest-cni-125279) DBG |   
	I0819 19:32:02.420157  445411 main.go:141] libmachine: (newest-cni-125279) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0819 19:32:02.420173  445411 main.go:141] libmachine: (newest-cni-125279) DBG |     <dhcp>
	I0819 19:32:02.420184  445411 main.go:141] libmachine: (newest-cni-125279) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0819 19:32:02.420202  445411 main.go:141] libmachine: (newest-cni-125279) DBG |     </dhcp>
	I0819 19:32:02.420215  445411 main.go:141] libmachine: (newest-cni-125279) DBG |   </ip>
	I0819 19:32:02.420226  445411 main.go:141] libmachine: (newest-cni-125279) DBG |   
	I0819 19:32:02.420235  445411 main.go:141] libmachine: (newest-cni-125279) DBG | </network>
	I0819 19:32:02.420244  445411 main.go:141] libmachine: (newest-cni-125279) DBG | 
	I0819 19:32:02.425975  445411 main.go:141] libmachine: (newest-cni-125279) DBG | trying to create private KVM network mk-newest-cni-125279 192.168.50.0/24...
	I0819 19:32:02.500440  445411 main.go:141] libmachine: (newest-cni-125279) DBG | private KVM network mk-newest-cni-125279 192.168.50.0/24 created
	I0819 19:32:02.500473  445411 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:32:02.500424  445433 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 19:32:02.500483  445411 main.go:141] libmachine: (newest-cni-125279) Setting up store path in /home/jenkins/minikube-integration/19468-372744/.minikube/machines/newest-cni-125279 ...
	I0819 19:32:02.500494  445411 main.go:141] libmachine: (newest-cni-125279) Building disk image from file:///home/jenkins/minikube-integration/19468-372744/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 19:32:02.500640  445411 main.go:141] libmachine: (newest-cni-125279) Downloading /home/jenkins/minikube-integration/19468-372744/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19468-372744/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 19:32:02.798963  445411 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:32:02.798783  445433 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/newest-cni-125279/id_rsa...
	I0819 19:32:03.381276  445411 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:32:03.381113  445433 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/newest-cni-125279/newest-cni-125279.rawdisk...
	I0819 19:32:03.381317  445411 main.go:141] libmachine: (newest-cni-125279) DBG | Writing magic tar header
	I0819 19:32:03.381337  445411 main.go:141] libmachine: (newest-cni-125279) DBG | Writing SSH key tar header
	I0819 19:32:03.381352  445411 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:32:03.381273  445433 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19468-372744/.minikube/machines/newest-cni-125279 ...
	I0819 19:32:03.381457  445411 main.go:141] libmachine: (newest-cni-125279) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/newest-cni-125279
	I0819 19:32:03.381486  445411 main.go:141] libmachine: (newest-cni-125279) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744/.minikube/machines/newest-cni-125279 (perms=drwx------)
	I0819 19:32:03.381497  445411 main.go:141] libmachine: (newest-cni-125279) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744/.minikube/machines
	I0819 19:32:03.381516  445411 main.go:141] libmachine: (newest-cni-125279) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 19:32:03.381533  445411 main.go:141] libmachine: (newest-cni-125279) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744/.minikube/machines (perms=drwxr-xr-x)
	I0819 19:32:03.381543  445411 main.go:141] libmachine: (newest-cni-125279) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744
	I0819 19:32:03.381554  445411 main.go:141] libmachine: (newest-cni-125279) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 19:32:03.381564  445411 main.go:141] libmachine: (newest-cni-125279) DBG | Checking permissions on dir: /home/jenkins
	I0819 19:32:03.381585  445411 main.go:141] libmachine: (newest-cni-125279) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744/.minikube (perms=drwxr-xr-x)
	I0819 19:32:03.381647  445411 main.go:141] libmachine: (newest-cni-125279) DBG | Checking permissions on dir: /home
	I0819 19:32:03.381673  445411 main.go:141] libmachine: (newest-cni-125279) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744 (perms=drwxrwxr-x)
	I0819 19:32:03.381679  445411 main.go:141] libmachine: (newest-cni-125279) DBG | Skipping /home - not owner
	I0819 19:32:03.381694  445411 main.go:141] libmachine: (newest-cni-125279) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 19:32:03.381712  445411 main.go:141] libmachine: (newest-cni-125279) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 19:32:03.381734  445411 main.go:141] libmachine: (newest-cni-125279) Creating domain...
	I0819 19:32:03.382890  445411 main.go:141] libmachine: (newest-cni-125279) define libvirt domain using xml: 
	I0819 19:32:03.382908  445411 main.go:141] libmachine: (newest-cni-125279) <domain type='kvm'>
	I0819 19:32:03.382921  445411 main.go:141] libmachine: (newest-cni-125279)   <name>newest-cni-125279</name>
	I0819 19:32:03.382926  445411 main.go:141] libmachine: (newest-cni-125279)   <memory unit='MiB'>2200</memory>
	I0819 19:32:03.382932  445411 main.go:141] libmachine: (newest-cni-125279)   <vcpu>2</vcpu>
	I0819 19:32:03.382936  445411 main.go:141] libmachine: (newest-cni-125279)   <features>
	I0819 19:32:03.382941  445411 main.go:141] libmachine: (newest-cni-125279)     <acpi/>
	I0819 19:32:03.382945  445411 main.go:141] libmachine: (newest-cni-125279)     <apic/>
	I0819 19:32:03.382957  445411 main.go:141] libmachine: (newest-cni-125279)     <pae/>
	I0819 19:32:03.382961  445411 main.go:141] libmachine: (newest-cni-125279)     
	I0819 19:32:03.382966  445411 main.go:141] libmachine: (newest-cni-125279)   </features>
	I0819 19:32:03.382971  445411 main.go:141] libmachine: (newest-cni-125279)   <cpu mode='host-passthrough'>
	I0819 19:32:03.382976  445411 main.go:141] libmachine: (newest-cni-125279)   
	I0819 19:32:03.382980  445411 main.go:141] libmachine: (newest-cni-125279)   </cpu>
	I0819 19:32:03.382985  445411 main.go:141] libmachine: (newest-cni-125279)   <os>
	I0819 19:32:03.382989  445411 main.go:141] libmachine: (newest-cni-125279)     <type>hvm</type>
	I0819 19:32:03.383027  445411 main.go:141] libmachine: (newest-cni-125279)     <boot dev='cdrom'/>
	I0819 19:32:03.383054  445411 main.go:141] libmachine: (newest-cni-125279)     <boot dev='hd'/>
	I0819 19:32:03.383064  445411 main.go:141] libmachine: (newest-cni-125279)     <bootmenu enable='no'/>
	I0819 19:32:03.383078  445411 main.go:141] libmachine: (newest-cni-125279)   </os>
	I0819 19:32:03.383088  445411 main.go:141] libmachine: (newest-cni-125279)   <devices>
	I0819 19:32:03.383094  445411 main.go:141] libmachine: (newest-cni-125279)     <disk type='file' device='cdrom'>
	I0819 19:32:03.383124  445411 main.go:141] libmachine: (newest-cni-125279)       <source file='/home/jenkins/minikube-integration/19468-372744/.minikube/machines/newest-cni-125279/boot2docker.iso'/>
	I0819 19:32:03.383132  445411 main.go:141] libmachine: (newest-cni-125279)       <target dev='hdc' bus='scsi'/>
	I0819 19:32:03.383139  445411 main.go:141] libmachine: (newest-cni-125279)       <readonly/>
	I0819 19:32:03.383146  445411 main.go:141] libmachine: (newest-cni-125279)     </disk>
	I0819 19:32:03.383153  445411 main.go:141] libmachine: (newest-cni-125279)     <disk type='file' device='disk'>
	I0819 19:32:03.383164  445411 main.go:141] libmachine: (newest-cni-125279)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 19:32:03.383174  445411 main.go:141] libmachine: (newest-cni-125279)       <source file='/home/jenkins/minikube-integration/19468-372744/.minikube/machines/newest-cni-125279/newest-cni-125279.rawdisk'/>
	I0819 19:32:03.383185  445411 main.go:141] libmachine: (newest-cni-125279)       <target dev='hda' bus='virtio'/>
	I0819 19:32:03.383195  445411 main.go:141] libmachine: (newest-cni-125279)     </disk>
	I0819 19:32:03.383201  445411 main.go:141] libmachine: (newest-cni-125279)     <interface type='network'>
	I0819 19:32:03.383208  445411 main.go:141] libmachine: (newest-cni-125279)       <source network='mk-newest-cni-125279'/>
	I0819 19:32:03.383214  445411 main.go:141] libmachine: (newest-cni-125279)       <model type='virtio'/>
	I0819 19:32:03.383221  445411 main.go:141] libmachine: (newest-cni-125279)     </interface>
	I0819 19:32:03.383228  445411 main.go:141] libmachine: (newest-cni-125279)     <interface type='network'>
	I0819 19:32:03.383234  445411 main.go:141] libmachine: (newest-cni-125279)       <source network='default'/>
	I0819 19:32:03.383241  445411 main.go:141] libmachine: (newest-cni-125279)       <model type='virtio'/>
	I0819 19:32:03.383247  445411 main.go:141] libmachine: (newest-cni-125279)     </interface>
	I0819 19:32:03.383254  445411 main.go:141] libmachine: (newest-cni-125279)     <serial type='pty'>
	I0819 19:32:03.383276  445411 main.go:141] libmachine: (newest-cni-125279)       <target port='0'/>
	I0819 19:32:03.383298  445411 main.go:141] libmachine: (newest-cni-125279)     </serial>
	I0819 19:32:03.383312  445411 main.go:141] libmachine: (newest-cni-125279)     <console type='pty'>
	I0819 19:32:03.383324  445411 main.go:141] libmachine: (newest-cni-125279)       <target type='serial' port='0'/>
	I0819 19:32:03.383335  445411 main.go:141] libmachine: (newest-cni-125279)     </console>
	I0819 19:32:03.383343  445411 main.go:141] libmachine: (newest-cni-125279)     <rng model='virtio'>
	I0819 19:32:03.383354  445411 main.go:141] libmachine: (newest-cni-125279)       <backend model='random'>/dev/random</backend>
	I0819 19:32:03.383365  445411 main.go:141] libmachine: (newest-cni-125279)     </rng>
	I0819 19:32:03.383373  445411 main.go:141] libmachine: (newest-cni-125279)     
	I0819 19:32:03.383398  445411 main.go:141] libmachine: (newest-cni-125279)     
	I0819 19:32:03.383411  445411 main.go:141] libmachine: (newest-cni-125279)   </devices>
	I0819 19:32:03.383418  445411 main.go:141] libmachine: (newest-cni-125279) </domain>
	I0819 19:32:03.383432  445411 main.go:141] libmachine: (newest-cni-125279) 
	I0819 19:32:03.388101  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:e9:4f:ba in network default
	I0819 19:32:03.388676  445411 main.go:141] libmachine: (newest-cni-125279) Ensuring networks are active...
	I0819 19:32:03.388697  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:03.389462  445411 main.go:141] libmachine: (newest-cni-125279) Ensuring network default is active
	I0819 19:32:03.389784  445411 main.go:141] libmachine: (newest-cni-125279) Ensuring network mk-newest-cni-125279 is active
	I0819 19:32:03.390393  445411 main.go:141] libmachine: (newest-cni-125279) Getting domain xml...
	I0819 19:32:03.391229  445411 main.go:141] libmachine: (newest-cni-125279) Creating domain...
	I0819 19:32:04.679193  445411 main.go:141] libmachine: (newest-cni-125279) Waiting to get IP...
	I0819 19:32:04.680262  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:04.680726  445411 main.go:141] libmachine: (newest-cni-125279) DBG | unable to find current IP address of domain newest-cni-125279 in network mk-newest-cni-125279
	I0819 19:32:04.680778  445411 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:32:04.680693  445433 retry.go:31] will retry after 224.19994ms: waiting for machine to come up
	I0819 19:32:04.906192  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:04.906687  445411 main.go:141] libmachine: (newest-cni-125279) DBG | unable to find current IP address of domain newest-cni-125279 in network mk-newest-cni-125279
	I0819 19:32:04.906726  445411 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:32:04.906631  445433 retry.go:31] will retry after 368.917614ms: waiting for machine to come up
	I0819 19:32:05.277245  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:05.277768  445411 main.go:141] libmachine: (newest-cni-125279) DBG | unable to find current IP address of domain newest-cni-125279 in network mk-newest-cni-125279
	I0819 19:32:05.277796  445411 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:32:05.277717  445433 retry.go:31] will retry after 485.273357ms: waiting for machine to come up
	I0819 19:32:05.764588  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:05.765104  445411 main.go:141] libmachine: (newest-cni-125279) DBG | unable to find current IP address of domain newest-cni-125279 in network mk-newest-cni-125279
	I0819 19:32:05.765134  445411 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:32:05.765062  445433 retry.go:31] will retry after 428.947871ms: waiting for machine to come up
	I0819 19:32:06.195692  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:06.196191  445411 main.go:141] libmachine: (newest-cni-125279) DBG | unable to find current IP address of domain newest-cni-125279 in network mk-newest-cni-125279
	I0819 19:32:06.196225  445411 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:32:06.196132  445433 retry.go:31] will retry after 509.986197ms: waiting for machine to come up
	I0819 19:32:06.708134  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:06.708779  445411 main.go:141] libmachine: (newest-cni-125279) DBG | unable to find current IP address of domain newest-cni-125279 in network mk-newest-cni-125279
	I0819 19:32:06.708809  445411 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:32:06.708708  445433 retry.go:31] will retry after 722.569889ms: waiting for machine to come up
	I0819 19:32:07.433380  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:07.433795  445411 main.go:141] libmachine: (newest-cni-125279) DBG | unable to find current IP address of domain newest-cni-125279 in network mk-newest-cni-125279
	I0819 19:32:07.433825  445411 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:32:07.433723  445433 retry.go:31] will retry after 891.136923ms: waiting for machine to come up
	I0819 19:32:08.326855  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:08.327398  445411 main.go:141] libmachine: (newest-cni-125279) DBG | unable to find current IP address of domain newest-cni-125279 in network mk-newest-cni-125279
	I0819 19:32:08.327429  445411 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:32:08.327341  445433 retry.go:31] will retry after 896.894835ms: waiting for machine to come up
	I0819 19:32:09.226343  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:09.226809  445411 main.go:141] libmachine: (newest-cni-125279) DBG | unable to find current IP address of domain newest-cni-125279 in network mk-newest-cni-125279
	I0819 19:32:09.226841  445411 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:32:09.226758  445433 retry.go:31] will retry after 1.681643232s: waiting for machine to come up
	I0819 19:32:10.910683  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:10.911127  445411 main.go:141] libmachine: (newest-cni-125279) DBG | unable to find current IP address of domain newest-cni-125279 in network mk-newest-cni-125279
	I0819 19:32:10.911172  445411 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:32:10.911068  445433 retry.go:31] will retry after 2.135746694s: waiting for machine to come up
	I0819 19:32:13.048343  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:13.048838  445411 main.go:141] libmachine: (newest-cni-125279) DBG | unable to find current IP address of domain newest-cni-125279 in network mk-newest-cni-125279
	I0819 19:32:13.048872  445411 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:32:13.048778  445433 retry.go:31] will retry after 2.305017457s: waiting for machine to come up
	I0819 19:32:15.355145  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:15.355687  445411 main.go:141] libmachine: (newest-cni-125279) DBG | unable to find current IP address of domain newest-cni-125279 in network mk-newest-cni-125279
	I0819 19:32:15.355719  445411 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:32:15.355596  445433 retry.go:31] will retry after 2.545066173s: waiting for machine to come up
	I0819 19:32:17.902054  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:17.902474  445411 main.go:141] libmachine: (newest-cni-125279) DBG | unable to find current IP address of domain newest-cni-125279 in network mk-newest-cni-125279
	I0819 19:32:17.902500  445411 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:32:17.902429  445433 retry.go:31] will retry after 3.775157108s: waiting for machine to come up
	I0819 19:32:21.682467  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:21.682937  445411 main.go:141] libmachine: (newest-cni-125279) DBG | unable to find current IP address of domain newest-cni-125279 in network mk-newest-cni-125279
	I0819 19:32:21.682968  445411 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:32:21.682882  445433 retry.go:31] will retry after 4.681714962s: waiting for machine to come up
	I0819 19:32:26.369533  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:26.370079  445411 main.go:141] libmachine: (newest-cni-125279) Found IP for machine: 192.168.50.232
	I0819 19:32:26.370123  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has current primary IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:26.370133  445411 main.go:141] libmachine: (newest-cni-125279) Reserving static IP address...
	I0819 19:32:26.370514  445411 main.go:141] libmachine: (newest-cni-125279) DBG | unable to find host DHCP lease matching {name: "newest-cni-125279", mac: "52:54:00:65:45:fc", ip: "192.168.50.232"} in network mk-newest-cni-125279
	I0819 19:32:26.449045  445411 main.go:141] libmachine: (newest-cni-125279) DBG | Getting to WaitForSSH function...
	I0819 19:32:26.449080  445411 main.go:141] libmachine: (newest-cni-125279) Reserved static IP address: 192.168.50.232
	I0819 19:32:26.449095  445411 main.go:141] libmachine: (newest-cni-125279) Waiting for SSH to be available...
	I0819 19:32:26.451960  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:26.452361  445411 main.go:141] libmachine: (newest-cni-125279) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279
	I0819 19:32:26.452391  445411 main.go:141] libmachine: (newest-cni-125279) DBG | unable to find defined IP address of network mk-newest-cni-125279 interface with MAC address 52:54:00:65:45:fc
	I0819 19:32:26.452539  445411 main.go:141] libmachine: (newest-cni-125279) DBG | Using SSH client type: external
	I0819 19:32:26.452565  445411 main.go:141] libmachine: (newest-cni-125279) DBG | Using SSH private key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/newest-cni-125279/id_rsa (-rw-------)
	I0819 19:32:26.452608  445411 main.go:141] libmachine: (newest-cni-125279) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19468-372744/.minikube/machines/newest-cni-125279/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 19:32:26.452626  445411 main.go:141] libmachine: (newest-cni-125279) DBG | About to run SSH command:
	I0819 19:32:26.452643  445411 main.go:141] libmachine: (newest-cni-125279) DBG | exit 0
	I0819 19:32:26.456705  445411 main.go:141] libmachine: (newest-cni-125279) DBG | SSH cmd err, output: exit status 255: 
	I0819 19:32:26.456734  445411 main.go:141] libmachine: (newest-cni-125279) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0819 19:32:26.456746  445411 main.go:141] libmachine: (newest-cni-125279) DBG | command : exit 0
	I0819 19:32:26.456753  445411 main.go:141] libmachine: (newest-cni-125279) DBG | err     : exit status 255
	I0819 19:32:26.456765  445411 main.go:141] libmachine: (newest-cni-125279) DBG | output  : 
	I0819 19:32:29.458674  445411 main.go:141] libmachine: (newest-cni-125279) DBG | Getting to WaitForSSH function...
	I0819 19:32:29.461202  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:29.461680  445411 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:32:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:32:29.461712  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:29.461830  445411 main.go:141] libmachine: (newest-cni-125279) DBG | Using SSH client type: external
	I0819 19:32:29.461857  445411 main.go:141] libmachine: (newest-cni-125279) DBG | Using SSH private key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/newest-cni-125279/id_rsa (-rw-------)
	I0819 19:32:29.461917  445411 main.go:141] libmachine: (newest-cni-125279) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.232 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19468-372744/.minikube/machines/newest-cni-125279/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 19:32:29.461940  445411 main.go:141] libmachine: (newest-cni-125279) DBG | About to run SSH command:
	I0819 19:32:29.461953  445411 main.go:141] libmachine: (newest-cni-125279) DBG | exit 0
	I0819 19:32:29.583911  445411 main.go:141] libmachine: (newest-cni-125279) DBG | SSH cmd err, output: <nil>: 
	I0819 19:32:29.584154  445411 main.go:141] libmachine: (newest-cni-125279) KVM machine creation complete!
	I0819 19:32:29.584492  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetConfigRaw
	I0819 19:32:29.585123  445411 main.go:141] libmachine: (newest-cni-125279) Calling .DriverName
	I0819 19:32:29.585389  445411 main.go:141] libmachine: (newest-cni-125279) Calling .DriverName
	I0819 19:32:29.585598  445411 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 19:32:29.585613  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetState
	I0819 19:32:29.587203  445411 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 19:32:29.587238  445411 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 19:32:29.587247  445411 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 19:32:29.587260  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHHostname
	I0819 19:32:29.589944  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:29.590478  445411 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:32:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:32:29.590505  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:29.590641  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHPort
	I0819 19:32:29.590882  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:32:29.591071  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:32:29.591263  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHUsername
	I0819 19:32:29.591474  445411 main.go:141] libmachine: Using SSH client type: native
	I0819 19:32:29.591801  445411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.232 22 <nil> <nil>}
	I0819 19:32:29.591818  445411 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 19:32:29.691215  445411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:32:29.691248  445411 main.go:141] libmachine: Detecting the provisioner...
	I0819 19:32:29.691260  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHHostname
	I0819 19:32:29.694391  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:29.694727  445411 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:32:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:32:29.694770  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:29.694916  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHPort
	I0819 19:32:29.695132  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:32:29.695322  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:32:29.695488  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHUsername
	I0819 19:32:29.695612  445411 main.go:141] libmachine: Using SSH client type: native
	I0819 19:32:29.695821  445411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.232 22 <nil> <nil>}
	I0819 19:32:29.695836  445411 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 19:32:29.796928  445411 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 19:32:29.797037  445411 main.go:141] libmachine: found compatible host: buildroot
	I0819 19:32:29.797054  445411 main.go:141] libmachine: Provisioning with buildroot...
	I0819 19:32:29.797066  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetMachineName
	I0819 19:32:29.797366  445411 buildroot.go:166] provisioning hostname "newest-cni-125279"
	I0819 19:32:29.797403  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetMachineName
	I0819 19:32:29.797632  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHHostname
	I0819 19:32:29.800770  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:29.801208  445411 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:32:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:32:29.801234  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:29.801430  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHPort
	I0819 19:32:29.801626  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:32:29.801815  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:32:29.802008  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHUsername
	I0819 19:32:29.802219  445411 main.go:141] libmachine: Using SSH client type: native
	I0819 19:32:29.802470  445411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.232 22 <nil> <nil>}
	I0819 19:32:29.802492  445411 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-125279 && echo "newest-cni-125279" | sudo tee /etc/hostname
	I0819 19:32:29.914936  445411 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-125279
	
	I0819 19:32:29.914992  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHHostname
	I0819 19:32:29.917969  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:29.918378  445411 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:32:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:32:29.918409  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:29.918597  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHPort
	I0819 19:32:29.918809  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:32:29.919019  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:32:29.919218  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHUsername
	I0819 19:32:29.919449  445411 main.go:141] libmachine: Using SSH client type: native
	I0819 19:32:29.919650  445411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.232 22 <nil> <nil>}
	I0819 19:32:29.919689  445411 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-125279' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-125279/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-125279' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 19:32:30.037919  445411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:32:30.037960  445411 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19468-372744/.minikube CaCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19468-372744/.minikube}
	I0819 19:32:30.038027  445411 buildroot.go:174] setting up certificates
	I0819 19:32:30.038057  445411 provision.go:84] configureAuth start
	I0819 19:32:30.038078  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetMachineName
	I0819 19:32:30.038465  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetIP
	I0819 19:32:30.041517  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:30.041938  445411 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:32:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:32:30.041959  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:30.042125  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHHostname
	I0819 19:32:30.044600  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:30.044956  445411 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:32:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:32:30.044983  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:30.045163  445411 provision.go:143] copyHostCerts
	I0819 19:32:30.045241  445411 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem, removing ...
	I0819 19:32:30.045260  445411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 19:32:30.045354  445411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem (1123 bytes)
	I0819 19:32:30.045484  445411 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem, removing ...
	I0819 19:32:30.045497  445411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 19:32:30.045536  445411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem (1675 bytes)
	I0819 19:32:30.045646  445411 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem, removing ...
	I0819 19:32:30.045664  445411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 19:32:30.045694  445411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem (1082 bytes)
	I0819 19:32:30.045779  445411 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem org=jenkins.newest-cni-125279 san=[127.0.0.1 192.168.50.232 localhost minikube newest-cni-125279]
	I0819 19:32:30.126262  445411 provision.go:177] copyRemoteCerts
	I0819 19:32:30.126347  445411 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 19:32:30.126382  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHHostname
	I0819 19:32:30.129167  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:30.129464  445411 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:32:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:32:30.129498  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:30.129651  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHPort
	I0819 19:32:30.129892  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:32:30.130084  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHUsername
	I0819 19:32:30.130252  445411 sshutil.go:53] new ssh client: &{IP:192.168.50.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/newest-cni-125279/id_rsa Username:docker}
	I0819 19:32:30.214589  445411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 19:32:30.239778  445411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0819 19:32:30.268214  445411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 19:32:30.295326  445411 provision.go:87] duration metric: took 257.24866ms to configureAuth
	I0819 19:32:30.295359  445411 buildroot.go:189] setting minikube options for container-runtime
	I0819 19:32:30.295543  445411 config.go:182] Loaded profile config "newest-cni-125279": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:32:30.295622  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHHostname
	I0819 19:32:30.298361  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:30.298811  445411 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:32:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:32:30.298841  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:30.299069  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHPort
	I0819 19:32:30.299277  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:32:30.299475  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:32:30.299627  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHUsername
	I0819 19:32:30.299821  445411 main.go:141] libmachine: Using SSH client type: native
	I0819 19:32:30.299995  445411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.232 22 <nil> <nil>}
	I0819 19:32:30.300015  445411 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 19:32:30.572732  445411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 19:32:30.572787  445411 main.go:141] libmachine: Checking connection to Docker...
	I0819 19:32:30.572799  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetURL
	I0819 19:32:30.574408  445411 main.go:141] libmachine: (newest-cni-125279) DBG | Using libvirt version 6000000
	I0819 19:32:30.577257  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:30.577595  445411 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:32:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:32:30.577635  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:30.577748  445411 main.go:141] libmachine: Docker is up and running!
	I0819 19:32:30.577763  445411 main.go:141] libmachine: Reticulating splines...
	I0819 19:32:30.577771  445411 client.go:171] duration metric: took 28.162840157s to LocalClient.Create
	I0819 19:32:30.577792  445411 start.go:167] duration metric: took 28.162901607s to libmachine.API.Create "newest-cni-125279"
	I0819 19:32:30.577812  445411 start.go:293] postStartSetup for "newest-cni-125279" (driver="kvm2")
	I0819 19:32:30.577825  445411 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 19:32:30.577842  445411 main.go:141] libmachine: (newest-cni-125279) Calling .DriverName
	I0819 19:32:30.578092  445411 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 19:32:30.578115  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHHostname
	I0819 19:32:30.580377  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:30.580726  445411 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:32:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:32:30.580756  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:30.580904  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHPort
	I0819 19:32:30.581126  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:32:30.581304  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHUsername
	I0819 19:32:30.581460  445411 sshutil.go:53] new ssh client: &{IP:192.168.50.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/newest-cni-125279/id_rsa Username:docker}
	I0819 19:32:30.667321  445411 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 19:32:30.672199  445411 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 19:32:30.672247  445411 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/addons for local assets ...
	I0819 19:32:30.672316  445411 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/files for local assets ...
	I0819 19:32:30.672408  445411 filesync.go:149] local asset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> 3800092.pem in /etc/ssl/certs
	I0819 19:32:30.672537  445411 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 19:32:30.683683  445411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:32:30.707900  445411 start.go:296] duration metric: took 130.068757ms for postStartSetup
	I0819 19:32:30.707971  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetConfigRaw
	I0819 19:32:30.708588  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetIP
	I0819 19:32:30.711485  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:30.711934  445411 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:32:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:32:30.711975  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:30.712442  445411 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/newest-cni-125279/config.json ...
	I0819 19:32:30.712629  445411 start.go:128] duration metric: took 28.3165109s to createHost
	I0819 19:32:30.712666  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHHostname
	I0819 19:32:30.715033  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:30.715452  445411 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:32:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:32:30.715498  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:30.715655  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHPort
	I0819 19:32:30.715879  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:32:30.716044  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:32:30.716240  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHUsername
	I0819 19:32:30.716377  445411 main.go:141] libmachine: Using SSH client type: native
	I0819 19:32:30.716569  445411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.232 22 <nil> <nil>}
	I0819 19:32:30.716579  445411 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 19:32:30.816408  445411 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724095950.795085288
	
	I0819 19:32:30.816440  445411 fix.go:216] guest clock: 1724095950.795085288
	I0819 19:32:30.816450  445411 fix.go:229] Guest: 2024-08-19 19:32:30.795085288 +0000 UTC Remote: 2024-08-19 19:32:30.712653058 +0000 UTC m=+28.433473700 (delta=82.43223ms)
	I0819 19:32:30.816484  445411 fix.go:200] guest clock delta is within tolerance: 82.43223ms
	I0819 19:32:30.816495  445411 start.go:83] releasing machines lock for "newest-cni-125279", held for 28.420486595s
	I0819 19:32:30.816526  445411 main.go:141] libmachine: (newest-cni-125279) Calling .DriverName
	I0819 19:32:30.816873  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetIP
	I0819 19:32:30.819475  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:30.819798  445411 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:32:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:32:30.819824  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:30.819984  445411 main.go:141] libmachine: (newest-cni-125279) Calling .DriverName
	I0819 19:32:30.820624  445411 main.go:141] libmachine: (newest-cni-125279) Calling .DriverName
	I0819 19:32:30.820800  445411 main.go:141] libmachine: (newest-cni-125279) Calling .DriverName
	I0819 19:32:30.820900  445411 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 19:32:30.820951  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHHostname
	I0819 19:32:30.821046  445411 ssh_runner.go:195] Run: cat /version.json
	I0819 19:32:30.821075  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHHostname
	I0819 19:32:30.823707  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:30.824027  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:30.824060  445411 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:32:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:32:30.824083  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:30.824214  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHPort
	I0819 19:32:30.824402  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:32:30.824413  445411 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:32:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:32:30.824435  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:30.824618  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHPort
	I0819 19:32:30.824628  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHUsername
	I0819 19:32:30.824782  445411 sshutil.go:53] new ssh client: &{IP:192.168.50.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/newest-cni-125279/id_rsa Username:docker}
	I0819 19:32:30.824845  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:32:30.825010  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHUsername
	I0819 19:32:30.825175  445411 sshutil.go:53] new ssh client: &{IP:192.168.50.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/newest-cni-125279/id_rsa Username:docker}
	I0819 19:32:30.897078  445411 ssh_runner.go:195] Run: systemctl --version
	I0819 19:32:30.924103  445411 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 19:32:31.098786  445411 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 19:32:31.105730  445411 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 19:32:31.105802  445411 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 19:32:31.124297  445411 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 19:32:31.124330  445411 start.go:495] detecting cgroup driver to use...
	I0819 19:32:31.124435  445411 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 19:32:31.142781  445411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 19:32:31.158044  445411 docker.go:217] disabling cri-docker service (if available) ...
	I0819 19:32:31.158104  445411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 19:32:31.172659  445411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 19:32:31.187214  445411 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 19:32:31.299769  445411 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 19:32:31.461533  445411 docker.go:233] disabling docker service ...
	I0819 19:32:31.461615  445411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 19:32:31.476486  445411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 19:32:31.490797  445411 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 19:32:31.608782  445411 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 19:32:31.742865  445411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 19:32:31.758761  445411 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 19:32:31.778934  445411 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 19:32:31.778996  445411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:32:31.790652  445411 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 19:32:31.790725  445411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:32:31.802633  445411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:32:31.813033  445411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:32:31.826216  445411 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 19:32:31.836768  445411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:32:31.848190  445411 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:32:31.865498  445411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:32:31.875948  445411 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 19:32:31.886230  445411 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 19:32:31.886291  445411 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 19:32:31.900798  445411 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 19:32:31.910083  445411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:32:32.042525  445411 ssh_runner.go:195] Run: sudo systemctl restart crio
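	The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image registry.k8s.io/pause:3.10, cgroup_manager "cgroupfs", conmon_cgroup "pod", and the net.ipv4.ip_unprivileged_port_start=0 default sysctl) before CRI-O is restarted. A quick way to spot-check the result on the guest, reusing only paths that already appear in this log (a hedged manual check, not part of the test run), would be roughly:

	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	        /etc/crio/crio.conf.d/02-crio.conf
	    sudo systemctl is-active crio   # expected to report "active" after the restart above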
	I0819 19:32:32.194892  445411 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 19:32:32.194987  445411 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 19:32:32.200584  445411 start.go:563] Will wait 60s for crictl version
	I0819 19:32:32.200664  445411 ssh_runner.go:195] Run: which crictl
	I0819 19:32:32.204682  445411 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 19:32:32.252098  445411 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 19:32:32.252211  445411 ssh_runner.go:195] Run: crio --version
	I0819 19:32:32.285365  445411 ssh_runner.go:195] Run: crio --version
	I0819 19:32:32.318835  445411 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 19:32:32.320145  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetIP
	I0819 19:32:32.322889  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:32.323215  445411 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:32:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:32:32.323238  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:32.323423  445411 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0819 19:32:32.327759  445411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:32:32.341796  445411 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0819 19:32:32.343474  445411 kubeadm.go:883] updating cluster {Name:newest-cni-125279 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-125279 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.232 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 19:32:32.343607  445411 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 19:32:32.343660  445411 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:32:32.378342  445411 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 19:32:32.378408  445411 ssh_runner.go:195] Run: which lz4
	I0819 19:32:32.382998  445411 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 19:32:32.387231  445411 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 19:32:32.387267  445411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 19:32:33.769032  445411 crio.go:462] duration metric: took 1.386077329s to copy over tarball
	I0819 19:32:33.769135  445411 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 19:32:36.051789  445411 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.282610243s)
	I0819 19:32:36.051829  445411 crio.go:469] duration metric: took 2.282754263s to extract the tarball
	I0819 19:32:36.051867  445411 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 19:32:36.093671  445411 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:32:36.143521  445411 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 19:32:36.143544  445411 cache_images.go:84] Images are preloaded, skipping loading
	I0819 19:32:36.143551  445411 kubeadm.go:934] updating node { 192.168.50.232 8443 v1.31.0 crio true true} ...
	I0819 19:32:36.143727  445411 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-125279 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.232
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:newest-cni-125279 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 19:32:36.143814  445411 ssh_runner.go:195] Run: crio config
	I0819 19:32:36.201604  445411 cni.go:84] Creating CNI manager for ""
	I0819 19:32:36.201627  445411 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:32:36.201638  445411 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0819 19:32:36.201661  445411 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.50.232 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-125279 NodeName:newest-cni-125279 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.232"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.232 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 19:32:36.201806  445411 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.232
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-125279"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.232
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.232"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 19:32:36.201864  445411 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 19:32:36.212255  445411 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 19:32:36.212331  445411 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 19:32:36.222503  445411 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0819 19:32:36.241339  445411 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 19:32:36.258749  445411 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2285 bytes)
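	The kubeadm.yaml.new copied here is the rendered form of the kubeadm config printed above. As a sketch only (minikube performs the real init later in this log), the same file could be exercised without changing node state via kubeadm's dry-run mode, using the binaries directory already shown above:

	    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm init \
	        --config /var/tmp/minikube/kubeadm.yaml.new --dry-run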
	I0819 19:32:36.275822  445411 ssh_runner.go:195] Run: grep 192.168.50.232	control-plane.minikube.internal$ /etc/hosts
	I0819 19:32:36.280876  445411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.232	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:32:36.293831  445411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:32:36.404880  445411 ssh_runner.go:195] Run: sudo systemctl start kubelet
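	At this point the kubelet.service unit and the 10-kubeadm.conf drop-in written above have been loaded by the daemon-reload. A hedged way to confirm what systemd actually picked up (not something the test runs) is:

	    systemctl cat kubelet   # shows kubelet.service plus the 10-kubeadm.conf drop-in with the ExecStart printed above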
	I0819 19:32:36.421983  445411 certs.go:68] Setting up /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/newest-cni-125279 for IP: 192.168.50.232
	I0819 19:32:36.422014  445411 certs.go:194] generating shared ca certs ...
	I0819 19:32:36.422036  445411 certs.go:226] acquiring lock for ca certs: {Name:mk639e03f593e0bccac045f6e9f5ba3b96cc81e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:32:36.422236  445411 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key
	I0819 19:32:36.422290  445411 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key
	I0819 19:32:36.422302  445411 certs.go:256] generating profile certs ...
	I0819 19:32:36.422386  445411 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/newest-cni-125279/client.key
	I0819 19:32:36.422414  445411 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/newest-cni-125279/client.crt with IP's: []
	I0819 19:32:36.522076  445411 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/newest-cni-125279/client.crt ...
	I0819 19:32:36.522105  445411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/newest-cni-125279/client.crt: {Name:mk442e757a85e0cbf3d7208ab2e44af263b933a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:32:36.522303  445411 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/newest-cni-125279/client.key ...
	I0819 19:32:36.522323  445411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/newest-cni-125279/client.key: {Name:mk5fd4139f16d611ab233bc9f93cbc4d1a8f1d48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:32:36.522451  445411 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/newest-cni-125279/apiserver.key.84e3bbbc
	I0819 19:32:36.522468  445411 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/newest-cni-125279/apiserver.crt.84e3bbbc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.232]
	I0819 19:32:36.906762  445411 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/newest-cni-125279/apiserver.crt.84e3bbbc ...
	I0819 19:32:36.906793  445411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/newest-cni-125279/apiserver.crt.84e3bbbc: {Name:mk17975f33b588b4c338f15f79a4cfcf2f2b38a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:32:36.906996  445411 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/newest-cni-125279/apiserver.key.84e3bbbc ...
	I0819 19:32:36.907014  445411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/newest-cni-125279/apiserver.key.84e3bbbc: {Name:mka0c23eccbc53741bef61d6fb72ea845c1a566f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:32:36.907117  445411 certs.go:381] copying /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/newest-cni-125279/apiserver.crt.84e3bbbc -> /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/newest-cni-125279/apiserver.crt
	I0819 19:32:36.907201  445411 certs.go:385] copying /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/newest-cni-125279/apiserver.key.84e3bbbc -> /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/newest-cni-125279/apiserver.key
	I0819 19:32:36.907257  445411 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/newest-cni-125279/proxy-client.key
	I0819 19:32:36.907273  445411 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/newest-cni-125279/proxy-client.crt with IP's: []
	I0819 19:32:37.010993  445411 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/newest-cni-125279/proxy-client.crt ...
	I0819 19:32:37.011023  445411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/newest-cni-125279/proxy-client.crt: {Name:mk2d9946398feb1a13005ef73ba011aa62956e5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:32:37.011215  445411 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/newest-cni-125279/proxy-client.key ...
	I0819 19:32:37.011233  445411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/newest-cni-125279/proxy-client.key: {Name:mk5cea05d69aa800b4c7091992ee7e71f537e940 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:32:37.011458  445411 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem (1338 bytes)
	W0819 19:32:37.011504  445411 certs.go:480] ignoring /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009_empty.pem, impossibly tiny 0 bytes
	I0819 19:32:37.011517  445411 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 19:32:37.011540  445411 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem (1082 bytes)
	I0819 19:32:37.011561  445411 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem (1123 bytes)
	I0819 19:32:37.011581  445411 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem (1675 bytes)
	I0819 19:32:37.011618  445411 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:32:37.012254  445411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 19:32:37.041652  445411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 19:32:37.066621  445411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 19:32:37.090929  445411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 19:32:37.118074  445411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/newest-cni-125279/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 19:32:37.144766  445411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/newest-cni-125279/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 19:32:37.172508  445411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/newest-cni-125279/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 19:32:37.200779  445411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/newest-cni-125279/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 19:32:37.231338  445411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 19:32:37.263164  445411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem --> /usr/share/ca-certificates/380009.pem (1338 bytes)
	I0819 19:32:37.289754  445411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /usr/share/ca-certificates/3800092.pem (1708 bytes)
	I0819 19:32:37.314709  445411 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 19:32:37.331526  445411 ssh_runner.go:195] Run: openssl version
	I0819 19:32:37.337209  445411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 19:32:37.347993  445411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:32:37.352712  445411 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:32:37.352777  445411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:32:37.358779  445411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 19:32:37.370075  445411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/380009.pem && ln -fs /usr/share/ca-certificates/380009.pem /etc/ssl/certs/380009.pem"
	I0819 19:32:37.385161  445411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/380009.pem
	I0819 19:32:37.390922  445411 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:56 /usr/share/ca-certificates/380009.pem
	I0819 19:32:37.390981  445411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/380009.pem
	I0819 19:32:37.397269  445411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/380009.pem /etc/ssl/certs/51391683.0"
	I0819 19:32:37.407980  445411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3800092.pem && ln -fs /usr/share/ca-certificates/3800092.pem /etc/ssl/certs/3800092.pem"
	I0819 19:32:37.418871  445411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3800092.pem
	I0819 19:32:37.423895  445411 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:56 /usr/share/ca-certificates/3800092.pem
	I0819 19:32:37.423963  445411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3800092.pem
	I0819 19:32:37.429684  445411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3800092.pem /etc/ssl/certs/3ec20f2e.0"
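	The symlink names used in this block (b5213941.0, 51391683.0, 3ec20f2e.0) are the OpenSSL subject hashes of the corresponding certificates, which is why each ln -fs above is preceded by an openssl x509 -hash call. For example:

	    # prints the subject hash (e.g. b5213941 for minikubeCA.pem), i.e. the
	    # basename of the /etc/ssl/certs/<hash>.0 symlink created above
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem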
	I0819 19:32:37.440633  445411 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 19:32:37.444830  445411 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 19:32:37.444898  445411 kubeadm.go:392] StartCluster: {Name:newest-cni-125279 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-125279 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.232 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:32:37.445003  445411 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 19:32:37.445093  445411 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:32:37.489974  445411 cri.go:89] found id: ""
	I0819 19:32:37.490066  445411 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 19:32:37.500445  445411 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 19:32:37.510291  445411 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:32:37.520444  445411 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:32:37.520467  445411 kubeadm.go:157] found existing configuration files:
	
	I0819 19:32:37.520522  445411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 19:32:37.529784  445411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:32:37.529863  445411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:32:37.539857  445411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 19:32:37.549256  445411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:32:37.549315  445411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:32:37.561008  445411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 19:32:37.570766  445411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:32:37.570860  445411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:32:37.580738  445411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 19:32:37.590056  445411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:32:37.590137  445411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 19:32:37.599965  445411 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 19:32:37.718221  445411 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 19:32:37.718300  445411 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 19:32:37.821264  445411 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 19:32:37.821428  445411 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 19:32:37.821569  445411 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 19:32:37.832520  445411 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 19:32:37.997432  445411 out.go:235]   - Generating certificates and keys ...
	I0819 19:32:37.997562  445411 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 19:32:37.997654  445411 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 19:32:37.997785  445411 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 19:32:38.176509  445411 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 19:32:38.461557  445411 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 19:32:38.675119  445411 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 19:32:39.024425  445411 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 19:32:39.024918  445411 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-125279] and IPs [192.168.50.232 127.0.0.1 ::1]
	I0819 19:32:39.153600  445411 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 19:32:39.153866  445411 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-125279] and IPs [192.168.50.232 127.0.0.1 ::1]
	I0819 19:32:39.457449  445411 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 19:32:39.742467  445411 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 19:32:39.828764  445411 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 19:32:39.829090  445411 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 19:32:40.077423  445411 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 19:32:40.498250  445411 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 19:32:40.770690  445411 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 19:32:40.907045  445411 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 19:32:41.035872  445411 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 19:32:41.036703  445411 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 19:32:41.040503  445411 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 19:32:41.042161  445411 out.go:235]   - Booting up control plane ...
	I0819 19:32:41.042281  445411 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 19:32:41.042403  445411 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 19:32:41.043123  445411 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 19:32:41.062644  445411 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 19:32:41.068879  445411 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 19:32:41.068961  445411 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 19:32:41.200275  445411 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 19:32:41.200419  445411 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 19:32:41.701362  445411 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.298158ms
	I0819 19:32:41.701502  445411 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 19:32:46.704009  445411 kubeadm.go:310] [api-check] The API server is healthy after 5.003164799s
	I0819 19:32:46.715089  445411 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 19:32:46.733939  445411 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 19:32:46.767156  445411 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 19:32:46.767431  445411 kubeadm.go:310] [mark-control-plane] Marking the node newest-cni-125279 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 19:32:46.784334  445411 kubeadm.go:310] [bootstrap-token] Using token: 0hbpr6.pkupeh1068ab9qgq
	I0819 19:32:46.785818  445411 out.go:235]   - Configuring RBAC rules ...
	I0819 19:32:46.785998  445411 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 19:32:46.792924  445411 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 19:32:46.809319  445411 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 19:32:46.814326  445411 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 19:32:46.818015  445411 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 19:32:46.821228  445411 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 19:32:47.109831  445411 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 19:32:47.545814  445411 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 19:32:48.109174  445411 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 19:32:48.110164  445411 kubeadm.go:310] 
	I0819 19:32:48.110257  445411 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 19:32:48.110266  445411 kubeadm.go:310] 
	I0819 19:32:48.110395  445411 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 19:32:48.110403  445411 kubeadm.go:310] 
	I0819 19:32:48.110436  445411 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 19:32:48.110535  445411 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 19:32:48.110622  445411 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 19:32:48.110630  445411 kubeadm.go:310] 
	I0819 19:32:48.110700  445411 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 19:32:48.110713  445411 kubeadm.go:310] 
	I0819 19:32:48.110790  445411 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 19:32:48.110798  445411 kubeadm.go:310] 
	I0819 19:32:48.110876  445411 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 19:32:48.110969  445411 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 19:32:48.111061  445411 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 19:32:48.111072  445411 kubeadm.go:310] 
	I0819 19:32:48.111191  445411 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 19:32:48.111330  445411 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 19:32:48.111356  445411 kubeadm.go:310] 
	I0819 19:32:48.111474  445411 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 0hbpr6.pkupeh1068ab9qgq \
	I0819 19:32:48.111611  445411 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3fcbd90565c5acbc36a47b2db682cb22dce9b172c9bf3af21e506ebb67608039 \
	I0819 19:32:48.111643  445411 kubeadm.go:310] 	--control-plane 
	I0819 19:32:48.111654  445411 kubeadm.go:310] 
	I0819 19:32:48.111769  445411 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 19:32:48.111777  445411 kubeadm.go:310] 
	I0819 19:32:48.111883  445411 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 0hbpr6.pkupeh1068ab9qgq \
	I0819 19:32:48.112033  445411 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3fcbd90565c5acbc36a47b2db682cb22dce9b172c9bf3af21e506ebb67608039 
	I0819 19:32:48.112676  445411 kubeadm.go:310] W0819 19:32:37.700404     846 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 19:32:48.113078  445411 kubeadm.go:310] W0819 19:32:37.701316     846 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 19:32:48.113267  445411 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
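	The --discovery-token-ca-cert-hash value in the join commands above is a SHA-256 over the cluster CA's public key. A sketch of recomputing it on this node, using the certificatesDir from the kubeadm config earlier in the log (this is the standard kubeadm procedure, not something the test itself runs):

	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'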
	I0819 19:32:48.113308  445411 cni.go:84] Creating CNI manager for ""
	I0819 19:32:48.113321  445411 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:32:48.115018  445411 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 19:32:48.116586  445411 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 19:32:48.127600  445411 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
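	The bridge CNI step above boils down to dropping a conflist into /etc/cni/net.d so kubelet can wire up pod networking for the crio runtime. A minimal Go sketch of that write follows; the JSON here is a generic bridge+portmap example for illustration only, not the exact 496-byte file minikube generated, and the subnet is a placeholder.

	// Sketch: write a bridge CNI conflist (illustrative content, not minikube's exact file).
	package main

	import (
		"fmt"
		"os"
	)

	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}`

	func main() {
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
			fmt.Println("write conflist:", err)
			return
		}
		fmt.Println("wrote /etc/cni/net.d/1-k8s.conflist")
	}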
	I0819 19:32:48.151136  445411 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 19:32:48.151198  445411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:32:48.151216  445411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-125279 minikube.k8s.io/updated_at=2024_08_19T19_32_48_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411 minikube.k8s.io/name=newest-cni-125279 minikube.k8s.io/primary=true
	I0819 19:32:48.386782  445411 ops.go:34] apiserver oom_adj: -16
	I0819 19:32:48.386825  445411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:32:48.887825  445411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:32:49.386791  445411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:32:49.887469  445411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:32:50.386860  445411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:32:50.887795  445411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:32:51.387618  445411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:32:51.887419  445411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:32:51.999510  445411 kubeadm.go:1113] duration metric: took 3.848362058s to wait for elevateKubeSystemPrivileges
	I0819 19:32:51.999550  445411 kubeadm.go:394] duration metric: took 14.554657005s to StartCluster
	I0819 19:32:51.999576  445411 settings.go:142] acquiring lock: {Name:mk396fcf49a1d0e69583cf37ff3c819e37118163 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:32:51.999660  445411 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 19:32:52.001419  445411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/kubeconfig: {Name:mk8e7b4e1bb7da665111d2acd83eb48882c66853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:32:52.001689  445411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 19:32:52.001727  445411 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.232 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 19:32:52.001838  445411 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 19:32:52.001899  445411 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-125279"
	I0819 19:32:52.001925  445411 addons.go:69] Setting default-storageclass=true in profile "newest-cni-125279"
	I0819 19:32:52.001948  445411 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-125279"
	I0819 19:32:52.001956  445411 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-125279"
	I0819 19:32:52.001987  445411 host.go:66] Checking if "newest-cni-125279" exists ...
	I0819 19:32:52.002025  445411 config.go:182] Loaded profile config "newest-cni-125279": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:32:52.002424  445411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:32:52.002424  445411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:32:52.002457  445411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:32:52.002457  445411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:32:52.003581  445411 out.go:177] * Verifying Kubernetes components...
	I0819 19:32:52.005179  445411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:32:52.018377  445411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39055
	I0819 19:32:52.018953  445411 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:32:52.019570  445411 main.go:141] libmachine: Using API Version  1
	I0819 19:32:52.019597  445411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:32:52.020000  445411 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:32:52.020220  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetState
	I0819 19:32:52.023257  445411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40351
	I0819 19:32:52.023804  445411 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:32:52.024333  445411 main.go:141] libmachine: Using API Version  1
	I0819 19:32:52.024353  445411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:32:52.024700  445411 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:32:52.024718  445411 addons.go:234] Setting addon default-storageclass=true in "newest-cni-125279"
	I0819 19:32:52.024786  445411 host.go:66] Checking if "newest-cni-125279" exists ...
	I0819 19:32:52.025113  445411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:32:52.025150  445411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:32:52.025204  445411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:32:52.025256  445411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:32:52.041391  445411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44677
	I0819 19:32:52.041892  445411 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:32:52.042637  445411 main.go:141] libmachine: Using API Version  1
	I0819 19:32:52.042664  445411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:32:52.043009  445411 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:32:52.043232  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetState
	I0819 19:32:52.045052  445411 main.go:141] libmachine: (newest-cni-125279) Calling .DriverName
	I0819 19:32:52.045464  445411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36043
	I0819 19:32:52.045907  445411 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:32:52.046455  445411 main.go:141] libmachine: Using API Version  1
	I0819 19:32:52.046476  445411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:32:52.046792  445411 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:32:52.047095  445411 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:32:52.047346  445411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:32:52.047393  445411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:32:52.048480  445411 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:32:52.048499  445411 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 19:32:52.048515  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHHostname
	I0819 19:32:52.052146  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:52.052705  445411 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:32:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:32:52.052735  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:52.052937  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHPort
	I0819 19:32:52.053163  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:32:52.053346  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHUsername
	I0819 19:32:52.053602  445411 sshutil.go:53] new ssh client: &{IP:192.168.50.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/newest-cni-125279/id_rsa Username:docker}
	I0819 19:32:52.064498  445411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39297
	I0819 19:32:52.064863  445411 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:32:52.065445  445411 main.go:141] libmachine: Using API Version  1
	I0819 19:32:52.065473  445411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:32:52.065804  445411 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:32:52.066003  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetState
	I0819 19:32:52.067753  445411 main.go:141] libmachine: (newest-cni-125279) Calling .DriverName
	I0819 19:32:52.067975  445411 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 19:32:52.067992  445411 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 19:32:52.068012  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHHostname
	I0819 19:32:52.071016  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:52.071486  445411 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:32:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:32:52.071515  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:52.071754  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHPort
	I0819 19:32:52.071956  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:32:52.072123  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHUsername
	I0819 19:32:52.072271  445411 sshutil.go:53] new ssh client: &{IP:192.168.50.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/newest-cni-125279/id_rsa Username:docker}
	I0819 19:32:52.201561  445411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0819 19:32:52.249990  445411 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:32:52.446606  445411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:32:52.472734  445411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 19:32:52.824555  445411 start.go:971] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
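	The replace command a few lines above splices a hosts block (plus a log directive ahead of errors) into the stock CoreDNS Corefile, which is what the "host record injected" line confirms. Based on that sed expression, the affected part of the Corefile ends up looking roughly like this (surrounding directives omitted):

	        log
	        errors
	        ...
	        hosts {
	           192.168.50.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf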
	I0819 19:32:52.826442  445411 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:32:52.826506  445411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:32:53.333371  445411 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-125279" context rescaled to 1 replicas
	I0819 19:32:53.493888  445411 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.047222109s)
	I0819 19:32:53.493957  445411 main.go:141] libmachine: Making call to close driver server
	I0819 19:32:53.493955  445411 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.021183426s)
	I0819 19:32:53.493973  445411 main.go:141] libmachine: (newest-cni-125279) Calling .Close
	I0819 19:32:53.494007  445411 main.go:141] libmachine: Making call to close driver server
	I0819 19:32:53.494020  445411 main.go:141] libmachine: (newest-cni-125279) Calling .Close
	I0819 19:32:53.494056  445411 api_server.go:72] duration metric: took 1.492280363s to wait for apiserver process to appear ...
	I0819 19:32:53.494078  445411 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:32:53.494129  445411 api_server.go:253] Checking apiserver healthz at https://192.168.50.232:8443/healthz ...
	I0819 19:32:53.494348  445411 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:32:53.494404  445411 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:32:53.494429  445411 main.go:141] libmachine: Making call to close driver server
	I0819 19:32:53.494449  445411 main.go:141] libmachine: (newest-cni-125279) Calling .Close
	I0819 19:32:53.494468  445411 main.go:141] libmachine: (newest-cni-125279) DBG | Closing plugin on server side
	I0819 19:32:53.494434  445411 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:32:53.494551  445411 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:32:53.494560  445411 main.go:141] libmachine: Making call to close driver server
	I0819 19:32:53.494583  445411 main.go:141] libmachine: (newest-cni-125279) Calling .Close
	I0819 19:32:53.494786  445411 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:32:53.494830  445411 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:32:53.496403  445411 main.go:141] libmachine: (newest-cni-125279) DBG | Closing plugin on server side
	I0819 19:32:53.496469  445411 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:32:53.498797  445411 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:32:53.521300  445411 api_server.go:279] https://192.168.50.232:8443/healthz returned 200:
	ok
	I0819 19:32:53.522886  445411 api_server.go:141] control plane version: v1.31.0
	I0819 19:32:53.522918  445411 api_server.go:131] duration metric: took 28.832172ms to wait for apiserver health ...
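	The healthz wait logged above is a simple poll of the apiserver's /healthz endpoint until it answers 200 with body "ok". A minimal Go sketch of that loop, using the endpoint from the log; the timeout and the InsecureSkipVerify shortcut are assumptions for illustration, whereas the real client trusts the cluster CA.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns 200 "ok" or the timeout expires.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.50.232:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("apiserver healthz: ok")
	}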
	I0819 19:32:53.522931  445411 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:32:53.530254  445411 main.go:141] libmachine: Making call to close driver server
	I0819 19:32:53.530286  445411 main.go:141] libmachine: (newest-cni-125279) Calling .Close
	I0819 19:32:53.530616  445411 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:32:53.530668  445411 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:32:53.530651  445411 main.go:141] libmachine: (newest-cni-125279) DBG | Closing plugin on server side
	I0819 19:32:53.532645  445411 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0819 19:32:53.534453  445411 addons.go:510] duration metric: took 1.532620573s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0819 19:32:53.541174  445411 system_pods.go:59] 8 kube-system pods found
	I0819 19:32:53.541205  445411 system_pods.go:61] "coredns-6f6b679f8f-dcvb8" [3d0efe89-70c3-43c2-9504-c29339089833] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 19:32:53.541212  445411 system_pods.go:61] "coredns-6f6b679f8f-djk2g" [e9aba6e2-e835-41c6-bd63-7ac084c9546f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 19:32:53.541218  445411 system_pods.go:61] "etcd-newest-cni-125279" [b094a90f-a524-48fe-9401-3865e147c3a9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 19:32:53.541227  445411 system_pods.go:61] "kube-apiserver-newest-cni-125279" [88785c36-87aa-48da-b7e3-75ddcf969dfa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 19:32:53.541233  445411 system_pods.go:61] "kube-controller-manager-newest-cni-125279" [9444e4cc-f675-4201-9ed3-8a69fa70a3cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 19:32:53.541239  445411 system_pods.go:61] "kube-proxy-df7d9" [4e056f03-fc39-4070-8192-1ec53669bc43] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0819 19:32:53.541245  445411 system_pods.go:61] "kube-scheduler-newest-cni-125279" [0fa163ed-419b-4a4e-81d2-194ff57e91d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 19:32:53.541252  445411 system_pods.go:61] "storage-provisioner" [d97409ec-ee3e-40a0-9054-e0fce384047a] Pending
	I0819 19:32:53.541259  445411 system_pods.go:74] duration metric: took 18.321098ms to wait for pod list to return data ...
	I0819 19:32:53.541269  445411 default_sa.go:34] waiting for default service account to be created ...
	I0819 19:32:53.553389  445411 default_sa.go:45] found service account: "default"
	I0819 19:32:53.553415  445411 default_sa.go:55] duration metric: took 12.140536ms for default service account to be created ...
	I0819 19:32:53.553429  445411 kubeadm.go:582] duration metric: took 1.551669673s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0819 19:32:53.553446  445411 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:32:53.565856  445411 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:32:53.565904  445411 node_conditions.go:123] node cpu capacity is 2
	I0819 19:32:53.565918  445411 node_conditions.go:105] duration metric: took 12.465483ms to run NodePressure ...
	I0819 19:32:53.565932  445411 start.go:241] waiting for startup goroutines ...
	I0819 19:32:53.565941  445411 start.go:246] waiting for cluster config update ...
	I0819 19:32:53.565956  445411 start.go:255] writing updated cluster config ...
	I0819 19:32:53.566236  445411 ssh_runner.go:195] Run: rm -f paused
	I0819 19:32:53.640713  445411 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 19:32:53.642549  445411 out.go:177] * Done! kubectl is now configured to use "newest-cni-125279" cluster and "default" namespace by default
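	With the profile started and the kubeconfig written, a quick sanity check is to run kubectl against the new context, which is roughly what the harness does next. A small Go sketch that shells out to kubectl; the context name is taken from the log, everything else is illustrative.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// List all pods in the freshly started profile's context.
		out, err := exec.Command("kubectl", "--context", "newest-cni-125279", "get", "pods", "-A").CombinedOutput()
		if err != nil {
			fmt.Printf("kubectl failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("%s", out)
	}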
	
	
	==> CRI-O <==
	Aug 19 19:33:01 embed-certs-024748 crio[729]: time="2024-08-19 19:33:01.815984411Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095981815925801,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6ed38206-01a7-48aa-aaed-a7d5cc2627c0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:33:01 embed-certs-024748 crio[729]: time="2024-08-19 19:33:01.816702118Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b0e57e3c-853e-4a18-9995-6848426ca613 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:33:01 embed-certs-024748 crio[729]: time="2024-08-19 19:33:01.816755767Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b0e57e3c-853e-4a18-9995-6848426ca613 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:33:01 embed-certs-024748 crio[729]: time="2024-08-19 19:33:01.816947884Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8,PodSandboxId:2cd56c89cb3500385d16c5b82561348e2422ac59ce004cda825f81be1d188ece,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724094792920726029,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7acb6ce1-21b6-4cdd-a5cb-76d694fc0a38,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89e69d6f405cec355f5cd65f38a963570166553b8598f4fca5b73a80d437338d,PodSandboxId:bf6fc22a7831f2da0f48530f74acaeb6bd79a7a1af15d958a29a142868066ff6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724094771757901345,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5ea66261-4ba9-4b4c-9d2f-4ad3490d3ed5,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f,PodSandboxId:0c91d6c776a7fded12e3aaa9edf57d7888401809c72d077dc752208355bfb3cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724094768498234032,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-7ww4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbde00d4-6027-4d8d-b51e-bd68915da166,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338,PodSandboxId:472436dd2272fa86a82a775f24e7cf1ddccabcfd91d314c0adba450bd1bcb6c0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724094762648038232,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bmmbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f77f152-f5f4-40f6-9
632-1eaa36b9ea31,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6,PodSandboxId:2cd56c89cb3500385d16c5b82561348e2422ac59ce004cda825f81be1d188ece,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724094762645492196,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7acb6ce1-21b6-4cdd-a5cb-76d694fc0a
38,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22,PodSandboxId:f276ebca5e26f21d36a567d969915505ddf60b7ea37d9c8b78d529962a2fcc8d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724094757719644043,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-024748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e50e02ebe8c4a08870ffac68ea5d2832,},Annotat
ions:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071,PodSandboxId:0c32a47af88ab983143fe824a6c65ca5175d816ef2a93f2233540e92436fbae4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724094757709229899,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-024748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b58db56f9002dd73de08465fd3a
06c18,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a,PodSandboxId:9ecf2a88c0af30fddbde514ebf4371ab2edb96b5b3d009b04c544d1fecea9381,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724094757706929424,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-024748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfe3830de5cba7ee0cea7d338361cf28,},Anno
tations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672,PodSandboxId:f6cd7683df1d2eace86e8ace9e6f78d5db7173e03f1f652874fa0a76909a253c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724094757704514233,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-024748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ff95a4cad5e2fe07b2e8f0bc0f26a77,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b0e57e3c-853e-4a18-9995-6848426ca613 name=/runtime.v1.RuntimeService/ListContainers
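	The CRI-O debug entries in this excerpt are ordinary CRI Version/ImageFsInfo/ListContainers calls arriving over the runtime socket; the same information can be queried by hand with crictl. A small Go sketch that shells out to crictl against the default CRI-O endpoint, purely for illustration:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Query runtime version and the full container list, matching the calls seen in the log.
		for _, args := range [][]string{
			{"--runtime-endpoint", "unix:///var/run/crio/crio.sock", "version"},
			{"--runtime-endpoint", "unix:///var/run/crio/crio.sock", "ps", "-a"},
		} {
			out, err := exec.Command("crictl", args...).CombinedOutput()
			if err != nil {
				fmt.Printf("crictl %v: %v\n%s", args, err, out)
				continue
			}
			fmt.Printf("%s\n", out)
		}
	}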
	Aug 19 19:33:01 embed-certs-024748 crio[729]: time="2024-08-19 19:33:01.856427887Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=95aa9bd3-a9c1-4d62-8c82-e15c1fbf549a name=/runtime.v1.RuntimeService/Version
	Aug 19 19:33:01 embed-certs-024748 crio[729]: time="2024-08-19 19:33:01.856514531Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=95aa9bd3-a9c1-4d62-8c82-e15c1fbf549a name=/runtime.v1.RuntimeService/Version
	Aug 19 19:33:01 embed-certs-024748 crio[729]: time="2024-08-19 19:33:01.857574328Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b3901738-8a5f-4bd0-ba4b-dc483f22ce39 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:33:01 embed-certs-024748 crio[729]: time="2024-08-19 19:33:01.857958022Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095981857939143,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b3901738-8a5f-4bd0-ba4b-dc483f22ce39 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:33:01 embed-certs-024748 crio[729]: time="2024-08-19 19:33:01.858597019Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=248af299-412e-4f90-b413-82948353c978 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:33:01 embed-certs-024748 crio[729]: time="2024-08-19 19:33:01.858647999Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=248af299-412e-4f90-b413-82948353c978 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:33:01 embed-certs-024748 crio[729]: time="2024-08-19 19:33:01.858833366Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8,PodSandboxId:2cd56c89cb3500385d16c5b82561348e2422ac59ce004cda825f81be1d188ece,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724094792920726029,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7acb6ce1-21b6-4cdd-a5cb-76d694fc0a38,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89e69d6f405cec355f5cd65f38a963570166553b8598f4fca5b73a80d437338d,PodSandboxId:bf6fc22a7831f2da0f48530f74acaeb6bd79a7a1af15d958a29a142868066ff6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724094771757901345,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5ea66261-4ba9-4b4c-9d2f-4ad3490d3ed5,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f,PodSandboxId:0c91d6c776a7fded12e3aaa9edf57d7888401809c72d077dc752208355bfb3cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724094768498234032,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-7ww4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbde00d4-6027-4d8d-b51e-bd68915da166,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338,PodSandboxId:472436dd2272fa86a82a775f24e7cf1ddccabcfd91d314c0adba450bd1bcb6c0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724094762648038232,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bmmbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f77f152-f5f4-40f6-9
632-1eaa36b9ea31,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6,PodSandboxId:2cd56c89cb3500385d16c5b82561348e2422ac59ce004cda825f81be1d188ece,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724094762645492196,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7acb6ce1-21b6-4cdd-a5cb-76d694fc0a
38,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22,PodSandboxId:f276ebca5e26f21d36a567d969915505ddf60b7ea37d9c8b78d529962a2fcc8d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724094757719644043,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-024748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e50e02ebe8c4a08870ffac68ea5d2832,},Annotat
ions:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071,PodSandboxId:0c32a47af88ab983143fe824a6c65ca5175d816ef2a93f2233540e92436fbae4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724094757709229899,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-024748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b58db56f9002dd73de08465fd3a
06c18,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a,PodSandboxId:9ecf2a88c0af30fddbde514ebf4371ab2edb96b5b3d009b04c544d1fecea9381,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724094757706929424,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-024748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfe3830de5cba7ee0cea7d338361cf28,},Anno
tations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672,PodSandboxId:f6cd7683df1d2eace86e8ace9e6f78d5db7173e03f1f652874fa0a76909a253c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724094757704514233,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-024748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ff95a4cad5e2fe07b2e8f0bc0f26a77,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=248af299-412e-4f90-b413-82948353c978 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:33:01 embed-certs-024748 crio[729]: time="2024-08-19 19:33:01.905104344Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=12cd2f87-8039-4340-9a12-458a85b9cab0 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:33:01 embed-certs-024748 crio[729]: time="2024-08-19 19:33:01.905184666Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=12cd2f87-8039-4340-9a12-458a85b9cab0 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:33:01 embed-certs-024748 crio[729]: time="2024-08-19 19:33:01.907070316Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f5e163fb-6901-4368-857a-9330279404d4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:33:01 embed-certs-024748 crio[729]: time="2024-08-19 19:33:01.907605983Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095981907579303,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f5e163fb-6901-4368-857a-9330279404d4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:33:01 embed-certs-024748 crio[729]: time="2024-08-19 19:33:01.908410966Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e69968f6-929c-42a7-81e5-414d74d06c15 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:33:01 embed-certs-024748 crio[729]: time="2024-08-19 19:33:01.908464688Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e69968f6-929c-42a7-81e5-414d74d06c15 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:33:01 embed-certs-024748 crio[729]: time="2024-08-19 19:33:01.908682578Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8,PodSandboxId:2cd56c89cb3500385d16c5b82561348e2422ac59ce004cda825f81be1d188ece,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724094792920726029,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7acb6ce1-21b6-4cdd-a5cb-76d694fc0a38,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89e69d6f405cec355f5cd65f38a963570166553b8598f4fca5b73a80d437338d,PodSandboxId:bf6fc22a7831f2da0f48530f74acaeb6bd79a7a1af15d958a29a142868066ff6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724094771757901345,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5ea66261-4ba9-4b4c-9d2f-4ad3490d3ed5,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f,PodSandboxId:0c91d6c776a7fded12e3aaa9edf57d7888401809c72d077dc752208355bfb3cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724094768498234032,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-7ww4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbde00d4-6027-4d8d-b51e-bd68915da166,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338,PodSandboxId:472436dd2272fa86a82a775f24e7cf1ddccabcfd91d314c0adba450bd1bcb6c0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724094762648038232,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bmmbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f77f152-f5f4-40f6-9
632-1eaa36b9ea31,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6,PodSandboxId:2cd56c89cb3500385d16c5b82561348e2422ac59ce004cda825f81be1d188ece,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724094762645492196,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7acb6ce1-21b6-4cdd-a5cb-76d694fc0a
38,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22,PodSandboxId:f276ebca5e26f21d36a567d969915505ddf60b7ea37d9c8b78d529962a2fcc8d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724094757719644043,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-024748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e50e02ebe8c4a08870ffac68ea5d2832,},Annotat
ions:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071,PodSandboxId:0c32a47af88ab983143fe824a6c65ca5175d816ef2a93f2233540e92436fbae4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724094757709229899,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-024748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b58db56f9002dd73de08465fd3a
06c18,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a,PodSandboxId:9ecf2a88c0af30fddbde514ebf4371ab2edb96b5b3d009b04c544d1fecea9381,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724094757706929424,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-024748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfe3830de5cba7ee0cea7d338361cf28,},Anno
tations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672,PodSandboxId:f6cd7683df1d2eace86e8ace9e6f78d5db7173e03f1f652874fa0a76909a253c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724094757704514233,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-024748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ff95a4cad5e2fe07b2e8f0bc0f26a77,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e69968f6-929c-42a7-81e5-414d74d06c15 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:33:01 embed-certs-024748 crio[729]: time="2024-08-19 19:33:01.942081866Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7f35952d-25be-474a-9f8b-368b13675da7 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:33:01 embed-certs-024748 crio[729]: time="2024-08-19 19:33:01.942170087Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7f35952d-25be-474a-9f8b-368b13675da7 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:33:01 embed-certs-024748 crio[729]: time="2024-08-19 19:33:01.943634735Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=96d2f407-04a5-47fa-be25-59b72182ea55 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:33:01 embed-certs-024748 crio[729]: time="2024-08-19 19:33:01.945166736Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095981945074183,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=96d2f407-04a5-47fa-be25-59b72182ea55 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:33:01 embed-certs-024748 crio[729]: time="2024-08-19 19:33:01.946799217Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c5334b37-2ce1-4f16-8adb-b15c8cb54e5d name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:33:01 embed-certs-024748 crio[729]: time="2024-08-19 19:33:01.946912070Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c5334b37-2ce1-4f16-8adb-b15c8cb54e5d name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:33:01 embed-certs-024748 crio[729]: time="2024-08-19 19:33:01.947463755Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8,PodSandboxId:2cd56c89cb3500385d16c5b82561348e2422ac59ce004cda825f81be1d188ece,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724094792920726029,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7acb6ce1-21b6-4cdd-a5cb-76d694fc0a38,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89e69d6f405cec355f5cd65f38a963570166553b8598f4fca5b73a80d437338d,PodSandboxId:bf6fc22a7831f2da0f48530f74acaeb6bd79a7a1af15d958a29a142868066ff6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724094771757901345,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5ea66261-4ba9-4b4c-9d2f-4ad3490d3ed5,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f,PodSandboxId:0c91d6c776a7fded12e3aaa9edf57d7888401809c72d077dc752208355bfb3cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724094768498234032,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-7ww4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbde00d4-6027-4d8d-b51e-bd68915da166,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338,PodSandboxId:472436dd2272fa86a82a775f24e7cf1ddccabcfd91d314c0adba450bd1bcb6c0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724094762648038232,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bmmbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f77f152-f5f4-40f6-9
632-1eaa36b9ea31,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6,PodSandboxId:2cd56c89cb3500385d16c5b82561348e2422ac59ce004cda825f81be1d188ece,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724094762645492196,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7acb6ce1-21b6-4cdd-a5cb-76d694fc0a
38,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22,PodSandboxId:f276ebca5e26f21d36a567d969915505ddf60b7ea37d9c8b78d529962a2fcc8d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724094757719644043,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-024748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e50e02ebe8c4a08870ffac68ea5d2832,},Annotat
ions:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071,PodSandboxId:0c32a47af88ab983143fe824a6c65ca5175d816ef2a93f2233540e92436fbae4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724094757709229899,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-024748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b58db56f9002dd73de08465fd3a
06c18,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a,PodSandboxId:9ecf2a88c0af30fddbde514ebf4371ab2edb96b5b3d009b04c544d1fecea9381,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724094757706929424,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-024748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfe3830de5cba7ee0cea7d338361cf28,},Anno
tations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672,PodSandboxId:f6cd7683df1d2eace86e8ace9e6f78d5db7173e03f1f652874fa0a76909a253c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724094757704514233,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-024748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ff95a4cad5e2fe07b2e8f0bc0f26a77,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c5334b37-2ce1-4f16-8adb-b15c8cb54e5d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	902796698c02b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Running             storage-provisioner       2                   2cd56c89cb350       storage-provisioner
	89e69d6f405ce       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   20 minutes ago      Running             busybox                   1                   bf6fc22a7831f       busybox
	a6bc5b24f616e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      20 minutes ago      Running             coredns                   1                   0c91d6c776a7f       coredns-6f6b679f8f-7ww4z
	3e23a8501fe93       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      20 minutes ago      Running             kube-proxy                1                   472436dd2272f       kube-proxy-bmmbh
	44a4290db8405       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Exited              storage-provisioner       1                   2cd56c89cb350       storage-provisioner
	c09c2a3840c6b       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      20 minutes ago      Running             kube-scheduler            1                   f276ebca5e26f       kube-scheduler-embed-certs-024748
	6e6dab43bac16       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      20 minutes ago      Running             kube-controller-manager   1                   0c32a47af88ab       kube-controller-manager-embed-certs-024748
	d66ad075c652a       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      20 minutes ago      Running             kube-apiserver            1                   9ecf2a88c0af3       kube-apiserver-embed-certs-024748
	a3cb2c04e3eb3       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      20 minutes ago      Running             etcd                      1                   f6cd7683df1d2       etcd-embed-certs-024748
	
	
	==> coredns [a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54879 - 58031 "HINFO IN 5320066620498500483.4879752652727281099. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023169116s
	
	
	==> describe nodes <==
	Name:               embed-certs-024748
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-024748
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411
	                    minikube.k8s.io/name=embed-certs-024748
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T19_03_59_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 19:03:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-024748
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 19:32:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 19:28:31 +0000   Mon, 19 Aug 2024 19:03:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 19:28:31 +0000   Mon, 19 Aug 2024 19:03:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 19:28:31 +0000   Mon, 19 Aug 2024 19:03:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 19:28:31 +0000   Mon, 19 Aug 2024 19:12:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.96
	  Hostname:    embed-certs-024748
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c1d25bc85cf54318a724e1632e8d037c
	  System UUID:                c1d25bc8-5cf5-4318-a724-e1632e8d037c
	  Boot ID:                    10a9592c-f3d9-46b1-ae6c-c03919493ddc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-6f6b679f8f-7ww4z                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-embed-certs-024748                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-embed-certs-024748             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-embed-certs-024748    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-bmmbh                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-embed-certs-024748             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-6867b74b74-kxcwh               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 20m                kube-proxy       
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     29m                kubelet          Node embed-certs-024748 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node embed-certs-024748 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                29m                kubelet          Node embed-certs-024748 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node embed-certs-024748 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           29m                node-controller  Node embed-certs-024748 event: Registered Node embed-certs-024748 in Controller
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node embed-certs-024748 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node embed-certs-024748 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node embed-certs-024748 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20m                node-controller  Node embed-certs-024748 event: Registered Node embed-certs-024748 in Controller
	
	
	==> dmesg <==
	[Aug19 19:12] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.060475] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041906] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.963569] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.445760] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.603121] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.037605] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.065932] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058817] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +0.208895] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +0.141663] systemd-fstab-generator[683]: Ignoring "noauto" option for root device
	[  +0.310180] systemd-fstab-generator[714]: Ignoring "noauto" option for root device
	[  +4.273716] systemd-fstab-generator[810]: Ignoring "noauto" option for root device
	[  +0.058777] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.777321] systemd-fstab-generator[929]: Ignoring "noauto" option for root device
	[  +6.136672] kauditd_printk_skb: 97 callbacks suppressed
	[  +1.389340] systemd-fstab-generator[1542]: Ignoring "noauto" option for root device
	[  +3.877866] kauditd_printk_skb: 80 callbacks suppressed
	[Aug19 19:13] kauditd_printk_skb: 33 callbacks suppressed
	
	
	==> etcd [a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672] <==
	{"level":"info","ts":"2024-08-19T19:12:39.227829Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.96:2379"}
	{"level":"warn","ts":"2024-08-19T19:12:57.384257Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"157.230792ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1888012562079734476 > lease_revoke:<id:1a33916c0e3cd929>","response":"size:27"}
	{"level":"info","ts":"2024-08-19T19:22:39.268891Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":868}
	{"level":"info","ts":"2024-08-19T19:22:39.279559Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":868,"took":"9.952226ms","hash":1381353552,"current-db-size-bytes":2719744,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2719744,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-08-19T19:22:39.279669Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1381353552,"revision":868,"compact-revision":-1}
	{"level":"info","ts":"2024-08-19T19:27:39.275443Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1111}
	{"level":"info","ts":"2024-08-19T19:27:39.278955Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1111,"took":"3.045763ms","hash":4106165817,"current-db-size-bytes":2719744,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1556480,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-08-19T19:27:39.279029Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4106165817,"revision":1111,"compact-revision":868}
	{"level":"info","ts":"2024-08-19T19:32:38.635635Z","caller":"traceutil/trace.go:171","msg":"trace[446953740] linearizableReadLoop","detail":"{readStateIndex:1881; appliedIndex:1880; }","duration":"369.433218ms","start":"2024-08-19T19:32:38.266174Z","end":"2024-08-19T19:32:38.635607Z","steps":["trace[446953740] 'read index received'  (duration: 369.293722ms)","trace[446953740] 'applied index is now lower than readState.Index'  (duration: 139.105µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T19:32:38.635982Z","caller":"traceutil/trace.go:171","msg":"trace[1531123675] transaction","detail":"{read_only:false; response_revision:1596; number_of_response:1; }","duration":"390.257311ms","start":"2024-08-19T19:32:38.245711Z","end":"2024-08-19T19:32:38.635968Z","steps":["trace[1531123675] 'process raft request'  (duration: 389.795408ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T19:32:38.636661Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T19:32:38.245693Z","time spent":"390.33488ms","remote":"127.0.0.1:56900","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1595 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-08-19T19:32:38.636898Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"370.714637ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ingressclasses/\" range_end:\"/registry/ingressclasses0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T19:32:38.636956Z","caller":"traceutil/trace.go:171","msg":"trace[628215007] range","detail":"{range_begin:/registry/ingressclasses/; range_end:/registry/ingressclasses0; response_count:0; response_revision:1596; }","duration":"370.774793ms","start":"2024-08-19T19:32:38.266170Z","end":"2024-08-19T19:32:38.636944Z","steps":["trace[628215007] 'agreement among raft nodes before linearized reading'  (duration: 370.69396ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T19:32:38.637015Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T19:32:38.266124Z","time spent":"370.879364ms","remote":"127.0.0.1:57042","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":0,"response size":27,"request content":"key:\"/registry/ingressclasses/\" range_end:\"/registry/ingressclasses0\" count_only:true "}
	{"level":"warn","ts":"2024-08-19T19:32:38.637895Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.418664ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T19:32:38.637942Z","caller":"traceutil/trace.go:171","msg":"trace[2019821387] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1596; }","duration":"114.469268ms","start":"2024-08-19T19:32:38.523465Z","end":"2024-08-19T19:32:38.637934Z","steps":["trace[2019821387] 'agreement among raft nodes before linearized reading'  (duration: 114.40788ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T19:32:38.638198Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"206.339062ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-19T19:32:38.638235Z","caller":"traceutil/trace.go:171","msg":"trace[1003900417] range","detail":"{range_begin:/registry/services/specs/; range_end:/registry/services/specs0; response_count:0; response_revision:1596; }","duration":"206.380209ms","start":"2024-08-19T19:32:38.431849Z","end":"2024-08-19T19:32:38.638229Z","steps":["trace[1003900417] 'agreement among raft nodes before linearized reading'  (duration: 206.307814ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T19:32:38.639347Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"223.368969ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-19T19:32:38.639389Z","caller":"traceutil/trace.go:171","msg":"trace[1150565570] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:0; response_revision:1596; }","duration":"223.415298ms","start":"2024-08-19T19:32:38.415967Z","end":"2024-08-19T19:32:38.639383Z","steps":["trace[1150565570] 'agreement among raft nodes before linearized reading'  (duration: 223.278044ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T19:32:38.639459Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"244.244664ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/\" range_end:\"/registry/clusterrolebindings0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-19T19:32:38.639510Z","caller":"traceutil/trace.go:171","msg":"trace[1770598483] range","detail":"{range_begin:/registry/clusterrolebindings/; range_end:/registry/clusterrolebindings0; response_count:0; response_revision:1596; }","duration":"244.377266ms","start":"2024-08-19T19:32:38.395124Z","end":"2024-08-19T19:32:38.639501Z","steps":["trace[1770598483] 'agreement among raft nodes before linearized reading'  (duration: 242.686725ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T19:32:39.281964Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1354}
	{"level":"info","ts":"2024-08-19T19:32:39.285217Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1354,"took":"3.009556ms","hash":3146137081,"current-db-size-bytes":2719744,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1548288,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-08-19T19:32:39.285261Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3146137081,"revision":1354,"compact-revision":1111}
	
	
	==> kernel <==
	 19:33:02 up 20 min,  0 users,  load average: 0.08, 0.21, 0.19
	Linux embed-certs-024748 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a] <==
	I0819 19:28:41.665834       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 19:28:41.665935       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0819 19:30:41.666642       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 19:30:41.666916       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0819 19:30:41.667007       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 19:30:41.667034       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0819 19:30:41.668147       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 19:30:41.668227       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0819 19:32:40.667857       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 19:32:40.668162       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0819 19:32:41.670547       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 19:32:41.670684       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0819 19:32:41.670583       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 19:32:41.670814       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0819 19:32:41.672321       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 19:32:41.672431       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071] <==
	E0819 19:27:44.299142       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:27:44.886717       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 19:28:14.306788       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:28:14.894578       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0819 19:28:31.137182       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-024748"
	E0819 19:28:44.318688       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:28:44.903610       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0819 19:28:59.727154       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="283.092µs"
	I0819 19:29:13.729183       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="124.87µs"
	E0819 19:29:14.324367       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:29:14.911748       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 19:29:44.330254       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:29:44.919613       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 19:30:14.337857       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:30:14.927722       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 19:30:44.345019       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:30:44.935818       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 19:31:14.351484       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:31:14.945853       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 19:31:44.357209       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:31:44.952892       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 19:32:14.363874       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:32:14.962461       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 19:32:44.372771       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:32:44.970361       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 19:12:42.915570       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 19:12:42.928183       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.96"]
	E0819 19:12:42.928260       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 19:12:42.976110       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 19:12:42.976159       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 19:12:42.976193       1 server_linux.go:169] "Using iptables Proxier"
	I0819 19:12:42.978950       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 19:12:42.979386       1 server.go:483] "Version info" version="v1.31.0"
	I0819 19:12:42.979415       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 19:12:42.982732       1 config.go:197] "Starting service config controller"
	I0819 19:12:42.982777       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 19:12:42.982800       1 config.go:104] "Starting endpoint slice config controller"
	I0819 19:12:42.982804       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 19:12:42.984120       1 config.go:326] "Starting node config controller"
	I0819 19:12:42.984144       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 19:12:43.083364       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 19:12:43.083449       1 shared_informer.go:320] Caches are synced for service config
	I0819 19:12:43.084762       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22] <==
	I0819 19:12:38.700079       1 serving.go:386] Generated self-signed cert in-memory
	W0819 19:12:40.633695       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0819 19:12:40.633739       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0819 19:12:40.633749       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0819 19:12:40.633754       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0819 19:12:40.695066       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0819 19:12:40.695121       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 19:12:40.703715       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0819 19:12:40.703966       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0819 19:12:40.703987       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0819 19:12:40.704264       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 19:12:40.805570       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 19:31:52 embed-certs-024748 kubelet[936]: E0819 19:31:52.714250     936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-kxcwh" podUID="15f86629-d916-4fdc-9ecf-9cb1b6c83f85"
	Aug 19 19:31:56 embed-certs-024748 kubelet[936]: E0819 19:31:56.974596     936 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095916974015018,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:31:56 embed-certs-024748 kubelet[936]: E0819 19:31:56.975105     936 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095916974015018,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:32:06 embed-certs-024748 kubelet[936]: E0819 19:32:06.715712     936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-kxcwh" podUID="15f86629-d916-4fdc-9ecf-9cb1b6c83f85"
	Aug 19 19:32:06 embed-certs-024748 kubelet[936]: E0819 19:32:06.977042     936 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095926976657256,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:32:06 embed-certs-024748 kubelet[936]: E0819 19:32:06.977168     936 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095926976657256,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:32:16 embed-certs-024748 kubelet[936]: E0819 19:32:16.978482     936 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095936978158357,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:32:16 embed-certs-024748 kubelet[936]: E0819 19:32:16.978524     936 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095936978158357,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:32:19 embed-certs-024748 kubelet[936]: E0819 19:32:19.712414     936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-kxcwh" podUID="15f86629-d916-4fdc-9ecf-9cb1b6c83f85"
	Aug 19 19:32:26 embed-certs-024748 kubelet[936]: E0819 19:32:26.979827     936 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095946979361675,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:32:26 embed-certs-024748 kubelet[936]: E0819 19:32:26.979882     936 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095946979361675,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:32:33 embed-certs-024748 kubelet[936]: E0819 19:32:33.712744     936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-kxcwh" podUID="15f86629-d916-4fdc-9ecf-9cb1b6c83f85"
	Aug 19 19:32:36 embed-certs-024748 kubelet[936]: E0819 19:32:36.727518     936 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 19:32:36 embed-certs-024748 kubelet[936]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 19:32:36 embed-certs-024748 kubelet[936]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 19:32:36 embed-certs-024748 kubelet[936]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 19:32:36 embed-certs-024748 kubelet[936]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 19:32:36 embed-certs-024748 kubelet[936]: E0819 19:32:36.983036     936 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095956982250606,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:32:36 embed-certs-024748 kubelet[936]: E0819 19:32:36.983074     936 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095956982250606,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:32:46 embed-certs-024748 kubelet[936]: E0819 19:32:46.712491     936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-kxcwh" podUID="15f86629-d916-4fdc-9ecf-9cb1b6c83f85"
	Aug 19 19:32:46 embed-certs-024748 kubelet[936]: E0819 19:32:46.985108     936 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095966984629225,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:32:46 embed-certs-024748 kubelet[936]: E0819 19:32:46.985160     936 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095966984629225,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:32:56 embed-certs-024748 kubelet[936]: E0819 19:32:56.988175     936 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095976987677333,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:32:56 embed-certs-024748 kubelet[936]: E0819 19:32:56.988236     936 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095976987677333,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:33:01 embed-certs-024748 kubelet[936]: E0819 19:33:01.712524     936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-kxcwh" podUID="15f86629-d916-4fdc-9ecf-9cb1b6c83f85"
	
	
	==> storage-provisioner [44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6] <==
	I0819 19:12:42.788642       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0819 19:13:12.796084       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8] <==
	I0819 19:13:13.018254       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 19:13:13.030894       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 19:13:13.031102       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 19:13:30.437350       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 19:13:30.437857       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-024748_6b5cddaf-f03c-4e72-9562-f24f0996e8ad!
	I0819 19:13:30.437928       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ab9322fd-2e11-4b42-8a8e-29ec8425fd9d", APIVersion:"v1", ResourceVersion:"653", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-024748_6b5cddaf-f03c-4e72-9562-f24f0996e8ad became leader
	I0819 19:13:30.538391       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-024748_6b5cddaf-f03c-4e72-9562-f24f0996e8ad!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-024748 -n embed-certs-024748
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-024748 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-kxcwh
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-024748 describe pod metrics-server-6867b74b74-kxcwh
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-024748 describe pod metrics-server-6867b74b74-kxcwh: exit status 1 (64.489247ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-kxcwh" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-024748 describe pod metrics-server-6867b74b74-kxcwh: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (388.47s)
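A plausible reading of the post-mortem above, judging from the analogous no-preload checks further down (start_stop_delete_test.go:287/297): AddonExistsAfterStop waits for a "k8s-app=kubernetes-dashboard" pod and then inspects deploy/dashboard-metrics-scraper for the registry.k8s.io/echoserver:1.4 image, while the only non-running pod in this cluster is metrics-server-6867b74b74-kxcwh, which kubelet repeatedly reports in ImagePullBackOff on the unreachable image fake.domain/registry.k8s.io/echoserver:1.4 and which was already gone (NotFound) by the time the final describe ran. The commands below are a minimal diagnostic sketch only; they were not part of the recorded run, the --context value is copied from the logs above, and the label selectors are assumptions based on the standard upstream manifests.

	# Did the dashboard addon ever schedule a pod? (assumed label selector)
	kubectl --context embed-certs-024748 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# Inspect the metrics-server deployment that kubelet reports as ImagePullBackOff
	kubectl --context embed-certs-024748 -n kube-system describe deploy metrics-server
	# Recent failure events in kube-system, e.g. image pull failures
	kubectl --context embed-certs-024748 -n kube-system get events --sort-by=.lastTimestamp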

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (330.73s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-278232 -n no-preload-278232
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-19 19:32:33.483598573 +0000 UTC m=+6500.683525379
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-278232 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-278232 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.183µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-278232 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-278232 -n no-preload-278232
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-278232 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-278232 logs -n 25: (1.408575309s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-571803                           | enable-default-cni-571803    | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | sudo systemctl status crio                             |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-571803                           | enable-default-cni-571803    | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | sudo systemctl cat crio                                |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-571803                           | enable-default-cni-571803    | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |         |                     |                     |
	|         | \;                                                     |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-571803                           | enable-default-cni-571803    | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | sudo crio config                                       |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-571803                           | enable-default-cni-571803    | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	| delete  | -p                                                     | disable-driver-mounts-737091 | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | disable-driver-mounts-737091                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-982795 | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:04 UTC |
	|         | default-k8s-diff-port-982795                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-278232             | no-preload-278232            | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC | 19 Aug 24 19:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-278232                                   | no-preload-278232            | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-982795  | default-k8s-diff-port-982795 | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC | 19 Aug 24 19:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-982795 | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC |                     |
	|         | default-k8s-diff-port-982795                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-024748            | embed-certs-024748           | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC | 19 Aug 24 19:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-024748                                  | embed-certs-024748           | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-104669        | old-k8s-version-104669       | jenkins | v1.33.1 | 19 Aug 24 19:06 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-278232                  | no-preload-278232            | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-278232                                   | no-preload-278232            | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC | 19 Aug 24 19:18 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-982795       | default-k8s-diff-port-982795 | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-024748                 | embed-certs-024748           | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-982795 | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC | 19 Aug 24 19:17 UTC |
	|         | default-k8s-diff-port-982795                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-024748                                  | embed-certs-024748           | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC | 19 Aug 24 19:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-104669                              | old-k8s-version-104669       | jenkins | v1.33.1 | 19 Aug 24 19:08 UTC | 19 Aug 24 19:08 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-104669             | old-k8s-version-104669       | jenkins | v1.33.1 | 19 Aug 24 19:08 UTC | 19 Aug 24 19:08 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-104669                              | old-k8s-version-104669       | jenkins | v1.33.1 | 19 Aug 24 19:08 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-104669                              | old-k8s-version-104669       | jenkins | v1.33.1 | 19 Aug 24 19:32 UTC | 19 Aug 24 19:32 UTC |
	| start   | -p newest-cni-125279 --memory=2200 --alsologtostderr   | newest-cni-125279            | jenkins | v1.33.1 | 19 Aug 24 19:32 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 19:32:02
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 19:32:02.317801  445411 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:32:02.317947  445411 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:32:02.317956  445411 out.go:358] Setting ErrFile to fd 2...
	I0819 19:32:02.317960  445411 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:32:02.318140  445411 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-372744/.minikube/bin
	I0819 19:32:02.318721  445411 out.go:352] Setting JSON to false
	I0819 19:32:02.319959  445411 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":11665,"bootTime":1724084257,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 19:32:02.320029  445411 start.go:139] virtualization: kvm guest
	I0819 19:32:02.323301  445411 out.go:177] * [newest-cni-125279] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 19:32:02.324805  445411 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 19:32:02.324907  445411 notify.go:220] Checking for updates...
	I0819 19:32:02.327570  445411 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 19:32:02.328867  445411 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 19:32:02.330030  445411 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 19:32:02.331155  445411 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 19:32:02.332531  445411 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 19:32:02.334284  445411 config.go:182] Loaded profile config "default-k8s-diff-port-982795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:32:02.334382  445411 config.go:182] Loaded profile config "embed-certs-024748": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:32:02.334468  445411 config.go:182] Loaded profile config "no-preload-278232": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:32:02.334557  445411 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 19:32:02.372274  445411 out.go:177] * Using the kvm2 driver based on user configuration
	I0819 19:32:02.373727  445411 start.go:297] selected driver: kvm2
	I0819 19:32:02.373751  445411 start.go:901] validating driver "kvm2" against <nil>
	I0819 19:32:02.373765  445411 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 19:32:02.374812  445411 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 19:32:02.374907  445411 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19468-372744/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 19:32:02.391377  445411 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 19:32:02.391431  445411 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0819 19:32:02.391476  445411 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0819 19:32:02.391744  445411 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0819 19:32:02.391833  445411 cni.go:84] Creating CNI manager for ""
	I0819 19:32:02.391853  445411 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:32:02.391867  445411 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 19:32:02.391941  445411 start.go:340] cluster config:
	{Name:newest-cni-125279 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-125279 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:32:02.392094  445411 iso.go:125] acquiring lock: {Name:mk4c0ac1c3202b1a296739df622960e7a0bd8566 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 19:32:02.394156  445411 out.go:177] * Starting "newest-cni-125279" primary control-plane node in "newest-cni-125279" cluster
	I0819 19:32:02.395383  445411 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 19:32:02.395423  445411 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 19:32:02.395432  445411 cache.go:56] Caching tarball of preloaded images
	I0819 19:32:02.395526  445411 preload.go:172] Found /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 19:32:02.395540  445411 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 19:32:02.395701  445411 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/newest-cni-125279/config.json ...
	I0819 19:32:02.395728  445411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/newest-cni-125279/config.json: {Name:mk56c54824bf8b7ba5a8e97517d1b3bc99bf8d31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:32:02.395952  445411 start.go:360] acquireMachinesLock for newest-cni-125279: {Name:mk24ba67a747357e9ce40f1e460d2bb0bc59cc75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 19:32:02.395997  445411 start.go:364] duration metric: took 24.973µs to acquireMachinesLock for "newest-cni-125279"
	I0819 19:32:02.396022  445411 start.go:93] Provisioning new machine with config: &{Name:newest-cni-125279 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-125279 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 19:32:02.396105  445411 start.go:125] createHost starting for "" (driver="kvm2")
	I0819 19:32:02.397869  445411 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 19:32:02.398005  445411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:32:02.398039  445411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:32:02.413037  445411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36907
	I0819 19:32:02.413486  445411 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:32:02.414025  445411 main.go:141] libmachine: Using API Version  1
	I0819 19:32:02.414046  445411 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:32:02.414404  445411 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:32:02.414604  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetMachineName
	I0819 19:32:02.414754  445411 main.go:141] libmachine: (newest-cni-125279) Calling .DriverName
	I0819 19:32:02.414893  445411 start.go:159] libmachine.API.Create for "newest-cni-125279" (driver="kvm2")
	I0819 19:32:02.414922  445411 client.go:168] LocalClient.Create starting
	I0819 19:32:02.414957  445411 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem
	I0819 19:32:02.414993  445411 main.go:141] libmachine: Decoding PEM data...
	I0819 19:32:02.415011  445411 main.go:141] libmachine: Parsing certificate...
	I0819 19:32:02.415074  445411 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem
	I0819 19:32:02.415093  445411 main.go:141] libmachine: Decoding PEM data...
	I0819 19:32:02.415106  445411 main.go:141] libmachine: Parsing certificate...
	I0819 19:32:02.415124  445411 main.go:141] libmachine: Running pre-create checks...
	I0819 19:32:02.415133  445411 main.go:141] libmachine: (newest-cni-125279) Calling .PreCreateCheck
	I0819 19:32:02.415482  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetConfigRaw
	I0819 19:32:02.415858  445411 main.go:141] libmachine: Creating machine...
	I0819 19:32:02.415872  445411 main.go:141] libmachine: (newest-cni-125279) Calling .Create
	I0819 19:32:02.416006  445411 main.go:141] libmachine: (newest-cni-125279) Creating KVM machine...
	I0819 19:32:02.417294  445411 main.go:141] libmachine: (newest-cni-125279) DBG | found existing default KVM network
	I0819 19:32:02.418650  445411 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:32:02.418450  445433 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:c6:65:2e} reservation:<nil>}
	I0819 19:32:02.420057  445411 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:32:02.419936  445433 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a8720}
	I0819 19:32:02.420087  445411 main.go:141] libmachine: (newest-cni-125279) DBG | created network xml: 
	I0819 19:32:02.420097  445411 main.go:141] libmachine: (newest-cni-125279) DBG | <network>
	I0819 19:32:02.420106  445411 main.go:141] libmachine: (newest-cni-125279) DBG |   <name>mk-newest-cni-125279</name>
	I0819 19:32:02.420115  445411 main.go:141] libmachine: (newest-cni-125279) DBG |   <dns enable='no'/>
	I0819 19:32:02.420130  445411 main.go:141] libmachine: (newest-cni-125279) DBG |   
	I0819 19:32:02.420157  445411 main.go:141] libmachine: (newest-cni-125279) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0819 19:32:02.420173  445411 main.go:141] libmachine: (newest-cni-125279) DBG |     <dhcp>
	I0819 19:32:02.420184  445411 main.go:141] libmachine: (newest-cni-125279) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0819 19:32:02.420202  445411 main.go:141] libmachine: (newest-cni-125279) DBG |     </dhcp>
	I0819 19:32:02.420215  445411 main.go:141] libmachine: (newest-cni-125279) DBG |   </ip>
	I0819 19:32:02.420226  445411 main.go:141] libmachine: (newest-cni-125279) DBG |   
	I0819 19:32:02.420235  445411 main.go:141] libmachine: (newest-cni-125279) DBG | </network>
	I0819 19:32:02.420244  445411 main.go:141] libmachine: (newest-cni-125279) DBG | 
	I0819 19:32:02.425975  445411 main.go:141] libmachine: (newest-cni-125279) DBG | trying to create private KVM network mk-newest-cni-125279 192.168.50.0/24...
	I0819 19:32:02.500440  445411 main.go:141] libmachine: (newest-cni-125279) DBG | private KVM network mk-newest-cni-125279 192.168.50.0/24 created
	I0819 19:32:02.500473  445411 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:32:02.500424  445433 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 19:32:02.500483  445411 main.go:141] libmachine: (newest-cni-125279) Setting up store path in /home/jenkins/minikube-integration/19468-372744/.minikube/machines/newest-cni-125279 ...
	I0819 19:32:02.500494  445411 main.go:141] libmachine: (newest-cni-125279) Building disk image from file:///home/jenkins/minikube-integration/19468-372744/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 19:32:02.500640  445411 main.go:141] libmachine: (newest-cni-125279) Downloading /home/jenkins/minikube-integration/19468-372744/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19468-372744/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 19:32:02.798963  445411 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:32:02.798783  445433 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/newest-cni-125279/id_rsa...
	I0819 19:32:03.381276  445411 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:32:03.381113  445433 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/newest-cni-125279/newest-cni-125279.rawdisk...
	I0819 19:32:03.381317  445411 main.go:141] libmachine: (newest-cni-125279) DBG | Writing magic tar header
	I0819 19:32:03.381337  445411 main.go:141] libmachine: (newest-cni-125279) DBG | Writing SSH key tar header
	I0819 19:32:03.381352  445411 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:32:03.381273  445433 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19468-372744/.minikube/machines/newest-cni-125279 ...
	I0819 19:32:03.381457  445411 main.go:141] libmachine: (newest-cni-125279) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/newest-cni-125279
	I0819 19:32:03.381486  445411 main.go:141] libmachine: (newest-cni-125279) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744/.minikube/machines/newest-cni-125279 (perms=drwx------)
	I0819 19:32:03.381497  445411 main.go:141] libmachine: (newest-cni-125279) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744/.minikube/machines
	I0819 19:32:03.381516  445411 main.go:141] libmachine: (newest-cni-125279) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 19:32:03.381533  445411 main.go:141] libmachine: (newest-cni-125279) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744/.minikube/machines (perms=drwxr-xr-x)
	I0819 19:32:03.381543  445411 main.go:141] libmachine: (newest-cni-125279) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19468-372744
	I0819 19:32:03.381554  445411 main.go:141] libmachine: (newest-cni-125279) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 19:32:03.381564  445411 main.go:141] libmachine: (newest-cni-125279) DBG | Checking permissions on dir: /home/jenkins
	I0819 19:32:03.381585  445411 main.go:141] libmachine: (newest-cni-125279) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744/.minikube (perms=drwxr-xr-x)
	I0819 19:32:03.381647  445411 main.go:141] libmachine: (newest-cni-125279) DBG | Checking permissions on dir: /home
	I0819 19:32:03.381673  445411 main.go:141] libmachine: (newest-cni-125279) Setting executable bit set on /home/jenkins/minikube-integration/19468-372744 (perms=drwxrwxr-x)
	I0819 19:32:03.381679  445411 main.go:141] libmachine: (newest-cni-125279) DBG | Skipping /home - not owner
	I0819 19:32:03.381694  445411 main.go:141] libmachine: (newest-cni-125279) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 19:32:03.381712  445411 main.go:141] libmachine: (newest-cni-125279) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 19:32:03.381734  445411 main.go:141] libmachine: (newest-cni-125279) Creating domain...
	I0819 19:32:03.382890  445411 main.go:141] libmachine: (newest-cni-125279) define libvirt domain using xml: 
	I0819 19:32:03.382908  445411 main.go:141] libmachine: (newest-cni-125279) <domain type='kvm'>
	I0819 19:32:03.382921  445411 main.go:141] libmachine: (newest-cni-125279)   <name>newest-cni-125279</name>
	I0819 19:32:03.382926  445411 main.go:141] libmachine: (newest-cni-125279)   <memory unit='MiB'>2200</memory>
	I0819 19:32:03.382932  445411 main.go:141] libmachine: (newest-cni-125279)   <vcpu>2</vcpu>
	I0819 19:32:03.382936  445411 main.go:141] libmachine: (newest-cni-125279)   <features>
	I0819 19:32:03.382941  445411 main.go:141] libmachine: (newest-cni-125279)     <acpi/>
	I0819 19:32:03.382945  445411 main.go:141] libmachine: (newest-cni-125279)     <apic/>
	I0819 19:32:03.382957  445411 main.go:141] libmachine: (newest-cni-125279)     <pae/>
	I0819 19:32:03.382961  445411 main.go:141] libmachine: (newest-cni-125279)     
	I0819 19:32:03.382966  445411 main.go:141] libmachine: (newest-cni-125279)   </features>
	I0819 19:32:03.382971  445411 main.go:141] libmachine: (newest-cni-125279)   <cpu mode='host-passthrough'>
	I0819 19:32:03.382976  445411 main.go:141] libmachine: (newest-cni-125279)   
	I0819 19:32:03.382980  445411 main.go:141] libmachine: (newest-cni-125279)   </cpu>
	I0819 19:32:03.382985  445411 main.go:141] libmachine: (newest-cni-125279)   <os>
	I0819 19:32:03.382989  445411 main.go:141] libmachine: (newest-cni-125279)     <type>hvm</type>
	I0819 19:32:03.383027  445411 main.go:141] libmachine: (newest-cni-125279)     <boot dev='cdrom'/>
	I0819 19:32:03.383054  445411 main.go:141] libmachine: (newest-cni-125279)     <boot dev='hd'/>
	I0819 19:32:03.383064  445411 main.go:141] libmachine: (newest-cni-125279)     <bootmenu enable='no'/>
	I0819 19:32:03.383078  445411 main.go:141] libmachine: (newest-cni-125279)   </os>
	I0819 19:32:03.383088  445411 main.go:141] libmachine: (newest-cni-125279)   <devices>
	I0819 19:32:03.383094  445411 main.go:141] libmachine: (newest-cni-125279)     <disk type='file' device='cdrom'>
	I0819 19:32:03.383124  445411 main.go:141] libmachine: (newest-cni-125279)       <source file='/home/jenkins/minikube-integration/19468-372744/.minikube/machines/newest-cni-125279/boot2docker.iso'/>
	I0819 19:32:03.383132  445411 main.go:141] libmachine: (newest-cni-125279)       <target dev='hdc' bus='scsi'/>
	I0819 19:32:03.383139  445411 main.go:141] libmachine: (newest-cni-125279)       <readonly/>
	I0819 19:32:03.383146  445411 main.go:141] libmachine: (newest-cni-125279)     </disk>
	I0819 19:32:03.383153  445411 main.go:141] libmachine: (newest-cni-125279)     <disk type='file' device='disk'>
	I0819 19:32:03.383164  445411 main.go:141] libmachine: (newest-cni-125279)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 19:32:03.383174  445411 main.go:141] libmachine: (newest-cni-125279)       <source file='/home/jenkins/minikube-integration/19468-372744/.minikube/machines/newest-cni-125279/newest-cni-125279.rawdisk'/>
	I0819 19:32:03.383185  445411 main.go:141] libmachine: (newest-cni-125279)       <target dev='hda' bus='virtio'/>
	I0819 19:32:03.383195  445411 main.go:141] libmachine: (newest-cni-125279)     </disk>
	I0819 19:32:03.383201  445411 main.go:141] libmachine: (newest-cni-125279)     <interface type='network'>
	I0819 19:32:03.383208  445411 main.go:141] libmachine: (newest-cni-125279)       <source network='mk-newest-cni-125279'/>
	I0819 19:32:03.383214  445411 main.go:141] libmachine: (newest-cni-125279)       <model type='virtio'/>
	I0819 19:32:03.383221  445411 main.go:141] libmachine: (newest-cni-125279)     </interface>
	I0819 19:32:03.383228  445411 main.go:141] libmachine: (newest-cni-125279)     <interface type='network'>
	I0819 19:32:03.383234  445411 main.go:141] libmachine: (newest-cni-125279)       <source network='default'/>
	I0819 19:32:03.383241  445411 main.go:141] libmachine: (newest-cni-125279)       <model type='virtio'/>
	I0819 19:32:03.383247  445411 main.go:141] libmachine: (newest-cni-125279)     </interface>
	I0819 19:32:03.383254  445411 main.go:141] libmachine: (newest-cni-125279)     <serial type='pty'>
	I0819 19:32:03.383276  445411 main.go:141] libmachine: (newest-cni-125279)       <target port='0'/>
	I0819 19:32:03.383298  445411 main.go:141] libmachine: (newest-cni-125279)     </serial>
	I0819 19:32:03.383312  445411 main.go:141] libmachine: (newest-cni-125279)     <console type='pty'>
	I0819 19:32:03.383324  445411 main.go:141] libmachine: (newest-cni-125279)       <target type='serial' port='0'/>
	I0819 19:32:03.383335  445411 main.go:141] libmachine: (newest-cni-125279)     </console>
	I0819 19:32:03.383343  445411 main.go:141] libmachine: (newest-cni-125279)     <rng model='virtio'>
	I0819 19:32:03.383354  445411 main.go:141] libmachine: (newest-cni-125279)       <backend model='random'>/dev/random</backend>
	I0819 19:32:03.383365  445411 main.go:141] libmachine: (newest-cni-125279)     </rng>
	I0819 19:32:03.383373  445411 main.go:141] libmachine: (newest-cni-125279)     
	I0819 19:32:03.383398  445411 main.go:141] libmachine: (newest-cni-125279)     
	I0819 19:32:03.383411  445411 main.go:141] libmachine: (newest-cni-125279)   </devices>
	I0819 19:32:03.383418  445411 main.go:141] libmachine: (newest-cni-125279) </domain>
	I0819 19:32:03.383432  445411 main.go:141] libmachine: (newest-cni-125279) 
	I0819 19:32:03.388101  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:e9:4f:ba in network default
	I0819 19:32:03.388676  445411 main.go:141] libmachine: (newest-cni-125279) Ensuring networks are active...
	I0819 19:32:03.388697  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:03.389462  445411 main.go:141] libmachine: (newest-cni-125279) Ensuring network default is active
	I0819 19:32:03.389784  445411 main.go:141] libmachine: (newest-cni-125279) Ensuring network mk-newest-cni-125279 is active
	I0819 19:32:03.390393  445411 main.go:141] libmachine: (newest-cni-125279) Getting domain xml...
	I0819 19:32:03.391229  445411 main.go:141] libmachine: (newest-cni-125279) Creating domain...
	I0819 19:32:04.679193  445411 main.go:141] libmachine: (newest-cni-125279) Waiting to get IP...
	I0819 19:32:04.680262  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:04.680726  445411 main.go:141] libmachine: (newest-cni-125279) DBG | unable to find current IP address of domain newest-cni-125279 in network mk-newest-cni-125279
	I0819 19:32:04.680778  445411 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:32:04.680693  445433 retry.go:31] will retry after 224.19994ms: waiting for machine to come up
	I0819 19:32:04.906192  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:04.906687  445411 main.go:141] libmachine: (newest-cni-125279) DBG | unable to find current IP address of domain newest-cni-125279 in network mk-newest-cni-125279
	I0819 19:32:04.906726  445411 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:32:04.906631  445433 retry.go:31] will retry after 368.917614ms: waiting for machine to come up
	I0819 19:32:05.277245  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:05.277768  445411 main.go:141] libmachine: (newest-cni-125279) DBG | unable to find current IP address of domain newest-cni-125279 in network mk-newest-cni-125279
	I0819 19:32:05.277796  445411 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:32:05.277717  445433 retry.go:31] will retry after 485.273357ms: waiting for machine to come up
	I0819 19:32:05.764588  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:05.765104  445411 main.go:141] libmachine: (newest-cni-125279) DBG | unable to find current IP address of domain newest-cni-125279 in network mk-newest-cni-125279
	I0819 19:32:05.765134  445411 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:32:05.765062  445433 retry.go:31] will retry after 428.947871ms: waiting for machine to come up
	I0819 19:32:06.195692  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:06.196191  445411 main.go:141] libmachine: (newest-cni-125279) DBG | unable to find current IP address of domain newest-cni-125279 in network mk-newest-cni-125279
	I0819 19:32:06.196225  445411 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:32:06.196132  445433 retry.go:31] will retry after 509.986197ms: waiting for machine to come up
	I0819 19:32:06.708134  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:06.708779  445411 main.go:141] libmachine: (newest-cni-125279) DBG | unable to find current IP address of domain newest-cni-125279 in network mk-newest-cni-125279
	I0819 19:32:06.708809  445411 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:32:06.708708  445433 retry.go:31] will retry after 722.569889ms: waiting for machine to come up
	I0819 19:32:07.433380  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:07.433795  445411 main.go:141] libmachine: (newest-cni-125279) DBG | unable to find current IP address of domain newest-cni-125279 in network mk-newest-cni-125279
	I0819 19:32:07.433825  445411 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:32:07.433723  445433 retry.go:31] will retry after 891.136923ms: waiting for machine to come up
	I0819 19:32:08.326855  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:08.327398  445411 main.go:141] libmachine: (newest-cni-125279) DBG | unable to find current IP address of domain newest-cni-125279 in network mk-newest-cni-125279
	I0819 19:32:08.327429  445411 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:32:08.327341  445433 retry.go:31] will retry after 896.894835ms: waiting for machine to come up
	I0819 19:32:09.226343  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:09.226809  445411 main.go:141] libmachine: (newest-cni-125279) DBG | unable to find current IP address of domain newest-cni-125279 in network mk-newest-cni-125279
	I0819 19:32:09.226841  445411 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:32:09.226758  445433 retry.go:31] will retry after 1.681643232s: waiting for machine to come up
	I0819 19:32:10.910683  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:10.911127  445411 main.go:141] libmachine: (newest-cni-125279) DBG | unable to find current IP address of domain newest-cni-125279 in network mk-newest-cni-125279
	I0819 19:32:10.911172  445411 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:32:10.911068  445433 retry.go:31] will retry after 2.135746694s: waiting for machine to come up
	I0819 19:32:13.048343  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:13.048838  445411 main.go:141] libmachine: (newest-cni-125279) DBG | unable to find current IP address of domain newest-cni-125279 in network mk-newest-cni-125279
	I0819 19:32:13.048872  445411 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:32:13.048778  445433 retry.go:31] will retry after 2.305017457s: waiting for machine to come up
	I0819 19:32:15.355145  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:15.355687  445411 main.go:141] libmachine: (newest-cni-125279) DBG | unable to find current IP address of domain newest-cni-125279 in network mk-newest-cni-125279
	I0819 19:32:15.355719  445411 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:32:15.355596  445433 retry.go:31] will retry after 2.545066173s: waiting for machine to come up
	I0819 19:32:17.902054  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:17.902474  445411 main.go:141] libmachine: (newest-cni-125279) DBG | unable to find current IP address of domain newest-cni-125279 in network mk-newest-cni-125279
	I0819 19:32:17.902500  445411 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:32:17.902429  445433 retry.go:31] will retry after 3.775157108s: waiting for machine to come up
	I0819 19:32:21.682467  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:21.682937  445411 main.go:141] libmachine: (newest-cni-125279) DBG | unable to find current IP address of domain newest-cni-125279 in network mk-newest-cni-125279
	I0819 19:32:21.682968  445411 main.go:141] libmachine: (newest-cni-125279) DBG | I0819 19:32:21.682882  445433 retry.go:31] will retry after 4.681714962s: waiting for machine to come up
	I0819 19:32:26.369533  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:26.370079  445411 main.go:141] libmachine: (newest-cni-125279) Found IP for machine: 192.168.50.232
	I0819 19:32:26.370123  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has current primary IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:26.370133  445411 main.go:141] libmachine: (newest-cni-125279) Reserving static IP address...
	I0819 19:32:26.370514  445411 main.go:141] libmachine: (newest-cni-125279) DBG | unable to find host DHCP lease matching {name: "newest-cni-125279", mac: "52:54:00:65:45:fc", ip: "192.168.50.232"} in network mk-newest-cni-125279
	I0819 19:32:26.449045  445411 main.go:141] libmachine: (newest-cni-125279) DBG | Getting to WaitForSSH function...
	I0819 19:32:26.449080  445411 main.go:141] libmachine: (newest-cni-125279) Reserved static IP address: 192.168.50.232
	I0819 19:32:26.449095  445411 main.go:141] libmachine: (newest-cni-125279) Waiting for SSH to be available...
	I0819 19:32:26.451960  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:26.452361  445411 main.go:141] libmachine: (newest-cni-125279) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279
	I0819 19:32:26.452391  445411 main.go:141] libmachine: (newest-cni-125279) DBG | unable to find defined IP address of network mk-newest-cni-125279 interface with MAC address 52:54:00:65:45:fc
	I0819 19:32:26.452539  445411 main.go:141] libmachine: (newest-cni-125279) DBG | Using SSH client type: external
	I0819 19:32:26.452565  445411 main.go:141] libmachine: (newest-cni-125279) DBG | Using SSH private key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/newest-cni-125279/id_rsa (-rw-------)
	I0819 19:32:26.452608  445411 main.go:141] libmachine: (newest-cni-125279) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19468-372744/.minikube/machines/newest-cni-125279/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 19:32:26.452626  445411 main.go:141] libmachine: (newest-cni-125279) DBG | About to run SSH command:
	I0819 19:32:26.452643  445411 main.go:141] libmachine: (newest-cni-125279) DBG | exit 0
	I0819 19:32:26.456705  445411 main.go:141] libmachine: (newest-cni-125279) DBG | SSH cmd err, output: exit status 255: 
	I0819 19:32:26.456734  445411 main.go:141] libmachine: (newest-cni-125279) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0819 19:32:26.456746  445411 main.go:141] libmachine: (newest-cni-125279) DBG | command : exit 0
	I0819 19:32:26.456753  445411 main.go:141] libmachine: (newest-cni-125279) DBG | err     : exit status 255
	I0819 19:32:26.456765  445411 main.go:141] libmachine: (newest-cni-125279) DBG | output  : 
	I0819 19:32:29.458674  445411 main.go:141] libmachine: (newest-cni-125279) DBG | Getting to WaitForSSH function...
	I0819 19:32:29.461202  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:29.461680  445411 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:32:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:32:29.461712  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:29.461830  445411 main.go:141] libmachine: (newest-cni-125279) DBG | Using SSH client type: external
	I0819 19:32:29.461857  445411 main.go:141] libmachine: (newest-cni-125279) DBG | Using SSH private key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/newest-cni-125279/id_rsa (-rw-------)
	I0819 19:32:29.461917  445411 main.go:141] libmachine: (newest-cni-125279) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.232 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19468-372744/.minikube/machines/newest-cni-125279/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 19:32:29.461940  445411 main.go:141] libmachine: (newest-cni-125279) DBG | About to run SSH command:
	I0819 19:32:29.461953  445411 main.go:141] libmachine: (newest-cni-125279) DBG | exit 0
	I0819 19:32:29.583911  445411 main.go:141] libmachine: (newest-cni-125279) DBG | SSH cmd err, output: <nil>: 
	I0819 19:32:29.584154  445411 main.go:141] libmachine: (newest-cni-125279) KVM machine creation complete!
	I0819 19:32:29.584492  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetConfigRaw
	I0819 19:32:29.585123  445411 main.go:141] libmachine: (newest-cni-125279) Calling .DriverName
	I0819 19:32:29.585389  445411 main.go:141] libmachine: (newest-cni-125279) Calling .DriverName
	I0819 19:32:29.585598  445411 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 19:32:29.585613  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetState
	I0819 19:32:29.587203  445411 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 19:32:29.587238  445411 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 19:32:29.587247  445411 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 19:32:29.587260  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHHostname
	I0819 19:32:29.589944  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:29.590478  445411 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:32:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:32:29.590505  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:29.590641  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHPort
	I0819 19:32:29.590882  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:32:29.591071  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:32:29.591263  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHUsername
	I0819 19:32:29.591474  445411 main.go:141] libmachine: Using SSH client type: native
	I0819 19:32:29.591801  445411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.232 22 <nil> <nil>}
	I0819 19:32:29.591818  445411 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 19:32:29.691215  445411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:32:29.691248  445411 main.go:141] libmachine: Detecting the provisioner...
	I0819 19:32:29.691260  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHHostname
	I0819 19:32:29.694391  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:29.694727  445411 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:32:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:32:29.694770  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:29.694916  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHPort
	I0819 19:32:29.695132  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:32:29.695322  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:32:29.695488  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHUsername
	I0819 19:32:29.695612  445411 main.go:141] libmachine: Using SSH client type: native
	I0819 19:32:29.695821  445411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.232 22 <nil> <nil>}
	I0819 19:32:29.695836  445411 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 19:32:29.796928  445411 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 19:32:29.797037  445411 main.go:141] libmachine: found compatible host: buildroot
	I0819 19:32:29.797054  445411 main.go:141] libmachine: Provisioning with buildroot...
	I0819 19:32:29.797066  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetMachineName
	I0819 19:32:29.797366  445411 buildroot.go:166] provisioning hostname "newest-cni-125279"
	I0819 19:32:29.797403  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetMachineName
	I0819 19:32:29.797632  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHHostname
	I0819 19:32:29.800770  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:29.801208  445411 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:32:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:32:29.801234  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:29.801430  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHPort
	I0819 19:32:29.801626  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:32:29.801815  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:32:29.802008  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHUsername
	I0819 19:32:29.802219  445411 main.go:141] libmachine: Using SSH client type: native
	I0819 19:32:29.802470  445411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.232 22 <nil> <nil>}
	I0819 19:32:29.802492  445411 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-125279 && echo "newest-cni-125279" | sudo tee /etc/hostname
	I0819 19:32:29.914936  445411 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-125279
	
	I0819 19:32:29.914992  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHHostname
	I0819 19:32:29.917969  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:29.918378  445411 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:32:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:32:29.918409  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:29.918597  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHPort
	I0819 19:32:29.918809  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:32:29.919019  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:32:29.919218  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHUsername
	I0819 19:32:29.919449  445411 main.go:141] libmachine: Using SSH client type: native
	I0819 19:32:29.919650  445411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.232 22 <nil> <nil>}
	I0819 19:32:29.919689  445411 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-125279' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-125279/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-125279' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 19:32:30.037919  445411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:32:30.037960  445411 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19468-372744/.minikube CaCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19468-372744/.minikube}
	I0819 19:32:30.038027  445411 buildroot.go:174] setting up certificates
	I0819 19:32:30.038057  445411 provision.go:84] configureAuth start
	I0819 19:32:30.038078  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetMachineName
	I0819 19:32:30.038465  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetIP
	I0819 19:32:30.041517  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:30.041938  445411 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:32:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:32:30.041959  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:30.042125  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHHostname
	I0819 19:32:30.044600  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:30.044956  445411 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:32:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:32:30.044983  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:30.045163  445411 provision.go:143] copyHostCerts
	I0819 19:32:30.045241  445411 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem, removing ...
	I0819 19:32:30.045260  445411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 19:32:30.045354  445411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem (1123 bytes)
	I0819 19:32:30.045484  445411 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem, removing ...
	I0819 19:32:30.045497  445411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 19:32:30.045536  445411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem (1675 bytes)
	I0819 19:32:30.045646  445411 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem, removing ...
	I0819 19:32:30.045664  445411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 19:32:30.045694  445411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem (1082 bytes)
	I0819 19:32:30.045779  445411 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem org=jenkins.newest-cni-125279 san=[127.0.0.1 192.168.50.232 localhost minikube newest-cni-125279]
	I0819 19:32:30.126262  445411 provision.go:177] copyRemoteCerts
	I0819 19:32:30.126347  445411 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 19:32:30.126382  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHHostname
	I0819 19:32:30.129167  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:30.129464  445411 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:32:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:32:30.129498  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:30.129651  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHPort
	I0819 19:32:30.129892  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:32:30.130084  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHUsername
	I0819 19:32:30.130252  445411 sshutil.go:53] new ssh client: &{IP:192.168.50.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/newest-cni-125279/id_rsa Username:docker}
	I0819 19:32:30.214589  445411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 19:32:30.239778  445411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0819 19:32:30.268214  445411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 19:32:30.295326  445411 provision.go:87] duration metric: took 257.24866ms to configureAuth
	I0819 19:32:30.295359  445411 buildroot.go:189] setting minikube options for container-runtime
	I0819 19:32:30.295543  445411 config.go:182] Loaded profile config "newest-cni-125279": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:32:30.295622  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHHostname
	I0819 19:32:30.298361  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:30.298811  445411 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:32:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:32:30.298841  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:30.299069  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHPort
	I0819 19:32:30.299277  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:32:30.299475  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:32:30.299627  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHUsername
	I0819 19:32:30.299821  445411 main.go:141] libmachine: Using SSH client type: native
	I0819 19:32:30.299995  445411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.232 22 <nil> <nil>}
	I0819 19:32:30.300015  445411 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 19:32:30.572732  445411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 19:32:30.572787  445411 main.go:141] libmachine: Checking connection to Docker...
	I0819 19:32:30.572799  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetURL
	I0819 19:32:30.574408  445411 main.go:141] libmachine: (newest-cni-125279) DBG | Using libvirt version 6000000
	I0819 19:32:30.577257  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:30.577595  445411 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:32:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:32:30.577635  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:30.577748  445411 main.go:141] libmachine: Docker is up and running!
	I0819 19:32:30.577763  445411 main.go:141] libmachine: Reticulating splines...
	I0819 19:32:30.577771  445411 client.go:171] duration metric: took 28.162840157s to LocalClient.Create
	I0819 19:32:30.577792  445411 start.go:167] duration metric: took 28.162901607s to libmachine.API.Create "newest-cni-125279"
	I0819 19:32:30.577812  445411 start.go:293] postStartSetup for "newest-cni-125279" (driver="kvm2")
	I0819 19:32:30.577825  445411 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 19:32:30.577842  445411 main.go:141] libmachine: (newest-cni-125279) Calling .DriverName
	I0819 19:32:30.578092  445411 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 19:32:30.578115  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHHostname
	I0819 19:32:30.580377  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:30.580726  445411 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:32:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:32:30.580756  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:30.580904  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHPort
	I0819 19:32:30.581126  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:32:30.581304  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHUsername
	I0819 19:32:30.581460  445411 sshutil.go:53] new ssh client: &{IP:192.168.50.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/newest-cni-125279/id_rsa Username:docker}
	I0819 19:32:30.667321  445411 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 19:32:30.672199  445411 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 19:32:30.672247  445411 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/addons for local assets ...
	I0819 19:32:30.672316  445411 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/files for local assets ...
	I0819 19:32:30.672408  445411 filesync.go:149] local asset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> 3800092.pem in /etc/ssl/certs
	I0819 19:32:30.672537  445411 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 19:32:30.683683  445411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:32:30.707900  445411 start.go:296] duration metric: took 130.068757ms for postStartSetup
	I0819 19:32:30.707971  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetConfigRaw
	I0819 19:32:30.708588  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetIP
	I0819 19:32:30.711485  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:30.711934  445411 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:32:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:32:30.711975  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:30.712442  445411 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/newest-cni-125279/config.json ...
	I0819 19:32:30.712629  445411 start.go:128] duration metric: took 28.3165109s to createHost
	I0819 19:32:30.712666  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHHostname
	I0819 19:32:30.715033  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:30.715452  445411 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:32:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:32:30.715498  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:30.715655  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHPort
	I0819 19:32:30.715879  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:32:30.716044  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:32:30.716240  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHUsername
	I0819 19:32:30.716377  445411 main.go:141] libmachine: Using SSH client type: native
	I0819 19:32:30.716569  445411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.232 22 <nil> <nil>}
	I0819 19:32:30.716579  445411 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 19:32:30.816408  445411 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724095950.795085288
	
	I0819 19:32:30.816440  445411 fix.go:216] guest clock: 1724095950.795085288
	I0819 19:32:30.816450  445411 fix.go:229] Guest: 2024-08-19 19:32:30.795085288 +0000 UTC Remote: 2024-08-19 19:32:30.712653058 +0000 UTC m=+28.433473700 (delta=82.43223ms)
	I0819 19:32:30.816484  445411 fix.go:200] guest clock delta is within tolerance: 82.43223ms
	I0819 19:32:30.816495  445411 start.go:83] releasing machines lock for "newest-cni-125279", held for 28.420486595s
	I0819 19:32:30.816526  445411 main.go:141] libmachine: (newest-cni-125279) Calling .DriverName
	I0819 19:32:30.816873  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetIP
	I0819 19:32:30.819475  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:30.819798  445411 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:32:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:32:30.819824  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:30.819984  445411 main.go:141] libmachine: (newest-cni-125279) Calling .DriverName
	I0819 19:32:30.820624  445411 main.go:141] libmachine: (newest-cni-125279) Calling .DriverName
	I0819 19:32:30.820800  445411 main.go:141] libmachine: (newest-cni-125279) Calling .DriverName
	I0819 19:32:30.820900  445411 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 19:32:30.820951  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHHostname
	I0819 19:32:30.821046  445411 ssh_runner.go:195] Run: cat /version.json
	I0819 19:32:30.821075  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHHostname
	I0819 19:32:30.823707  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:30.824027  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:30.824060  445411 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:32:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:32:30.824083  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:30.824214  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHPort
	I0819 19:32:30.824402  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:32:30.824413  445411 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:32:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:32:30.824435  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:30.824618  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHPort
	I0819 19:32:30.824628  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHUsername
	I0819 19:32:30.824782  445411 sshutil.go:53] new ssh client: &{IP:192.168.50.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/newest-cni-125279/id_rsa Username:docker}
	I0819 19:32:30.824845  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHKeyPath
	I0819 19:32:30.825010  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetSSHUsername
	I0819 19:32:30.825175  445411 sshutil.go:53] new ssh client: &{IP:192.168.50.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/newest-cni-125279/id_rsa Username:docker}
	I0819 19:32:30.897078  445411 ssh_runner.go:195] Run: systemctl --version
	I0819 19:32:30.924103  445411 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 19:32:31.098786  445411 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 19:32:31.105730  445411 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 19:32:31.105802  445411 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 19:32:31.124297  445411 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 19:32:31.124330  445411 start.go:495] detecting cgroup driver to use...
	I0819 19:32:31.124435  445411 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 19:32:31.142781  445411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 19:32:31.158044  445411 docker.go:217] disabling cri-docker service (if available) ...
	I0819 19:32:31.158104  445411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 19:32:31.172659  445411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 19:32:31.187214  445411 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 19:32:31.299769  445411 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 19:32:31.461533  445411 docker.go:233] disabling docker service ...
	I0819 19:32:31.461615  445411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 19:32:31.476486  445411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 19:32:31.490797  445411 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 19:32:31.608782  445411 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 19:32:31.742865  445411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 19:32:31.758761  445411 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 19:32:31.778934  445411 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 19:32:31.778996  445411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:32:31.790652  445411 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 19:32:31.790725  445411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:32:31.802633  445411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:32:31.813033  445411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:32:31.826216  445411 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 19:32:31.836768  445411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:32:31.848190  445411 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:32:31.865498  445411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:32:31.875948  445411 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 19:32:31.886230  445411 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 19:32:31.886291  445411 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 19:32:31.900798  445411 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 19:32:31.910083  445411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:32:32.042525  445411 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 19:32:32.194892  445411 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 19:32:32.194987  445411 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 19:32:32.200584  445411 start.go:563] Will wait 60s for crictl version
	I0819 19:32:32.200664  445411 ssh_runner.go:195] Run: which crictl
	I0819 19:32:32.204682  445411 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 19:32:32.252098  445411 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 19:32:32.252211  445411 ssh_runner.go:195] Run: crio --version
	I0819 19:32:32.285365  445411 ssh_runner.go:195] Run: crio --version
	I0819 19:32:32.318835  445411 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 19:32:32.320145  445411 main.go:141] libmachine: (newest-cni-125279) Calling .GetIP
	I0819 19:32:32.322889  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:32.323215  445411 main.go:141] libmachine: (newest-cni-125279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:45:fc", ip: ""} in network mk-newest-cni-125279: {Iface:virbr2 ExpiryTime:2024-08-19 20:32:17 +0000 UTC Type:0 Mac:52:54:00:65:45:fc Iaid: IPaddr:192.168.50.232 Prefix:24 Hostname:newest-cni-125279 Clientid:01:52:54:00:65:45:fc}
	I0819 19:32:32.323238  445411 main.go:141] libmachine: (newest-cni-125279) DBG | domain newest-cni-125279 has defined IP address 192.168.50.232 and MAC address 52:54:00:65:45:fc in network mk-newest-cni-125279
	I0819 19:32:32.323423  445411 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0819 19:32:32.327759  445411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:32:32.341796  445411 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	
	
	==> CRI-O <==
	Aug 19 19:32:34 no-preload-278232 crio[730]: time="2024-08-19 19:32:34.199414649Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=659ce9ec-863f-4f8e-b59b-28cdb433db99 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:32:34 no-preload-278232 crio[730]: time="2024-08-19 19:32:34.199592636Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd,PodSandboxId:0a0904912f9d1ec30f183f6dc2a4a978a812a54d7567d6009ba727db55d1bdd0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724094844169322731,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24766475-1a5b-4f1a-9350-3e891b5272cc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddf310788bc301171c712e1c8fa8d1e15b7f3597213ff1831e1df21f82a06aad,PodSandboxId:ddcc63d3b2d0261556759c2a90de5c2a60a41c054dab36b627f123bde0c70a7f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724094823934448867,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1cfd7b93-e926-4ffd-93f3-8e0f9d0d382c,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa,PodSandboxId:483740644dca99e5dad0d73df753462357782d8dce4e00f5f128e873a5ed1857,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724094820764207455,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-22lbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8a5cabd-41d4-41cb-91c1-2db1f3471db3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6,PodSandboxId:0a0904912f9d1ec30f183f6dc2a4a978a812a54d7567d6009ba727db55d1bdd0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724094813357247280,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
4766475-1a5b-4f1a-9350-3e891b5272cc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a,PodSandboxId:d12040956306fe1996c8fd63d665b3fa8ef5971ae8f159bfb02265f834d22f6e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724094813417123420,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rcf49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85d5814a-1ba9-46be-ab11-17bf40c0f0
29,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f,PodSandboxId:147c748ad560c6509d9c63140135061c77a73543e173fefb595b86c17686ee3a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724094809606945404,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-278232,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b7b93e2ee261f2b15c9a4518a7a53db,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a,PodSandboxId:a45488cfda6169d18ff350bcc851621c0f1ffa780fa5e78cc370b1cfd51871c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724094809639535991,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-278232,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47880a4872cf6261f8f118c958bba0f1,},Annotations:map[string]string{io.kubernetes.containe
r.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094,PodSandboxId:f5d20a4943041665d7b7508782190955db5244b6ccd2d33c0939c602f6543c81,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724094809579476955,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-278232,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc2710bbd7c397cccb826f5bab023f24,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0
944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346,PodSandboxId:b5702869c384335cd8f5ac98c625b2d394a097bdbf336d29a84450ae213a4c7f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724094809574956402,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-278232,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fb7d810b3d18af9a02af1eab5fdf39a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=659ce9ec-863f-4f8e-b59b-28cdb433db99 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:32:34 no-preload-278232 crio[730]: time="2024-08-19 19:32:34.235365062Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=34d78b73-1c73-4305-ac6d-d947a69b42d8 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:32:34 no-preload-278232 crio[730]: time="2024-08-19 19:32:34.235481170Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=34d78b73-1c73-4305-ac6d-d947a69b42d8 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:32:34 no-preload-278232 crio[730]: time="2024-08-19 19:32:34.237508260Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8eb8a04e-911c-47f1-8992-14bb4b881e05 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:32:34 no-preload-278232 crio[730]: time="2024-08-19 19:32:34.238381338Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095954238352116,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8eb8a04e-911c-47f1-8992-14bb4b881e05 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:32:34 no-preload-278232 crio[730]: time="2024-08-19 19:32:34.239356268Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ff69dfc7-863b-4538-84f2-08e40ef88e27 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:32:34 no-preload-278232 crio[730]: time="2024-08-19 19:32:34.239463228Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ff69dfc7-863b-4538-84f2-08e40ef88e27 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:32:34 no-preload-278232 crio[730]: time="2024-08-19 19:32:34.239848718Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd,PodSandboxId:0a0904912f9d1ec30f183f6dc2a4a978a812a54d7567d6009ba727db55d1bdd0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724094844169322731,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24766475-1a5b-4f1a-9350-3e891b5272cc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddf310788bc301171c712e1c8fa8d1e15b7f3597213ff1831e1df21f82a06aad,PodSandboxId:ddcc63d3b2d0261556759c2a90de5c2a60a41c054dab36b627f123bde0c70a7f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724094823934448867,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1cfd7b93-e926-4ffd-93f3-8e0f9d0d382c,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa,PodSandboxId:483740644dca99e5dad0d73df753462357782d8dce4e00f5f128e873a5ed1857,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724094820764207455,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-22lbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8a5cabd-41d4-41cb-91c1-2db1f3471db3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6,PodSandboxId:0a0904912f9d1ec30f183f6dc2a4a978a812a54d7567d6009ba727db55d1bdd0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724094813357247280,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
4766475-1a5b-4f1a-9350-3e891b5272cc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a,PodSandboxId:d12040956306fe1996c8fd63d665b3fa8ef5971ae8f159bfb02265f834d22f6e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724094813417123420,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rcf49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85d5814a-1ba9-46be-ab11-17bf40c0f0
29,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f,PodSandboxId:147c748ad560c6509d9c63140135061c77a73543e173fefb595b86c17686ee3a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724094809606945404,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-278232,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b7b93e2ee261f2b15c9a4518a7a53db,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a,PodSandboxId:a45488cfda6169d18ff350bcc851621c0f1ffa780fa5e78cc370b1cfd51871c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724094809639535991,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-278232,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47880a4872cf6261f8f118c958bba0f1,},Annotations:map[string]string{io.kubernetes.containe
r.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094,PodSandboxId:f5d20a4943041665d7b7508782190955db5244b6ccd2d33c0939c602f6543c81,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724094809579476955,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-278232,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc2710bbd7c397cccb826f5bab023f24,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0
944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346,PodSandboxId:b5702869c384335cd8f5ac98c625b2d394a097bdbf336d29a84450ae213a4c7f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724094809574956402,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-278232,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fb7d810b3d18af9a02af1eab5fdf39a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ff69dfc7-863b-4538-84f2-08e40ef88e27 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:32:34 no-preload-278232 crio[730]: time="2024-08-19 19:32:34.289261402Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=59142ebf-fe8f-4597-ac83-f0cd62ef2179 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:32:34 no-preload-278232 crio[730]: time="2024-08-19 19:32:34.289359960Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=59142ebf-fe8f-4597-ac83-f0cd62ef2179 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:32:34 no-preload-278232 crio[730]: time="2024-08-19 19:32:34.291316865Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7f706b2f-1e6e-42d8-9a1a-0b5e975419cf name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:32:34 no-preload-278232 crio[730]: time="2024-08-19 19:32:34.291743761Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095954291720196,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7f706b2f-1e6e-42d8-9a1a-0b5e975419cf name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:32:34 no-preload-278232 crio[730]: time="2024-08-19 19:32:34.292364480Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9c2f54d5-7d62-4a82-aec8-d59c8b568491 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:32:34 no-preload-278232 crio[730]: time="2024-08-19 19:32:34.292438469Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9c2f54d5-7d62-4a82-aec8-d59c8b568491 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:32:34 no-preload-278232 crio[730]: time="2024-08-19 19:32:34.292713117Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd,PodSandboxId:0a0904912f9d1ec30f183f6dc2a4a978a812a54d7567d6009ba727db55d1bdd0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724094844169322731,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24766475-1a5b-4f1a-9350-3e891b5272cc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddf310788bc301171c712e1c8fa8d1e15b7f3597213ff1831e1df21f82a06aad,PodSandboxId:ddcc63d3b2d0261556759c2a90de5c2a60a41c054dab36b627f123bde0c70a7f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724094823934448867,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1cfd7b93-e926-4ffd-93f3-8e0f9d0d382c,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa,PodSandboxId:483740644dca99e5dad0d73df753462357782d8dce4e00f5f128e873a5ed1857,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724094820764207455,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-22lbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8a5cabd-41d4-41cb-91c1-2db1f3471db3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6,PodSandboxId:0a0904912f9d1ec30f183f6dc2a4a978a812a54d7567d6009ba727db55d1bdd0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724094813357247280,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
4766475-1a5b-4f1a-9350-3e891b5272cc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a,PodSandboxId:d12040956306fe1996c8fd63d665b3fa8ef5971ae8f159bfb02265f834d22f6e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724094813417123420,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rcf49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85d5814a-1ba9-46be-ab11-17bf40c0f0
29,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f,PodSandboxId:147c748ad560c6509d9c63140135061c77a73543e173fefb595b86c17686ee3a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724094809606945404,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-278232,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b7b93e2ee261f2b15c9a4518a7a53db,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a,PodSandboxId:a45488cfda6169d18ff350bcc851621c0f1ffa780fa5e78cc370b1cfd51871c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724094809639535991,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-278232,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47880a4872cf6261f8f118c958bba0f1,},Annotations:map[string]string{io.kubernetes.containe
r.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094,PodSandboxId:f5d20a4943041665d7b7508782190955db5244b6ccd2d33c0939c602f6543c81,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724094809579476955,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-278232,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc2710bbd7c397cccb826f5bab023f24,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0
944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346,PodSandboxId:b5702869c384335cd8f5ac98c625b2d394a097bdbf336d29a84450ae213a4c7f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724094809574956402,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-278232,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fb7d810b3d18af9a02af1eab5fdf39a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9c2f54d5-7d62-4a82-aec8-d59c8b568491 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:32:34 no-preload-278232 crio[730]: time="2024-08-19 19:32:34.329881598Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=504c8ca8-38e8-43bf-a426-2c216d9aaf76 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:32:34 no-preload-278232 crio[730]: time="2024-08-19 19:32:34.329976261Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=504c8ca8-38e8-43bf-a426-2c216d9aaf76 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:32:34 no-preload-278232 crio[730]: time="2024-08-19 19:32:34.331595551Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ccac6662-2aea-4343-ad43-f1783698706a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:32:34 no-preload-278232 crio[730]: time="2024-08-19 19:32:34.332894304Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095954332859951,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ccac6662-2aea-4343-ad43-f1783698706a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:32:34 no-preload-278232 crio[730]: time="2024-08-19 19:32:34.333392973Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5c8adeb4-f52f-4331-b3d8-689f16fc9277 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:32:34 no-preload-278232 crio[730]: time="2024-08-19 19:32:34.333445705Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5c8adeb4-f52f-4331-b3d8-689f16fc9277 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:32:34 no-preload-278232 crio[730]: time="2024-08-19 19:32:34.333628642Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd,PodSandboxId:0a0904912f9d1ec30f183f6dc2a4a978a812a54d7567d6009ba727db55d1bdd0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724094844169322731,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24766475-1a5b-4f1a-9350-3e891b5272cc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddf310788bc301171c712e1c8fa8d1e15b7f3597213ff1831e1df21f82a06aad,PodSandboxId:ddcc63d3b2d0261556759c2a90de5c2a60a41c054dab36b627f123bde0c70a7f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724094823934448867,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1cfd7b93-e926-4ffd-93f3-8e0f9d0d382c,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa,PodSandboxId:483740644dca99e5dad0d73df753462357782d8dce4e00f5f128e873a5ed1857,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724094820764207455,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-22lbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8a5cabd-41d4-41cb-91c1-2db1f3471db3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6,PodSandboxId:0a0904912f9d1ec30f183f6dc2a4a978a812a54d7567d6009ba727db55d1bdd0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724094813357247280,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
4766475-1a5b-4f1a-9350-3e891b5272cc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a,PodSandboxId:d12040956306fe1996c8fd63d665b3fa8ef5971ae8f159bfb02265f834d22f6e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724094813417123420,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rcf49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85d5814a-1ba9-46be-ab11-17bf40c0f0
29,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f,PodSandboxId:147c748ad560c6509d9c63140135061c77a73543e173fefb595b86c17686ee3a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724094809606945404,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-278232,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b7b93e2ee261f2b15c9a4518a7a53db,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a,PodSandboxId:a45488cfda6169d18ff350bcc851621c0f1ffa780fa5e78cc370b1cfd51871c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724094809639535991,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-278232,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47880a4872cf6261f8f118c958bba0f1,},Annotations:map[string]string{io.kubernetes.containe
r.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094,PodSandboxId:f5d20a4943041665d7b7508782190955db5244b6ccd2d33c0939c602f6543c81,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724094809579476955,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-278232,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc2710bbd7c397cccb826f5bab023f24,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0
944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346,PodSandboxId:b5702869c384335cd8f5ac98c625b2d394a097bdbf336d29a84450ae213a4c7f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724094809574956402,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-278232,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fb7d810b3d18af9a02af1eab5fdf39a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5c8adeb4-f52f-4331-b3d8-689f16fc9277 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:32:34 no-preload-278232 crio[730]: time="2024-08-19 19:32:34.336441784Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=8d043459-b5bb-4d28-8a41-02a8000ef1a8 name=/runtime.v1.RuntimeService/Status
	Aug 19 19:32:34 no-preload-278232 crio[730]: time="2024-08-19 19:32:34.336509606Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=8d043459-b5bb-4d28-8a41-02a8000ef1a8 name=/runtime.v1.RuntimeService/Status
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fd16c88623359       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Running             storage-provisioner       2                   0a0904912f9d1       storage-provisioner
	ddf310788bc30       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   18 minutes ago      Running             busybox                   1                   ddcc63d3b2d02       busybox
	6ad390cacd3d8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      18 minutes ago      Running             coredns                   1                   483740644dca9       coredns-6f6b679f8f-22lbt
	236b4296ad713       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      19 minutes ago      Running             kube-proxy                1                   d12040956306f       kube-proxy-rcf49
	482a17643a2de       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Exited              storage-provisioner       1                   0a0904912f9d1       storage-provisioner
	27d104597d0ca       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      19 minutes ago      Running             etcd                      1                   a45488cfda616       etcd-no-preload-278232
	123f84ccdc9cf       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      19 minutes ago      Running             kube-scheduler            1                   147c748ad560c       kube-scheduler-no-preload-278232
	cdac290df2d44       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      19 minutes ago      Running             kube-apiserver            1                   f5d20a4943041       kube-apiserver-no-preload-278232
	390aeac356048       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      19 minutes ago      Running             kube-controller-manager   1                   b5702869c3843       kube-controller-manager-no-preload-278232
	
	
	==> coredns [6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:42720 - 61003 "HINFO IN 4589887553472215587.3096284654120628867. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01082479s
	
	
	==> describe nodes <==
	Name:               no-preload-278232
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-278232
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411
	                    minikube.k8s.io/name=no-preload-278232
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T19_03_44_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 19:03:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-278232
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 19:32:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 19:29:21 +0000   Mon, 19 Aug 2024 19:03:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 19:29:21 +0000   Mon, 19 Aug 2024 19:03:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 19:29:21 +0000   Mon, 19 Aug 2024 19:03:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 19:29:21 +0000   Mon, 19 Aug 2024 19:13:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.106
	  Hostname:    no-preload-278232
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a659604399814453bc7f22780393e1fd
	  System UUID:                a6596043-9981-4453-bc7f-22780393e1fd
	  Boot ID:                    1511af4e-0834-4565-8331-154ab7841607
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-6f6b679f8f-22lbt                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-no-preload-278232                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-no-preload-278232             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-no-preload-278232    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-rcf49                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-no-preload-278232             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-6867b74b74-vxwrs              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         27m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 19m                kube-proxy       
	  Normal  NodeHasNoDiskPressure    28m (x8 over 28m)  kubelet          Node no-preload-278232 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  28m (x8 over 28m)  kubelet          Node no-preload-278232 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     28m (x7 over 28m)  kubelet          Node no-preload-278232 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     28m                kubelet          Node no-preload-278232 status is now: NodeHasSufficientPID
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node no-preload-278232 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node no-preload-278232 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                28m                kubelet          Node no-preload-278232 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node no-preload-278232 event: Registered Node no-preload-278232 in Controller
	  Normal  CIDRAssignmentFailed     28m                cidrAllocator    Node no-preload-278232 status is now: CIDRAssignmentFailed
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node no-preload-278232 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node no-preload-278232 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node no-preload-278232 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18m                node-controller  Node no-preload-278232 event: Registered Node no-preload-278232 in Controller
	
	
	==> dmesg <==
	[Aug19 19:12] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052265] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041214] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.094927] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Aug19 19:13] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.604362] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.743009] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.062647] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055382] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +0.182332] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +0.130167] systemd-fstab-generator[685]: Ignoring "noauto" option for root device
	[  +0.283984] systemd-fstab-generator[714]: Ignoring "noauto" option for root device
	[ +15.882927] systemd-fstab-generator[1314]: Ignoring "noauto" option for root device
	[  +0.068974] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.829138] systemd-fstab-generator[1434]: Ignoring "noauto" option for root device
	[  +4.079921] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.938027] systemd-fstab-generator[2064]: Ignoring "noauto" option for root device
	[  +3.313906] kauditd_printk_skb: 61 callbacks suppressed
	[Aug19 19:14] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a] <==
	{"level":"info","ts":"2024-08-19T19:13:30.047304Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.106:2380"}
	{"level":"info","ts":"2024-08-19T19:13:30.047782Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"133f99d1dc1797cc","initial-advertise-peer-urls":["https://192.168.39.106:2380"],"listen-peer-urls":["https://192.168.39.106:2380"],"advertise-client-urls":["https://192.168.39.106:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.106:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-19T19:13:30.047833Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-19T19:13:30.853753Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"133f99d1dc1797cc is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-19T19:13:30.854349Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"133f99d1dc1797cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-19T19:13:30.854394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"133f99d1dc1797cc received MsgPreVoteResp from 133f99d1dc1797cc at term 2"}
	{"level":"info","ts":"2024-08-19T19:13:30.854433Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"133f99d1dc1797cc became candidate at term 3"}
	{"level":"info","ts":"2024-08-19T19:13:30.854458Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"133f99d1dc1797cc received MsgVoteResp from 133f99d1dc1797cc at term 3"}
	{"level":"info","ts":"2024-08-19T19:13:30.854488Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"133f99d1dc1797cc became leader at term 3"}
	{"level":"info","ts":"2024-08-19T19:13:30.854514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 133f99d1dc1797cc elected leader 133f99d1dc1797cc at term 3"}
	{"level":"info","ts":"2024-08-19T19:13:30.901288Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"133f99d1dc1797cc","local-member-attributes":"{Name:no-preload-278232 ClientURLs:[https://192.168.39.106:2379]}","request-path":"/0/members/133f99d1dc1797cc/attributes","cluster-id":"db63b0e3647a827","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-19T19:13:30.901565Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T19:13:30.902191Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T19:13:30.903751Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T19:13:30.903785Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T19:13:30.904937Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T19:13:30.905250Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T19:13:30.909131Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T19:13:30.910180Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.106:2379"}
	{"level":"info","ts":"2024-08-19T19:23:30.941511Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":881}
	{"level":"info","ts":"2024-08-19T19:23:30.953209Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":881,"took":"10.773333ms","hash":1012546118,"current-db-size-bytes":2748416,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2748416,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-08-19T19:23:30.953324Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1012546118,"revision":881,"compact-revision":-1}
	{"level":"info","ts":"2024-08-19T19:28:30.953621Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1124}
	{"level":"info","ts":"2024-08-19T19:28:30.958277Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1124,"took":"3.75353ms","hash":3547541127,"current-db-size-bytes":2748416,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1646592,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-08-19T19:28:30.958383Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3547541127,"revision":1124,"compact-revision":881}
	
	
	==> kernel <==
	 19:32:34 up 19 min,  0 users,  load average: 0.13, 0.15, 0.17
	Linux no-preload-278232 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094] <==
	W0819 19:28:33.522211       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 19:28:33.522420       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0819 19:28:33.523457       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 19:28:33.523548       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0819 19:29:33.524605       1 handler_proxy.go:99] no RequestInfo found in the context
	W0819 19:29:33.524788       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 19:29:33.524902       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0819 19:29:33.525059       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0819 19:29:33.526087       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 19:29:33.526125       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0819 19:31:33.526697       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 19:31:33.526812       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0819 19:31:33.526718       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 19:31:33.526857       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0819 19:31:33.528133       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 19:31:33.528208       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346] <==
	E0819 19:27:06.145396       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:27:06.654611       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 19:27:36.154533       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:27:36.663912       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 19:28:06.161973       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:28:06.674037       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 19:28:36.168795       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:28:36.682886       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 19:29:06.175190       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:29:06.690995       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0819 19:29:21.689771       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-278232"
	E0819 19:29:36.183354       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:29:36.699855       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0819 19:29:59.983447       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="262.918µs"
	E0819 19:30:06.190300       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:30:06.709459       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0819 19:30:10.981989       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="112.671µs"
	E0819 19:30:36.197391       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:30:36.717155       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 19:31:06.202899       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:31:06.726855       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 19:31:36.214512       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:31:36.733751       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 19:32:06.222554       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 19:32:06.743000       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 19:13:33.696193       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 19:13:33.706203       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.106"]
	E0819 19:13:33.706278       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 19:13:33.745044       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 19:13:33.745097       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 19:13:33.745126       1 server_linux.go:169] "Using iptables Proxier"
	I0819 19:13:33.747538       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 19:13:33.747880       1 server.go:483] "Version info" version="v1.31.0"
	I0819 19:13:33.747920       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 19:13:33.749618       1 config.go:197] "Starting service config controller"
	I0819 19:13:33.749695       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 19:13:33.749714       1 config.go:104] "Starting endpoint slice config controller"
	I0819 19:13:33.749718       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 19:13:33.751397       1 config.go:326] "Starting node config controller"
	I0819 19:13:33.751453       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 19:13:33.850707       1 shared_informer.go:320] Caches are synced for service config
	I0819 19:13:33.850840       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 19:13:33.851817       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f] <==
	I0819 19:13:30.715440       1 serving.go:386] Generated self-signed cert in-memory
	W0819 19:13:32.423973       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0819 19:13:32.424161       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0819 19:13:32.424253       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0819 19:13:32.424287       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0819 19:13:32.502350       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0819 19:13:32.502478       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 19:13:32.504583       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0819 19:13:32.506966       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0819 19:13:32.509711       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 19:13:32.506985       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0819 19:13:32.610988       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 19:31:29 no-preload-278232 kubelet[1441]: E0819 19:31:29.219128    1441 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095889218814065,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:31:29 no-preload-278232 kubelet[1441]: E0819 19:31:29.219170    1441 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095889218814065,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:31:31 no-preload-278232 kubelet[1441]: E0819 19:31:31.965685    1441 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-vxwrs" podUID="e8b74128-b393-4f0f-90fe-e05f20d54acd"
	Aug 19 19:31:39 no-preload-278232 kubelet[1441]: E0819 19:31:39.221614    1441 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095899221124772,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:31:39 no-preload-278232 kubelet[1441]: E0819 19:31:39.221708    1441 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095899221124772,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:31:46 no-preload-278232 kubelet[1441]: E0819 19:31:46.965973    1441 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-vxwrs" podUID="e8b74128-b393-4f0f-90fe-e05f20d54acd"
	Aug 19 19:31:49 no-preload-278232 kubelet[1441]: E0819 19:31:49.223709    1441 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095909223018266,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:31:49 no-preload-278232 kubelet[1441]: E0819 19:31:49.224032    1441 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095909223018266,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:31:59 no-preload-278232 kubelet[1441]: E0819 19:31:59.226018    1441 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095919225450811,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:31:59 no-preload-278232 kubelet[1441]: E0819 19:31:59.226145    1441 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095919225450811,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:32:00 no-preload-278232 kubelet[1441]: E0819 19:32:00.966432    1441 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-vxwrs" podUID="e8b74128-b393-4f0f-90fe-e05f20d54acd"
	Aug 19 19:32:09 no-preload-278232 kubelet[1441]: E0819 19:32:09.228542    1441 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095929228206303,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:32:09 no-preload-278232 kubelet[1441]: E0819 19:32:09.228575    1441 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095929228206303,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:32:12 no-preload-278232 kubelet[1441]: E0819 19:32:12.966425    1441 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-vxwrs" podUID="e8b74128-b393-4f0f-90fe-e05f20d54acd"
	Aug 19 19:32:19 no-preload-278232 kubelet[1441]: E0819 19:32:19.230890    1441 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095939230233109,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:32:19 no-preload-278232 kubelet[1441]: E0819 19:32:19.231168    1441 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095939230233109,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:32:23 no-preload-278232 kubelet[1441]: E0819 19:32:23.965591    1441 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-vxwrs" podUID="e8b74128-b393-4f0f-90fe-e05f20d54acd"
	Aug 19 19:32:28 no-preload-278232 kubelet[1441]: E0819 19:32:28.984469    1441 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 19:32:28 no-preload-278232 kubelet[1441]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 19:32:28 no-preload-278232 kubelet[1441]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 19:32:28 no-preload-278232 kubelet[1441]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 19:32:28 no-preload-278232 kubelet[1441]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 19:32:29 no-preload-278232 kubelet[1441]: E0819 19:32:29.233047    1441 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095949232708335,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:32:29 no-preload-278232 kubelet[1441]: E0819 19:32:29.233132    1441 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095949232708335,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 19:32:34 no-preload-278232 kubelet[1441]: E0819 19:32:34.967048    1441 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-vxwrs" podUID="e8b74128-b393-4f0f-90fe-e05f20d54acd"
	
	
	==> storage-provisioner [482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6] <==
	I0819 19:13:33.624392       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0819 19:14:03.629608       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd] <==
	I0819 19:14:04.276605       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 19:14:04.286381       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 19:14:04.286480       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 19:14:21.692349       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 19:14:21.692512       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-278232_5fb11e2d-1d74-4fc5-a305-ed2c8e2d8a63!
	I0819 19:14:21.693776       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"68e9d6a8-f7ee-4060-9564-5e9b63dc1edd", APIVersion:"v1", ResourceVersion:"661", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-278232_5fb11e2d-1d74-4fc5-a305-ed2c8e2d8a63 became leader
	I0819 19:14:21.793062       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-278232_5fb11e2d-1d74-4fc5-a305-ed2c8e2d8a63!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-278232 -n no-preload-278232
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-278232 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-vxwrs
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-278232 describe pod metrics-server-6867b74b74-vxwrs
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-278232 describe pod metrics-server-6867b74b74-vxwrs: exit status 1 (80.217123ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-vxwrs" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-278232 describe pod metrics-server-6867b74b74-vxwrs: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (330.73s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (117.54s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
[previous warning repeated 20 more times]
E0819 19:30:24.365609  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/functional-499773/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
[previous warning repeated 43 more times]
E0819 19:31:07.808077  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/calico-571803/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
[previous warning repeated 20 more times]
E0819 19:31:29.164904  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/custom-flannel-571803/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.32:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.32:8443: connect: connection refused
[previous warning repeated 27 more times]
E0819 19:31:57.573714  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/flannel-571803/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-104669 -n old-k8s-version-104669
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-104669 -n old-k8s-version-104669: exit status 2 (242.189539ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-104669" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-104669 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-104669 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.463µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-104669 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-104669 -n old-k8s-version-104669
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-104669 -n old-k8s-version-104669: exit status 2 (231.93408ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-104669 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-104669 logs -n 25: (1.671790164s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-571803                           | enable-default-cni-571803    | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | sudo cat                                               |                              |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-571803                           | enable-default-cni-571803    | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | sudo containerd config dump                            |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-571803                           | enable-default-cni-571803    | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | sudo systemctl status crio                             |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-571803                           | enable-default-cni-571803    | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | sudo systemctl cat crio                                |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-571803                           | enable-default-cni-571803    | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |         |                     |                     |
	|         | \;                                                     |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-571803                           | enable-default-cni-571803    | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | sudo crio config                                       |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-571803                           | enable-default-cni-571803    | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	| delete  | -p                                                     | disable-driver-mounts-737091 | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:03 UTC |
	|         | disable-driver-mounts-737091                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-982795 | jenkins | v1.33.1 | 19 Aug 24 19:03 UTC | 19 Aug 24 19:04 UTC |
	|         | default-k8s-diff-port-982795                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-278232             | no-preload-278232            | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC | 19 Aug 24 19:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-278232                                   | no-preload-278232            | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-982795  | default-k8s-diff-port-982795 | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC | 19 Aug 24 19:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-982795 | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC |                     |
	|         | default-k8s-diff-port-982795                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-024748            | embed-certs-024748           | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC | 19 Aug 24 19:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-024748                                  | embed-certs-024748           | jenkins | v1.33.1 | 19 Aug 24 19:04 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-104669        | old-k8s-version-104669       | jenkins | v1.33.1 | 19 Aug 24 19:06 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-278232                  | no-preload-278232            | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-278232                                   | no-preload-278232            | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC | 19 Aug 24 19:18 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-982795       | default-k8s-diff-port-982795 | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-024748                 | embed-certs-024748           | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-982795 | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC | 19 Aug 24 19:17 UTC |
	|         | default-k8s-diff-port-982795                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-024748                                  | embed-certs-024748           | jenkins | v1.33.1 | 19 Aug 24 19:07 UTC | 19 Aug 24 19:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-104669                              | old-k8s-version-104669       | jenkins | v1.33.1 | 19 Aug 24 19:08 UTC | 19 Aug 24 19:08 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-104669             | old-k8s-version-104669       | jenkins | v1.33.1 | 19 Aug 24 19:08 UTC | 19 Aug 24 19:08 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-104669                              | old-k8s-version-104669       | jenkins | v1.33.1 | 19 Aug 24 19:08 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 19:08:30
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 19:08:30.532545  438716 out.go:345] Setting OutFile to fd 1 ...
	I0819 19:08:30.532649  438716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:08:30.532657  438716 out.go:358] Setting ErrFile to fd 2...
	I0819 19:08:30.532661  438716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:08:30.532811  438716 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-372744/.minikube/bin
	I0819 19:08:30.533379  438716 out.go:352] Setting JSON to false
	I0819 19:08:30.534373  438716 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10253,"bootTime":1724084257,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 19:08:30.534451  438716 start.go:139] virtualization: kvm guest
	I0819 19:08:30.536658  438716 out.go:177] * [old-k8s-version-104669] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 19:08:30.537921  438716 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 19:08:30.537959  438716 notify.go:220] Checking for updates...
	I0819 19:08:30.540501  438716 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 19:08:30.541864  438716 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 19:08:30.543170  438716 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 19:08:30.544395  438716 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 19:08:30.545614  438716 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 19:08:30.547072  438716 config.go:182] Loaded profile config "old-k8s-version-104669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0819 19:08:30.547468  438716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:08:30.547570  438716 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:08:30.563059  438716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34139
	I0819 19:08:30.563506  438716 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:08:30.564068  438716 main.go:141] libmachine: Using API Version  1
	I0819 19:08:30.564091  438716 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:08:30.564474  438716 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:08:30.564719  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:08:30.566599  438716 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0819 19:08:30.568124  438716 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 19:08:30.568503  438716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:08:30.568541  438716 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:08:30.583805  438716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35313
	I0819 19:08:30.584314  438716 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:08:30.584805  438716 main.go:141] libmachine: Using API Version  1
	I0819 19:08:30.584827  438716 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:08:30.585131  438716 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:08:30.585320  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:08:30.621020  438716 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 19:08:30.622137  438716 start.go:297] selected driver: kvm2
	I0819 19:08:30.622158  438716 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-104669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-104669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:08:30.622252  438716 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 19:08:30.622998  438716 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 19:08:30.623082  438716 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19468-372744/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 19:08:30.638616  438716 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 19:08:30.638998  438716 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 19:08:30.639047  438716 cni.go:84] Creating CNI manager for ""
	I0819 19:08:30.639059  438716 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:08:30.639097  438716 start.go:340] cluster config:
	{Name:old-k8s-version-104669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-104669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:08:30.639243  438716 iso.go:125] acquiring lock: {Name:mk4c0ac1c3202b1a296739df622960e7a0bd8566 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 19:08:30.641823  438716 out.go:177] * Starting "old-k8s-version-104669" primary control-plane node in "old-k8s-version-104669" cluster
	I0819 19:08:30.915976  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:08:30.643167  438716 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 19:08:30.643197  438716 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0819 19:08:30.643205  438716 cache.go:56] Caching tarball of preloaded images
	I0819 19:08:30.643300  438716 preload.go:172] Found /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 19:08:30.643311  438716 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0819 19:08:30.643409  438716 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/config.json ...
	I0819 19:08:30.643583  438716 start.go:360] acquireMachinesLock for old-k8s-version-104669: {Name:mk24ba67a747357e9ce40f1e460d2bb0bc59cc75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 19:08:33.988031  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:08:40.067999  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:08:43.140051  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:08:49.219991  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:08:52.292013  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:08:58.371952  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:01.444061  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:07.523958  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:10.595977  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:16.675955  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:19.748037  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:25.828064  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:28.899972  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:34.980044  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:38.052066  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:44.131960  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:47.203926  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:53.283992  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:09:56.355952  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:02.435994  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:05.508042  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:11.587960  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:14.660027  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:20.740007  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:23.811991  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:29.891998  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:32.963959  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:39.043942  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:42.116029  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:48.195984  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:51.267954  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:10:57.347922  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:00.419952  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:06.499978  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:09.572013  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:15.652066  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:18.724012  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:24.804001  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:27.875961  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:33.956046  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:37.027998  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:43.108014  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:46.179987  438001 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.106:22: connect: no route to host
	I0819 19:11:49.184190  438245 start.go:364] duration metric: took 4m21.835882225s to acquireMachinesLock for "default-k8s-diff-port-982795"
	I0819 19:11:49.184280  438245 start.go:96] Skipping create...Using existing machine configuration
	I0819 19:11:49.184296  438245 fix.go:54] fixHost starting: 
	I0819 19:11:49.184628  438245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:11:49.184661  438245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:11:49.200544  438245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38241
	I0819 19:11:49.200994  438245 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:11:49.201530  438245 main.go:141] libmachine: Using API Version  1
	I0819 19:11:49.201560  438245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:11:49.201953  438245 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:11:49.202151  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:11:49.202296  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetState
	I0819 19:11:49.203841  438245 fix.go:112] recreateIfNeeded on default-k8s-diff-port-982795: state=Stopped err=<nil>
	I0819 19:11:49.203875  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	W0819 19:11:49.204042  438245 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 19:11:49.205721  438245 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-982795" ...
	I0819 19:11:49.181717  438001 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:11:49.181755  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetMachineName
	I0819 19:11:49.182097  438001 buildroot.go:166] provisioning hostname "no-preload-278232"
	I0819 19:11:49.182131  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetMachineName
	I0819 19:11:49.182392  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:11:49.184006  438001 machine.go:96] duration metric: took 4m37.423775019s to provisionDockerMachine
	I0819 19:11:49.184078  438001 fix.go:56] duration metric: took 4m37.445408913s for fixHost
	I0819 19:11:49.184091  438001 start.go:83] releasing machines lock for "no-preload-278232", held for 4m37.44544277s
	W0819 19:11:49.184116  438001 start.go:714] error starting host: provision: host is not running
	W0819 19:11:49.184274  438001 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0819 19:11:49.184288  438001 start.go:729] Will try again in 5 seconds ...
	I0819 19:11:49.206739  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .Start
	I0819 19:11:49.206892  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Ensuring networks are active...
	I0819 19:11:49.207586  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Ensuring network default is active
	I0819 19:11:49.207947  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Ensuring network mk-default-k8s-diff-port-982795 is active
	I0819 19:11:49.208368  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Getting domain xml...
	I0819 19:11:49.209114  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Creating domain...
	I0819 19:11:50.421290  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting to get IP...
	I0819 19:11:50.422082  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:50.422490  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:50.422562  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:50.422473  439403 retry.go:31] will retry after 273.434317ms: waiting for machine to come up
	I0819 19:11:50.698167  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:50.698598  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:50.698635  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:50.698569  439403 retry.go:31] will retry after 367.841325ms: waiting for machine to come up
	I0819 19:11:51.068401  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:51.068996  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:51.069019  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:51.068942  439403 retry.go:31] will retry after 460.053559ms: waiting for machine to come up
	I0819 19:11:51.530228  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:51.530700  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:51.530730  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:51.530636  439403 retry.go:31] will retry after 498.222116ms: waiting for machine to come up
	I0819 19:11:52.030322  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:52.030771  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:52.030808  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:52.030710  439403 retry.go:31] will retry after 750.75175ms: waiting for machine to come up
	I0819 19:11:54.186765  438001 start.go:360] acquireMachinesLock for no-preload-278232: {Name:mk24ba67a747357e9ce40f1e460d2bb0bc59cc75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 19:11:52.782638  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:52.783001  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:52.783027  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:52.782952  439403 retry.go:31] will retry after 576.883195ms: waiting for machine to come up
	I0819 19:11:53.361702  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:53.362105  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:53.362138  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:53.362035  439403 retry.go:31] will retry after 900.512446ms: waiting for machine to come up
	I0819 19:11:54.264656  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:54.265032  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:54.265052  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:54.264984  439403 retry.go:31] will retry after 1.339005367s: waiting for machine to come up
	I0819 19:11:55.605816  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:55.606348  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:55.606378  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:55.606304  439403 retry.go:31] will retry after 1.517824531s: waiting for machine to come up
	I0819 19:11:57.126027  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:57.126400  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:57.126426  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:57.126340  439403 retry.go:31] will retry after 2.220939365s: waiting for machine to come up
	I0819 19:11:59.348649  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:11:59.349041  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:11:59.349072  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:11:59.348987  439403 retry.go:31] will retry after 2.830298687s: waiting for machine to come up
	I0819 19:12:02.182934  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:02.183398  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:12:02.183422  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:12:02.183348  439403 retry.go:31] will retry after 2.302725829s: waiting for machine to come up
	I0819 19:12:04.487648  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:04.488074  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | unable to find current IP address of domain default-k8s-diff-port-982795 in network mk-default-k8s-diff-port-982795
	I0819 19:12:04.488108  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | I0819 19:12:04.488016  439403 retry.go:31] will retry after 2.932250361s: waiting for machine to come up
	I0819 19:12:08.736669  438295 start.go:364] duration metric: took 4m39.596501254s to acquireMachinesLock for "embed-certs-024748"
	I0819 19:12:08.736755  438295 start.go:96] Skipping create...Using existing machine configuration
	I0819 19:12:08.736776  438295 fix.go:54] fixHost starting: 
	I0819 19:12:08.737277  438295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:08.737326  438295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:08.754873  438295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36829
	I0819 19:12:08.755301  438295 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:08.755839  438295 main.go:141] libmachine: Using API Version  1
	I0819 19:12:08.755866  438295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:08.756184  438295 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:08.756383  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:08.756525  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetState
	I0819 19:12:08.758092  438295 fix.go:112] recreateIfNeeded on embed-certs-024748: state=Stopped err=<nil>
	I0819 19:12:08.758134  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	W0819 19:12:08.758299  438295 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 19:12:08.760922  438295 out.go:177] * Restarting existing kvm2 VM for "embed-certs-024748" ...
	I0819 19:12:08.762335  438295 main.go:141] libmachine: (embed-certs-024748) Calling .Start
	I0819 19:12:08.762509  438295 main.go:141] libmachine: (embed-certs-024748) Ensuring networks are active...
	I0819 19:12:08.763274  438295 main.go:141] libmachine: (embed-certs-024748) Ensuring network default is active
	I0819 19:12:08.763647  438295 main.go:141] libmachine: (embed-certs-024748) Ensuring network mk-embed-certs-024748 is active
	I0819 19:12:08.764057  438295 main.go:141] libmachine: (embed-certs-024748) Getting domain xml...
	I0819 19:12:08.764765  438295 main.go:141] libmachine: (embed-certs-024748) Creating domain...
	I0819 19:12:07.424132  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.424589  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Found IP for machine: 192.168.61.48
	I0819 19:12:07.424615  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Reserving static IP address...
	I0819 19:12:07.424634  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has current primary IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.425178  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Reserved static IP address: 192.168.61.48
	I0819 19:12:07.425205  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Waiting for SSH to be available...
	I0819 19:12:07.425237  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-982795", mac: "52:54:00:d4:19:cd", ip: "192.168.61.48"} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:07.425283  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | skip adding static IP to network mk-default-k8s-diff-port-982795 - found existing host DHCP lease matching {name: "default-k8s-diff-port-982795", mac: "52:54:00:d4:19:cd", ip: "192.168.61.48"}
	I0819 19:12:07.425304  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | Getting to WaitForSSH function...
	I0819 19:12:07.427600  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.427969  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:07.428001  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.428179  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | Using SSH client type: external
	I0819 19:12:07.428245  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | Using SSH private key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa (-rw-------)
	I0819 19:12:07.428297  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.48 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 19:12:07.428321  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | About to run SSH command:
	I0819 19:12:07.428339  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | exit 0
	I0819 19:12:07.547727  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | SSH cmd err, output: <nil>: 
	I0819 19:12:07.548095  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetConfigRaw
	I0819 19:12:07.548741  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetIP
	I0819 19:12:07.551308  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.551700  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:07.551733  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.551967  438245 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795/config.json ...
	I0819 19:12:07.552164  438245 machine.go:93] provisionDockerMachine start ...
	I0819 19:12:07.552186  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:12:07.552427  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:07.554782  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.555062  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:07.555080  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.555219  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:07.555427  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:07.555586  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:07.555767  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:07.555912  438245 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:07.556152  438245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0819 19:12:07.556168  438245 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 19:12:07.655996  438245 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 19:12:07.656027  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetMachineName
	I0819 19:12:07.656301  438245 buildroot.go:166] provisioning hostname "default-k8s-diff-port-982795"
	I0819 19:12:07.656329  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetMachineName
	I0819 19:12:07.656530  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:07.658956  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.659311  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:07.659344  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.659439  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:07.659617  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:07.659813  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:07.659937  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:07.660112  438245 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:07.660291  438245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0819 19:12:07.660302  438245 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-982795 && echo "default-k8s-diff-port-982795" | sudo tee /etc/hostname
	I0819 19:12:07.773590  438245 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-982795
	
	I0819 19:12:07.773615  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:07.776994  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.777360  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:07.777399  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.777580  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:07.777860  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:07.778060  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:07.778273  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:07.778457  438245 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:07.778665  438245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0819 19:12:07.778687  438245 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-982795' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-982795/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-982795' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 19:12:07.884662  438245 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:12:07.884718  438245 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19468-372744/.minikube CaCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19468-372744/.minikube}
	I0819 19:12:07.884751  438245 buildroot.go:174] setting up certificates
	I0819 19:12:07.884768  438245 provision.go:84] configureAuth start
	I0819 19:12:07.884782  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetMachineName
	I0819 19:12:07.885101  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetIP
	I0819 19:12:07.887844  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.888262  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:07.888293  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.888439  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:07.890581  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.890977  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:07.891005  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:07.891136  438245 provision.go:143] copyHostCerts
	I0819 19:12:07.891219  438245 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem, removing ...
	I0819 19:12:07.891240  438245 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 19:12:07.891306  438245 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem (1082 bytes)
	I0819 19:12:07.891398  438245 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem, removing ...
	I0819 19:12:07.891406  438245 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 19:12:07.891430  438245 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem (1123 bytes)
	I0819 19:12:07.891487  438245 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem, removing ...
	I0819 19:12:07.891494  438245 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 19:12:07.891517  438245 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem (1675 bytes)
	I0819 19:12:07.891570  438245 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-982795 san=[127.0.0.1 192.168.61.48 default-k8s-diff-port-982795 localhost minikube]
	I0819 19:12:08.083963  438245 provision.go:177] copyRemoteCerts
	I0819 19:12:08.084024  438245 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 19:12:08.084086  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:08.086637  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.086961  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:08.087005  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.087144  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:08.087357  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:08.087507  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:08.087694  438245 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa Username:docker}
	I0819 19:12:08.166312  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 19:12:08.194124  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0819 19:12:08.221817  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 19:12:08.249674  438245 provision.go:87] duration metric: took 364.885827ms to configureAuth
	I0819 19:12:08.249709  438245 buildroot.go:189] setting minikube options for container-runtime
	I0819 19:12:08.249891  438245 config.go:182] Loaded profile config "default-k8s-diff-port-982795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:12:08.249983  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:08.253045  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.253438  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:08.253469  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.253647  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:08.253856  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:08.254071  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:08.254266  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:08.254481  438245 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:08.254700  438245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0819 19:12:08.254722  438245 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 19:12:08.508775  438245 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 19:12:08.508808  438245 machine.go:96] duration metric: took 956.629475ms to provisionDockerMachine
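	The step above writes the --insecure-registry option into /etc/sysconfig/crio.minikube and restarts CRI-O. A minimal sketch for checking the result on the guest, assuming the crio systemd unit in the minikube ISO picks up that sysconfig file (not confirmed by this log):
	cat /etc/sysconfig/crio.minikube   # should show CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	systemctl cat crio                 # full unit definition, including any EnvironmentFile= lines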
	I0819 19:12:08.508824  438245 start.go:293] postStartSetup for "default-k8s-diff-port-982795" (driver="kvm2")
	I0819 19:12:08.508838  438245 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 19:12:08.508868  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:12:08.509214  438245 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 19:12:08.509259  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:08.512004  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.512341  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:08.512378  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.512517  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:08.512688  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:08.512867  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:08.513059  438245 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa Username:docker}
	I0819 19:12:08.594287  438245 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 19:12:08.598742  438245 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 19:12:08.598774  438245 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/addons for local assets ...
	I0819 19:12:08.598849  438245 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/files for local assets ...
	I0819 19:12:08.598943  438245 filesync.go:149] local asset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> 3800092.pem in /etc/ssl/certs
	I0819 19:12:08.599029  438245 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 19:12:08.608416  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:12:08.633880  438245 start.go:296] duration metric: took 125.036785ms for postStartSetup
	I0819 19:12:08.633930  438245 fix.go:56] duration metric: took 19.449641939s for fixHost
	I0819 19:12:08.633955  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:08.636729  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.637006  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:08.637030  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.637248  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:08.637483  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:08.637672  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:08.637791  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:08.637954  438245 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:08.638170  438245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0819 19:12:08.638186  438245 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 19:12:08.736519  438245 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724094728.710064462
	
	I0819 19:12:08.736540  438245 fix.go:216] guest clock: 1724094728.710064462
	I0819 19:12:08.736548  438245 fix.go:229] Guest: 2024-08-19 19:12:08.710064462 +0000 UTC Remote: 2024-08-19 19:12:08.633934039 +0000 UTC m=+281.422189217 (delta=76.130423ms)
	I0819 19:12:08.736568  438245 fix.go:200] guest clock delta is within tolerance: 76.130423ms
	I0819 19:12:08.736580  438245 start.go:83] releasing machines lock for "default-k8s-diff-port-982795", held for 19.552337255s
	I0819 19:12:08.736604  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:12:08.736918  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetIP
	I0819 19:12:08.739570  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.740030  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:08.740057  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.740222  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:12:08.740762  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:12:08.740960  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:12:08.741037  438245 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 19:12:08.741100  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:08.741185  438245 ssh_runner.go:195] Run: cat /version.json
	I0819 19:12:08.741206  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:12:08.743899  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.744037  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.744282  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:08.744304  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.744439  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:08.744576  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:08.744599  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:08.744607  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:08.744689  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:12:08.744786  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:08.744858  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:12:08.744923  438245 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa Username:docker}
	I0819 19:12:08.744997  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:12:08.745143  438245 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa Username:docker}
	I0819 19:12:08.820672  438245 ssh_runner.go:195] Run: systemctl --version
	I0819 19:12:08.847046  438245 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 19:12:08.989725  438245 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 19:12:08.996607  438245 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 19:12:08.996680  438245 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 19:12:09.013017  438245 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 19:12:09.013067  438245 start.go:495] detecting cgroup driver to use...
	I0819 19:12:09.013144  438245 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 19:12:09.030338  438245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 19:12:09.044580  438245 docker.go:217] disabling cri-docker service (if available) ...
	I0819 19:12:09.044635  438245 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 19:12:09.058825  438245 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 19:12:09.073358  438245 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 19:12:09.194611  438245 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 19:12:09.333368  438245 docker.go:233] disabling docker service ...
	I0819 19:12:09.333446  438245 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 19:12:09.348775  438245 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 19:12:09.362911  438245 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 19:12:09.503015  438245 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 19:12:09.621246  438245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 19:12:09.638480  438245 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 19:12:09.659346  438245 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 19:12:09.659406  438245 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:09.672088  438245 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 19:12:09.672166  438245 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:09.683704  438245 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:09.694847  438245 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:09.706339  438245 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 19:12:09.718658  438245 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:09.730645  438245 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:09.750843  438245 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
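	For reference, a minimal sketch of how to confirm that the sed edits above landed in the CRI-O drop-in config; the expected values are inferred from the commands in the log, not read back from the file:
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# expected (per the edits above):
	#   pause_image = "registry.k8s.io/pause:3.10"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",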
	I0819 19:12:09.762551  438245 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 19:12:09.772960  438245 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 19:12:09.773037  438245 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 19:12:09.788362  438245 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
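	A minimal sketch of the equivalent manual check: once br_netfilter is loaded and IP forwarding is enabled (as the two commands above do), both probes should print 1 on the guest:
	sudo sysctl -n net.bridge.bridge-nf-call-iptables
	cat /proc/sys/net/ipv4/ip_forward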
	I0819 19:12:09.798695  438245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:12:09.923389  438245 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 19:12:10.063317  438245 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 19:12:10.063413  438245 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 19:12:10.068449  438245 start.go:563] Will wait 60s for crictl version
	I0819 19:12:10.068540  438245 ssh_runner.go:195] Run: which crictl
	I0819 19:12:10.072807  438245 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 19:12:10.114058  438245 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 19:12:10.114151  438245 ssh_runner.go:195] Run: crio --version
	I0819 19:12:10.147919  438245 ssh_runner.go:195] Run: crio --version
	I0819 19:12:10.180009  438245 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 19:12:10.181218  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetIP
	I0819 19:12:10.184626  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:10.185015  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:12:10.185049  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:12:10.185243  438245 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0819 19:12:10.189653  438245 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:12:10.203439  438245 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-982795 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-982795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.48 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 19:12:10.203608  438245 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 19:12:10.203668  438245 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:12:10.241427  438245 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 19:12:10.241511  438245 ssh_runner.go:195] Run: which lz4
	I0819 19:12:10.245734  438245 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 19:12:10.250082  438245 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 19:12:10.250112  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 19:12:11.694285  438245 crio.go:462] duration metric: took 1.448590086s to copy over tarball
	I0819 19:12:11.694371  438245 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 19:12:10.028225  438295 main.go:141] libmachine: (embed-certs-024748) Waiting to get IP...
	I0819 19:12:10.029208  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:10.029696  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:10.029752  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:10.029666  439540 retry.go:31] will retry after 276.66184ms: waiting for machine to come up
	I0819 19:12:10.308339  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:10.308762  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:10.308804  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:10.308710  439540 retry.go:31] will retry after 279.376198ms: waiting for machine to come up
	I0819 19:12:10.590326  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:10.591084  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:10.591117  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:10.590861  439540 retry.go:31] will retry after 364.735563ms: waiting for machine to come up
	I0819 19:12:10.957592  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:10.958075  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:10.958100  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:10.958033  439540 retry.go:31] will retry after 384.275284ms: waiting for machine to come up
	I0819 19:12:11.343631  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:11.344169  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:11.344192  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:11.344125  439540 retry.go:31] will retry after 572.182522ms: waiting for machine to come up
	I0819 19:12:11.917660  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:11.918150  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:11.918179  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:11.918093  439540 retry.go:31] will retry after 767.807058ms: waiting for machine to come up
	I0819 19:12:12.687256  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:12.687782  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:12.687815  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:12.687728  439540 retry.go:31] will retry after 715.897037ms: waiting for machine to come up
	I0819 19:12:13.406041  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:13.406653  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:13.406690  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:13.406577  439540 retry.go:31] will retry after 1.301579737s: waiting for machine to come up
	I0819 19:12:13.847779  438245 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.153373496s)
	I0819 19:12:13.847810  438245 crio.go:469] duration metric: took 2.153488101s to extract the tarball
	I0819 19:12:13.847817  438245 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 19:12:13.885520  438245 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:12:13.929775  438245 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 19:12:13.929809  438245 cache_images.go:84] Images are preloaded, skipping loading
	I0819 19:12:13.929838  438245 kubeadm.go:934] updating node { 192.168.61.48 8444 v1.31.0 crio true true} ...
	I0819 19:12:13.930019  438245 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-982795 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.48
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-982795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
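	The kubelet flags rendered above are installed on the node as a systemd drop-in (the 10-kubeadm.conf file copied a few lines below). A minimal sketch for inspecting the effective unit on the guest:
	systemctl cat kubelet                                         # base unit plus drop-ins
	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf    # the ExecStart override shown above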
	I0819 19:12:13.930113  438245 ssh_runner.go:195] Run: crio config
	I0819 19:12:13.977098  438245 cni.go:84] Creating CNI manager for ""
	I0819 19:12:13.977123  438245 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:12:13.977136  438245 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 19:12:13.977176  438245 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.48 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-982795 NodeName:default-k8s-diff-port-982795 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 19:12:13.977382  438245 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.48
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-982795"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 19:12:13.977461  438245 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 19:12:13.987276  438245 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 19:12:13.987381  438245 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 19:12:13.996666  438245 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0819 19:12:14.013822  438245 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 19:12:14.030936  438245 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
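	The rendered kubeadm config is staged as kubeadm.yaml.new and only promoted after the stale-config check. A minimal sketch of that promotion, mirroring the diff/cp commands that appear later in this log:
	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new || true   # diff exits non-zero when the files differ or the old file is missing
	sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml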
	I0819 19:12:14.048575  438245 ssh_runner.go:195] Run: grep 192.168.61.48	control-plane.minikube.internal$ /etc/hosts
	I0819 19:12:14.052809  438245 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.48	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:12:14.065177  438245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:12:14.185159  438245 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:12:14.202906  438245 certs.go:68] Setting up /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795 for IP: 192.168.61.48
	I0819 19:12:14.202934  438245 certs.go:194] generating shared ca certs ...
	I0819 19:12:14.202966  438245 certs.go:226] acquiring lock for ca certs: {Name:mk639e03f593e0bccac045f6e9f5ba3b96cc81e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:14.203184  438245 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key
	I0819 19:12:14.203266  438245 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key
	I0819 19:12:14.203282  438245 certs.go:256] generating profile certs ...
	I0819 19:12:14.203399  438245 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795/client.key
	I0819 19:12:14.203487  438245 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795/apiserver.key.a3c7a519
	I0819 19:12:14.203552  438245 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795/proxy-client.key
	I0819 19:12:14.203757  438245 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem (1338 bytes)
	W0819 19:12:14.203820  438245 certs.go:480] ignoring /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009_empty.pem, impossibly tiny 0 bytes
	I0819 19:12:14.203834  438245 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 19:12:14.203866  438245 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem (1082 bytes)
	I0819 19:12:14.203899  438245 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem (1123 bytes)
	I0819 19:12:14.203929  438245 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem (1675 bytes)
	I0819 19:12:14.203994  438245 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:12:14.205025  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 19:12:14.258243  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 19:12:14.295380  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 19:12:14.330511  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 19:12:14.358547  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0819 19:12:14.386938  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 19:12:14.415021  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 19:12:14.439531  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/default-k8s-diff-port-982795/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 19:12:14.463969  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 19:12:14.487638  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem --> /usr/share/ca-certificates/380009.pem (1338 bytes)
	I0819 19:12:14.511571  438245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /usr/share/ca-certificates/3800092.pem (1708 bytes)
	I0819 19:12:14.535223  438245 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 19:12:14.552922  438245 ssh_runner.go:195] Run: openssl version
	I0819 19:12:14.559078  438245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 19:12:14.570605  438245 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:14.575411  438245 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:14.575484  438245 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:14.581714  438245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 19:12:14.592896  438245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/380009.pem && ln -fs /usr/share/ca-certificates/380009.pem /etc/ssl/certs/380009.pem"
	I0819 19:12:14.604306  438245 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/380009.pem
	I0819 19:12:14.609139  438245 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:56 /usr/share/ca-certificates/380009.pem
	I0819 19:12:14.609212  438245 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/380009.pem
	I0819 19:12:14.615160  438245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/380009.pem /etc/ssl/certs/51391683.0"
	I0819 19:12:14.626010  438245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3800092.pem && ln -fs /usr/share/ca-certificates/3800092.pem /etc/ssl/certs/3800092.pem"
	I0819 19:12:14.636821  438245 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3800092.pem
	I0819 19:12:14.641308  438245 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:56 /usr/share/ca-certificates/3800092.pem
	I0819 19:12:14.641358  438245 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3800092.pem
	I0819 19:12:14.646898  438245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3800092.pem /etc/ssl/certs/3ec20f2e.0"
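	The symlink names above (b5213941.0, 51391683.0, 3ec20f2e.0) are the OpenSSL subject hashes of the corresponding PEM files. A minimal, illustrative sketch of how such a link is derived for the minikube CA, combining the two commands the log runs separately:
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # -> /etc/ssl/certs/b5213941.0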
	I0819 19:12:14.657905  438245 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 19:12:14.662780  438245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 19:12:14.668934  438245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 19:12:14.674693  438245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 19:12:14.680683  438245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 19:12:14.686689  438245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 19:12:14.692678  438245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
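	A minimal sketch that repeats the expiry probes above in one loop; -checkend 86400 exits non-zero if the certificate expires within the next 24 hours:
	for c in apiserver-kubelet-client apiserver-etcd-client front-proxy-client etcd/server etcd/healthcheck-client etcd/peer; do
	  sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/${c}.crt" && echo "ok: ${c}"
	done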
	I0819 19:12:14.698784  438245 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-982795 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-982795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.48 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:12:14.698930  438245 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 19:12:14.699006  438245 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:12:14.740881  438245 cri.go:89] found id: ""
	I0819 19:12:14.740964  438245 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 19:12:14.751589  438245 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 19:12:14.751613  438245 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 19:12:14.751665  438245 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 19:12:14.761837  438245 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:12:14.762870  438245 kubeconfig.go:125] found "default-k8s-diff-port-982795" server: "https://192.168.61.48:8444"
	I0819 19:12:14.765176  438245 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 19:12:14.775114  438245 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.48
	I0819 19:12:14.775147  438245 kubeadm.go:1160] stopping kube-system containers ...
	I0819 19:12:14.775161  438245 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 19:12:14.775228  438245 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:12:14.811373  438245 cri.go:89] found id: ""
	I0819 19:12:14.811442  438245 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 19:12:14.829656  438245 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:12:14.840215  438245 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:12:14.840236  438245 kubeadm.go:157] found existing configuration files:
	
	I0819 19:12:14.840288  438245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0819 19:12:14.850017  438245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:12:14.850075  438245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:12:14.860060  438245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0819 19:12:14.869589  438245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:12:14.869645  438245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:12:14.879249  438245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0819 19:12:14.888475  438245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:12:14.888532  438245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:12:14.898151  438245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0819 19:12:14.907628  438245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:12:14.907737  438245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 19:12:14.917581  438245 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 19:12:14.927119  438245 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:15.037162  438245 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:16.355430  438245 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.318225023s)
	I0819 19:12:16.355461  438245 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:16.566565  438245 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:16.649402  438245 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:16.775956  438245 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:12:16.776067  438245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:14.709988  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:14.710397  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:14.710429  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:14.710338  439540 retry.go:31] will retry after 1.420823505s: waiting for machine to come up
	I0819 19:12:16.133160  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:16.133558  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:16.133587  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:16.133531  439540 retry.go:31] will retry after 1.71697779s: waiting for machine to come up
	I0819 19:12:17.852342  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:17.852884  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:17.852922  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:17.852836  439540 retry.go:31] will retry after 2.816782354s: waiting for machine to come up
	I0819 19:12:17.277067  438245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:17.777027  438245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:17.797513  438245 api_server.go:72] duration metric: took 1.021572879s to wait for apiserver process to appear ...
	I0819 19:12:17.797554  438245 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:12:17.797596  438245 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8444/healthz ...
	I0819 19:12:17.798191  438245 api_server.go:269] stopped: https://192.168.61.48:8444/healthz: Get "https://192.168.61.48:8444/healthz": dial tcp 192.168.61.48:8444: connect: connection refused
	I0819 19:12:18.297907  438245 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8444/healthz ...
	I0819 19:12:20.177305  438245 api_server.go:279] https://192.168.61.48:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 19:12:20.177345  438245 api_server.go:103] status: https://192.168.61.48:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 19:12:20.177367  438245 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8444/healthz ...
	I0819 19:12:20.244091  438245 api_server.go:279] https://192.168.61.48:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 19:12:20.244140  438245 api_server.go:103] status: https://192.168.61.48:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 19:12:20.298403  438245 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8444/healthz ...
	I0819 19:12:20.304289  438245 api_server.go:279] https://192.168.61.48:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:12:20.304325  438245 api_server.go:103] status: https://192.168.61.48:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:12:20.797876  438245 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8444/healthz ...
	I0819 19:12:20.803894  438245 api_server.go:279] https://192.168.61.48:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:12:20.803935  438245 api_server.go:103] status: https://192.168.61.48:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:12:21.298284  438245 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8444/healthz ...
	I0819 19:12:21.320292  438245 api_server.go:279] https://192.168.61.48:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:12:21.320320  438245 api_server.go:103] status: https://192.168.61.48:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:12:21.797829  438245 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8444/healthz ...
	I0819 19:12:21.802183  438245 api_server.go:279] https://192.168.61.48:8444/healthz returned 200:
	ok
	I0819 19:12:21.809866  438245 api_server.go:141] control plane version: v1.31.0
	I0819 19:12:21.809902  438245 api_server.go:131] duration metric: took 4.012339897s to wait for apiserver health ...
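	The entries above show minikube repeatedly probing https://192.168.61.48:8444/healthz and retrying on 403/500 until the apiserver answers 200. A minimal, self-contained Go sketch of such a probe loop follows; it is not minikube's actual api_server.go, and the endpoint, retry interval, and the decision to skip TLS verification are illustrative assumptions only.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			// The apiserver serves a self-signed certificate and this probe sends no
			// client credentials, so verification is skipped; unauthenticated probes
			// may legitimately see 403 or 500 while the control plane is still starting.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Printf("healthz not reachable yet: %v\n", err)
			} else {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // corresponds to "healthz returned 200: ok" in the log
				}
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond) // roughly the interval between checks seen above
		}
		return fmt.Errorf("apiserver did not report healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.61.48:8444/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}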
	I0819 19:12:21.809914  438245 cni.go:84] Creating CNI manager for ""
	I0819 19:12:21.809944  438245 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:12:21.811668  438245 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 19:12:21.813183  438245 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 19:12:21.826170  438245 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 19:12:21.850473  438245 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:12:21.865379  438245 system_pods.go:59] 8 kube-system pods found
	I0819 19:12:21.865422  438245 system_pods.go:61] "coredns-6f6b679f8f-dwbnt" [9b8d7ee3-15ca-475b-b659-d5c3b10890fe] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 19:12:21.865442  438245 system_pods.go:61] "etcd-default-k8s-diff-port-982795" [6686e6f6-485d-4c57-89a1-af4f27b6216e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 19:12:21.865455  438245 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-982795" [fcfb5a0d-6d6c-4c30-a17f-43106f3dd5ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 19:12:21.865475  438245 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-982795" [346bf3b5-57e7-4f30-a6ed-959dc9e8941d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 19:12:21.865485  438245 system_pods.go:61] "kube-proxy-wrczx" [acabdc8e-5397-4531-afcb-57a8f4c48618] Running
	I0819 19:12:21.865493  438245 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-982795" [82de0c57-e712-4c0c-b751-a17cb0dd75b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 19:12:21.865503  438245 system_pods.go:61] "metrics-server-6867b74b74-5hlnx" [394c87af-a198-4fea-8a30-32a8c3e80884] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:12:21.865522  438245 system_pods.go:61] "storage-provisioner" [35f70989-846d-4ec5-b879-a22625ee94ce] Running
	I0819 19:12:21.865534  438245 system_pods.go:74] duration metric: took 15.035147ms to wait for pod list to return data ...
	I0819 19:12:21.865545  438245 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:12:21.870314  438245 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:12:21.870350  438245 node_conditions.go:123] node cpu capacity is 2
	I0819 19:12:21.870366  438245 node_conditions.go:105] duration metric: took 4.813819ms to run NodePressure ...
	I0819 19:12:21.870390  438245 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:22.130916  438245 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 19:12:22.134889  438245 kubeadm.go:739] kubelet initialised
	I0819 19:12:22.134912  438245 kubeadm.go:740] duration metric: took 3.970465ms waiting for restarted kubelet to initialise ...
	I0819 19:12:22.134920  438245 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:12:22.139345  438245 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-dwbnt" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:20.672189  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:20.672655  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:20.672682  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:20.672613  439540 retry.go:31] will retry after 2.76896974s: waiting for machine to come up
	I0819 19:12:23.442804  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:23.443223  438295 main.go:141] libmachine: (embed-certs-024748) DBG | unable to find current IP address of domain embed-certs-024748 in network mk-embed-certs-024748
	I0819 19:12:23.443268  438295 main.go:141] libmachine: (embed-certs-024748) DBG | I0819 19:12:23.443170  439540 retry.go:31] will retry after 4.199459292s: waiting for machine to come up
	I0819 19:12:24.145329  438245 pod_ready.go:103] pod "coredns-6f6b679f8f-dwbnt" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:26.645695  438245 pod_ready.go:103] pod "coredns-6f6b679f8f-dwbnt" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:27.644842  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.645376  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has current primary IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.645403  438295 main.go:141] libmachine: (embed-certs-024748) Found IP for machine: 192.168.72.96
	I0819 19:12:27.645417  438295 main.go:141] libmachine: (embed-certs-024748) Reserving static IP address...
	I0819 19:12:27.645874  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "embed-certs-024748", mac: "52:54:00:f0:8b:43", ip: "192.168.72.96"} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:27.645902  438295 main.go:141] libmachine: (embed-certs-024748) Reserved static IP address: 192.168.72.96
	I0819 19:12:27.645919  438295 main.go:141] libmachine: (embed-certs-024748) DBG | skip adding static IP to network mk-embed-certs-024748 - found existing host DHCP lease matching {name: "embed-certs-024748", mac: "52:54:00:f0:8b:43", ip: "192.168.72.96"}
	I0819 19:12:27.645952  438295 main.go:141] libmachine: (embed-certs-024748) Waiting for SSH to be available...
	I0819 19:12:27.645974  438295 main.go:141] libmachine: (embed-certs-024748) DBG | Getting to WaitForSSH function...
	I0819 19:12:27.648195  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.648471  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:27.648496  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.648717  438295 main.go:141] libmachine: (embed-certs-024748) DBG | Using SSH client type: external
	I0819 19:12:27.648744  438295 main.go:141] libmachine: (embed-certs-024748) DBG | Using SSH private key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa (-rw-------)
	I0819 19:12:27.648773  438295 main.go:141] libmachine: (embed-certs-024748) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.96 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 19:12:27.648792  438295 main.go:141] libmachine: (embed-certs-024748) DBG | About to run SSH command:
	I0819 19:12:27.648808  438295 main.go:141] libmachine: (embed-certs-024748) DBG | exit 0
	I0819 19:12:27.775964  438295 main.go:141] libmachine: (embed-certs-024748) DBG | SSH cmd err, output: <nil>: 
	I0819 19:12:27.776344  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetConfigRaw
	I0819 19:12:27.777100  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetIP
	I0819 19:12:27.780096  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.780535  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:27.780570  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.780936  438295 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748/config.json ...
	I0819 19:12:27.781721  438295 machine.go:93] provisionDockerMachine start ...
	I0819 19:12:27.781748  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:27.781974  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:27.784482  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.784838  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:27.784868  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.785066  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:27.785254  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:27.785452  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:27.785617  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:27.785789  438295 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:27.786038  438295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0819 19:12:27.786059  438295 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 19:12:27.904337  438295 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 19:12:27.904375  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetMachineName
	I0819 19:12:27.904675  438295 buildroot.go:166] provisioning hostname "embed-certs-024748"
	I0819 19:12:27.904711  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetMachineName
	I0819 19:12:27.904932  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:27.907960  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.908325  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:27.908354  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:27.908446  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:27.908659  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:27.908825  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:27.909012  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:27.909234  438295 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:27.909441  438295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0819 19:12:27.909458  438295 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-024748 && echo "embed-certs-024748" | sudo tee /etc/hostname
	I0819 19:12:28.036564  438295 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-024748
	
	I0819 19:12:28.036597  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:28.039385  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.039798  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:28.039827  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.040071  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:28.040327  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:28.040493  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:28.040652  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:28.040882  438295 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:28.041113  438295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0819 19:12:28.041138  438295 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-024748' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-024748/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-024748' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 19:12:28.162311  438295 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:12:28.162348  438295 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19468-372744/.minikube CaCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19468-372744/.minikube}
	I0819 19:12:28.162368  438295 buildroot.go:174] setting up certificates
	I0819 19:12:28.162376  438295 provision.go:84] configureAuth start
	I0819 19:12:28.162385  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetMachineName
	I0819 19:12:28.162703  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetIP
	I0819 19:12:28.165171  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.165563  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:28.165593  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.165727  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:28.167917  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.168199  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:28.168221  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.168411  438295 provision.go:143] copyHostCerts
	I0819 19:12:28.168469  438295 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem, removing ...
	I0819 19:12:28.168491  438295 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 19:12:28.168560  438295 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem (1082 bytes)
	I0819 19:12:28.168693  438295 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem, removing ...
	I0819 19:12:28.168704  438295 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 19:12:28.168736  438295 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem (1123 bytes)
	I0819 19:12:28.168814  438295 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem, removing ...
	I0819 19:12:28.168824  438295 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 19:12:28.168853  438295 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem (1675 bytes)
	I0819 19:12:28.168942  438295 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem org=jenkins.embed-certs-024748 san=[127.0.0.1 192.168.72.96 embed-certs-024748 localhost minikube]
	I0819 19:12:28.447064  438295 provision.go:177] copyRemoteCerts
	I0819 19:12:28.447129  438295 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 19:12:28.447158  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:28.449851  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.450138  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:28.450163  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.450344  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:28.450541  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:28.450713  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:28.450832  438295 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa Username:docker}
	I0819 19:12:28.537815  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 19:12:28.562408  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0819 19:12:28.586728  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 19:12:28.611119  438295 provision.go:87] duration metric: took 448.726133ms to configureAuth
	I0819 19:12:28.611158  438295 buildroot.go:189] setting minikube options for container-runtime
	I0819 19:12:28.611351  438295 config.go:182] Loaded profile config "embed-certs-024748": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:12:28.611428  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:28.614168  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.614543  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:28.614571  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.614736  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:28.614941  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:28.615083  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:28.615192  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:28.615302  438295 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:28.615454  438295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0819 19:12:28.615469  438295 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 19:12:28.890054  438295 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 19:12:28.890086  438295 machine.go:96] duration metric: took 1.10834874s to provisionDockerMachine
	I0819 19:12:28.890100  438295 start.go:293] postStartSetup for "embed-certs-024748" (driver="kvm2")
	I0819 19:12:28.890120  438295 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 19:12:28.890146  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:28.890469  438295 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 19:12:28.890499  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:28.893251  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.893579  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:28.893605  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:28.893733  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:28.893895  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:28.894102  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:28.894220  438295 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa Username:docker}
	I0819 19:12:28.979381  438295 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 19:12:28.983921  438295 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 19:12:28.983952  438295 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/addons for local assets ...
	I0819 19:12:28.984048  438295 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/files for local assets ...
	I0819 19:12:28.984156  438295 filesync.go:149] local asset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> 3800092.pem in /etc/ssl/certs
	I0819 19:12:28.984250  438295 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 19:12:28.994964  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:12:29.018801  438295 start.go:296] duration metric: took 128.685446ms for postStartSetup
	I0819 19:12:29.018843  438295 fix.go:56] duration metric: took 20.282076509s for fixHost
	I0819 19:12:29.018870  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:29.021554  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:29.021848  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:29.021875  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:29.022066  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:29.022261  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:29.022428  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:29.022526  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:29.022678  438295 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:29.022900  438295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0819 19:12:29.022915  438295 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 19:12:29.132976  438716 start.go:364] duration metric: took 3m58.489348567s to acquireMachinesLock for "old-k8s-version-104669"
	I0819 19:12:29.133047  438716 start.go:96] Skipping create...Using existing machine configuration
	I0819 19:12:29.133055  438716 fix.go:54] fixHost starting: 
	I0819 19:12:29.133485  438716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:29.133524  438716 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:29.151330  438716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39213
	I0819 19:12:29.151778  438716 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:29.152271  438716 main.go:141] libmachine: Using API Version  1
	I0819 19:12:29.152301  438716 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:29.152682  438716 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:29.152883  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:12:29.153065  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetState
	I0819 19:12:29.154399  438716 fix.go:112] recreateIfNeeded on old-k8s-version-104669: state=Stopped err=<nil>
	I0819 19:12:29.154444  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	W0819 19:12:29.154684  438716 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 19:12:29.156349  438716 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-104669" ...
	I0819 19:12:29.157631  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .Start
	I0819 19:12:29.157825  438716 main.go:141] libmachine: (old-k8s-version-104669) Ensuring networks are active...
	I0819 19:12:29.158635  438716 main.go:141] libmachine: (old-k8s-version-104669) Ensuring network default is active
	I0819 19:12:29.159041  438716 main.go:141] libmachine: (old-k8s-version-104669) Ensuring network mk-old-k8s-version-104669 is active
	I0819 19:12:29.159509  438716 main.go:141] libmachine: (old-k8s-version-104669) Getting domain xml...
	I0819 19:12:29.160383  438716 main.go:141] libmachine: (old-k8s-version-104669) Creating domain...
	I0819 19:12:30.452488  438716 main.go:141] libmachine: (old-k8s-version-104669) Waiting to get IP...
	I0819 19:12:30.453743  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:30.454237  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:30.454323  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:30.454193  439728 retry.go:31] will retry after 197.440033ms: waiting for machine to come up
	I0819 19:12:29.132812  438295 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724094749.105537362
	
	I0819 19:12:29.132839  438295 fix.go:216] guest clock: 1724094749.105537362
	I0819 19:12:29.132850  438295 fix.go:229] Guest: 2024-08-19 19:12:29.105537362 +0000 UTC Remote: 2024-08-19 19:12:29.018848957 +0000 UTC m=+300.015027560 (delta=86.688405ms)
	I0819 19:12:29.132877  438295 fix.go:200] guest clock delta is within tolerance: 86.688405ms
	I0819 19:12:29.132884  438295 start.go:83] releasing machines lock for "embed-certs-024748", held for 20.396159242s
	I0819 19:12:29.132912  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:29.133179  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetIP
	I0819 19:12:29.136110  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:29.136532  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:29.136565  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:29.136750  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:29.137307  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:29.137532  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:29.137616  438295 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 19:12:29.137690  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:29.137758  438295 ssh_runner.go:195] Run: cat /version.json
	I0819 19:12:29.137781  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:29.140500  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:29.140820  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:29.140870  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:29.140903  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:29.141067  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:29.141266  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:29.141385  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:29.141430  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:29.141443  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:29.141586  438295 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa Username:docker}
	I0819 19:12:29.141639  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:29.141790  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:29.141957  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:29.142123  438295 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa Username:docker}
	I0819 19:12:29.242886  438295 ssh_runner.go:195] Run: systemctl --version
	I0819 19:12:29.249276  438295 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 19:12:29.393872  438295 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 19:12:29.401874  438295 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 19:12:29.401954  438295 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 19:12:29.421973  438295 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 19:12:29.422004  438295 start.go:495] detecting cgroup driver to use...
	I0819 19:12:29.422081  438295 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 19:12:29.442823  438295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 19:12:29.462663  438295 docker.go:217] disabling cri-docker service (if available) ...
	I0819 19:12:29.462720  438295 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 19:12:29.477896  438295 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 19:12:29.492591  438295 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 19:12:29.613759  438295 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 19:12:29.770719  438295 docker.go:233] disabling docker service ...
	I0819 19:12:29.770805  438295 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 19:12:29.785787  438295 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 19:12:29.802879  438295 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 19:12:29.947633  438295 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 19:12:30.082602  438295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 19:12:30.097628  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 19:12:30.118671  438295 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 19:12:30.118735  438295 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:30.131287  438295 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 19:12:30.131354  438295 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:30.143008  438295 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:30.156358  438295 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:30.172123  438295 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 19:12:30.188196  438295 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:30.201487  438295 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:30.219887  438295 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:30.235685  438295 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 19:12:30.246112  438295 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 19:12:30.246202  438295 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 19:12:30.259732  438295 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 19:12:30.269866  438295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:12:30.397522  438295 ssh_runner.go:195] Run: sudo systemctl restart crio
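The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup) before cri-o is restarted. A self-contained sketch of the same substitutions applied to an in-memory config string; this is illustrative only, since minikube actually performs them remotely with sed over SSH:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Same effect as: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	// Same effect as: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// Same effect as deleting the conmon_cgroup line and appending
	// conmon_cgroup = "pod" right after the cgroup_manager line.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}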
	I0819 19:12:30.545249  438295 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 19:12:30.545349  438295 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 19:12:30.550473  438295 start.go:563] Will wait 60s for crictl version
	I0819 19:12:30.550528  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:12:30.554782  438295 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 19:12:30.597634  438295 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 19:12:30.597736  438295 ssh_runner.go:195] Run: crio --version
	I0819 19:12:30.628137  438295 ssh_runner.go:195] Run: crio --version
	I0819 19:12:30.660912  438295 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 19:12:29.146475  438245 pod_ready.go:103] pod "coredns-6f6b679f8f-dwbnt" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:31.147618  438245 pod_ready.go:93] pod "coredns-6f6b679f8f-dwbnt" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:31.147651  438245 pod_ready.go:82] duration metric: took 9.00827926s for pod "coredns-6f6b679f8f-dwbnt" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.147665  438245 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.153305  438245 pod_ready.go:93] pod "etcd-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:31.153331  438245 pod_ready.go:82] duration metric: took 5.657625ms for pod "etcd-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.153347  438245 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.159009  438245 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:31.159037  438245 pod_ready.go:82] duration metric: took 5.680194ms for pod "kube-apiserver-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.159050  438245 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.165478  438245 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:31.165504  438245 pod_ready.go:82] duration metric: took 6.444529ms for pod "kube-controller-manager-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.165517  438245 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-wrczx" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.180293  438245 pod_ready.go:93] pod "kube-proxy-wrczx" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:31.180324  438245 pod_ready.go:82] duration metric: took 14.798883ms for pod "kube-proxy-wrczx" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:31.180337  438245 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:30.662168  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetIP
	I0819 19:12:30.665057  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:30.665455  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:30.665486  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:30.665660  438295 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0819 19:12:30.669911  438295 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:12:30.682755  438295 kubeadm.go:883] updating cluster {Name:embed-certs-024748 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:embed-certs-024748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.96 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 19:12:30.682883  438295 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 19:12:30.682936  438295 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:12:30.724160  438295 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 19:12:30.724233  438295 ssh_runner.go:195] Run: which lz4
	I0819 19:12:30.728710  438295 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 19:12:30.733279  438295 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 19:12:30.733317  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 19:12:32.178568  438295 crio.go:462] duration metric: took 1.449881121s to copy over tarball
	I0819 19:12:32.178642  438295 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 19:12:30.653917  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:30.654521  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:30.654566  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:30.654436  439728 retry.go:31] will retry after 317.038756ms: waiting for machine to come up
	I0819 19:12:30.973003  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:30.973530  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:30.973560  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:30.973487  439728 retry.go:31] will retry after 486.945032ms: waiting for machine to come up
	I0819 19:12:31.461937  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:31.462438  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:31.462470  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:31.462389  439728 retry.go:31] will retry after 441.288745ms: waiting for machine to come up
	I0819 19:12:31.904947  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:31.905564  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:31.905617  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:31.905472  439728 retry.go:31] will retry after 752.583403ms: waiting for machine to come up
	I0819 19:12:32.659642  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:32.660175  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:32.660207  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:32.660128  439728 retry.go:31] will retry after 932.705928ms: waiting for machine to come up
	I0819 19:12:33.594983  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:33.595529  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:33.595556  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:33.595466  439728 retry.go:31] will retry after 936.558157ms: waiting for machine to come up
	I0819 19:12:34.533158  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:34.533717  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:34.533743  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:34.533656  439728 retry.go:31] will retry after 1.435945188s: waiting for machine to come up
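The retry.go lines above show libmachine polling for the old-k8s-version VM's DHCP lease with steadily growing wait intervals until the domain reports an IP. A small sketch of that retry-with-growing-pause pattern; the poll function and interval growth below are illustrative, not minikube's exact backoff:

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryUntil keeps calling poll with growing pauses, the way retry.go:31
// re-checks "waiting for machine to come up" in the log above.
func retryUntil(poll func() error, start time.Duration, attempts int) error {
	wait := start
	var err error
	for i := 0; i < attempts; i++ {
		if err = poll(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		wait += wait / 2 // grow the interval; the real jitter differs
	}
	return err
}

func main() {
	calls := 0
	err := retryUntil(func() error {
		calls++
		if calls < 4 {
			return errors.New("unable to find current IP address of domain")
		}
		return nil
	}, 300*time.Millisecond, 10)
	fmt.Println("done, err =", err)
}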
	I0819 19:12:33.186835  438245 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:35.187500  438245 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:35.686905  438245 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:35.686932  438245 pod_ready.go:82] duration metric: took 4.50658625s for pod "kube-scheduler-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:35.686945  438245 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace to be "Ready" ...
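The pod_ready.go entries above poll each control-plane pod until its Ready condition is True, with a 4m0s budget per pod (metrics-server never gets there, which is what ultimately fails the test). A rough command-line equivalent of that wait, wrapped in Go; the kubectl context name matches the profile name by minikube convention, and the pod list is copied from the log:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Roughly what the pod_ready loop waits for, expressed with kubectl wait.
	pods := []string{
		"etcd-default-k8s-diff-port-982795",
		"kube-apiserver-default-k8s-diff-port-982795",
		"kube-scheduler-default-k8s-diff-port-982795",
		"metrics-server-6867b74b74-5hlnx",
	}
	for _, p := range pods {
		cmd := exec.Command("kubectl", "--context", "default-k8s-diff-port-982795",
			"-n", "kube-system", "wait", "--for=condition=Ready",
			"pod/"+p, "--timeout=4m")
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("%s: %v\n%s", p, err, out)
		}
	}
}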
	I0819 19:12:34.321347  438295 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.14267077s)
	I0819 19:12:34.321379  438295 crio.go:469] duration metric: took 2.142777016s to extract the tarball
	I0819 19:12:34.321390  438295 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 19:12:34.357670  438295 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:12:34.403313  438295 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 19:12:34.403344  438295 cache_images.go:84] Images are preloaded, skipping loading
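Above, minikube finds no preloaded images in cri-o's store, copies the ~389 MB preload tarball to the guest, unpacks it into /var with lz4, and re-reads `crictl images` to confirm everything is now present. A sketch of that unpack-and-verify step; the paths are taken from the log, but here the commands run locally instead of over SSH:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Same extraction the log runs remotely: tar with lz4 decompression into /var.
	untar := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := untar.CombinedOutput(); err != nil {
		log.Fatalf("extract: %v\n%s", err, out)
	}
	// Then the image list is re-read to confirm the preload took effect.
	images, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("crictl reported %d bytes of image metadata\n", len(images))
}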
	I0819 19:12:34.403358  438295 kubeadm.go:934] updating node { 192.168.72.96 8443 v1.31.0 crio true true} ...
	I0819 19:12:34.403495  438295 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-024748 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.96
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-024748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 19:12:34.403576  438295 ssh_runner.go:195] Run: crio config
	I0819 19:12:34.450415  438295 cni.go:84] Creating CNI manager for ""
	I0819 19:12:34.450443  438295 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:12:34.450461  438295 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 19:12:34.450490  438295 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.96 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-024748 NodeName:embed-certs-024748 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.96"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.96 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 19:12:34.450646  438295 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.96
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-024748"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.96
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.96"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 19:12:34.450723  438295 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 19:12:34.461183  438295 binaries.go:44] Found k8s binaries, skipping transfer
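The kubeadm.go:187 dump above is the fully rendered config (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is later written to /var/tmp/minikube/kubeadm.yaml.new. A toy sketch of rendering such a document from the node parameters seen in the log, using text/template; the template below is heavily trimmed and is not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: control-plane.minikube.internal:{{.Port}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodCIDR}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	// Values taken from the log above; the template itself is illustrative.
	params := map[string]string{
		"NodeIP":            "192.168.72.96",
		"Port":              "8443",
		"NodeName":          "embed-certs-024748",
		"KubernetesVersion": "v1.31.0",
		"PodCIDR":           "10.244.0.0/16",
		"ServiceCIDR":       "10.96.0.0/12",
	}
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	if err := t.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}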
	I0819 19:12:34.461313  438295 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 19:12:34.470516  438295 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0819 19:12:34.488844  438295 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 19:12:34.505450  438295 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0819 19:12:34.522456  438295 ssh_runner.go:195] Run: grep 192.168.72.96	control-plane.minikube.internal$ /etc/hosts
	I0819 19:12:34.526272  438295 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.96	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:12:34.539079  438295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:12:34.665665  438295 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:12:34.683237  438295 certs.go:68] Setting up /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748 for IP: 192.168.72.96
	I0819 19:12:34.683265  438295 certs.go:194] generating shared ca certs ...
	I0819 19:12:34.683287  438295 certs.go:226] acquiring lock for ca certs: {Name:mk639e03f593e0bccac045f6e9f5ba3b96cc81e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:34.683471  438295 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key
	I0819 19:12:34.683536  438295 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key
	I0819 19:12:34.683550  438295 certs.go:256] generating profile certs ...
	I0819 19:12:34.683687  438295 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748/client.key
	I0819 19:12:34.683776  438295 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748/apiserver.key.89193d03
	I0819 19:12:34.683828  438295 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748/proxy-client.key
	I0819 19:12:34.683991  438295 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem (1338 bytes)
	W0819 19:12:34.684035  438295 certs.go:480] ignoring /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009_empty.pem, impossibly tiny 0 bytes
	I0819 19:12:34.684047  438295 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 19:12:34.684074  438295 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem (1082 bytes)
	I0819 19:12:34.684112  438295 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem (1123 bytes)
	I0819 19:12:34.684159  438295 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem (1675 bytes)
	I0819 19:12:34.684224  438295 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:12:34.685127  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 19:12:34.718591  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 19:12:34.758439  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 19:12:34.790143  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 19:12:34.828113  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0819 19:12:34.860389  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 19:12:34.898361  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 19:12:34.924677  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/embed-certs-024748/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 19:12:34.951630  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /usr/share/ca-certificates/3800092.pem (1708 bytes)
	I0819 19:12:34.977435  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 19:12:35.002048  438295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem --> /usr/share/ca-certificates/380009.pem (1338 bytes)
	I0819 19:12:35.026934  438295 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 19:12:35.044476  438295 ssh_runner.go:195] Run: openssl version
	I0819 19:12:35.050174  438295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3800092.pem && ln -fs /usr/share/ca-certificates/3800092.pem /etc/ssl/certs/3800092.pem"
	I0819 19:12:35.061299  438295 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3800092.pem
	I0819 19:12:35.065978  438295 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:56 /usr/share/ca-certificates/3800092.pem
	I0819 19:12:35.066047  438295 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3800092.pem
	I0819 19:12:35.072572  438295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3800092.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 19:12:35.083760  438295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 19:12:35.094492  438295 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:35.099152  438295 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:35.099229  438295 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:35.105124  438295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 19:12:35.115950  438295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/380009.pem && ln -fs /usr/share/ca-certificates/380009.pem /etc/ssl/certs/380009.pem"
	I0819 19:12:35.126845  438295 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/380009.pem
	I0819 19:12:35.131568  438295 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:56 /usr/share/ca-certificates/380009.pem
	I0819 19:12:35.131650  438295 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/380009.pem
	I0819 19:12:35.137851  438295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/380009.pem /etc/ssl/certs/51391683.0"
	I0819 19:12:35.148818  438295 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 19:12:35.153800  438295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 19:12:35.159720  438295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 19:12:35.165740  438295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 19:12:35.171705  438295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 19:12:35.177574  438295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 19:12:35.183935  438295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
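The `openssl x509 ... -checkend 86400` calls above verify that each control-plane certificate remains valid for at least another 24 hours before the cluster is restarted. The same check in pure Go with crypto/x509; the path below is a placeholder for whichever certificate is being inspected:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// validFor reports whether the first certificate in a PEM file is still valid
// for at least d, mirroring `openssl x509 -checkend` from the log.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("valid for 24h:", ok)
}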
	I0819 19:12:35.192681  438295 kubeadm.go:392] StartCluster: {Name:embed-certs-024748 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:embed-certs-024748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.96 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:12:35.192845  438295 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 19:12:35.192908  438295 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:12:35.231688  438295 cri.go:89] found id: ""
	I0819 19:12:35.231791  438295 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 19:12:35.242835  438295 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 19:12:35.242859  438295 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 19:12:35.242944  438295 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 19:12:35.255695  438295 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:12:35.257036  438295 kubeconfig.go:125] found "embed-certs-024748" server: "https://192.168.72.96:8443"
	I0819 19:12:35.259422  438295 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 19:12:35.271730  438295 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.96
	I0819 19:12:35.271758  438295 kubeadm.go:1160] stopping kube-system containers ...
	I0819 19:12:35.271772  438295 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 19:12:35.271820  438295 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:12:35.321065  438295 cri.go:89] found id: ""
	I0819 19:12:35.321155  438295 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 19:12:35.337802  438295 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:12:35.347699  438295 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:12:35.347726  438295 kubeadm.go:157] found existing configuration files:
	
	I0819 19:12:35.347785  438295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 19:12:35.357108  438295 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:12:35.357178  438295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:12:35.366805  438295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 19:12:35.376864  438295 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:12:35.376938  438295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:12:35.387018  438295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 19:12:35.396966  438295 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:12:35.397045  438295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:12:35.406192  438295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 19:12:35.415325  438295 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:12:35.415401  438295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 19:12:35.424450  438295 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 19:12:35.433931  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:35.549294  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:36.306930  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:36.517086  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:36.587680  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
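Rather than a full `kubeadm init`, the restart path above re-runs the individual phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated config, with PATH pointed at the cached v1.31.0 binaries. A sketch of that loop; paths and the phase order are copied from the log, error handling is simplified, and the commands are executed locally here rather than via ssh_runner:

package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	path := "PATH=/var/lib/minikube/binaries/v1.31.0:" + os.Getenv("PATH")
	for _, phase := range phases {
		// Equivalent shell: sudo env PATH=... kubeadm init phase <phase> --config /var/tmp/minikube/kubeadm.yaml
		args := []string{"env", path, "kubeadm", "init", "phase"}
		args = append(args, strings.Fields(phase)...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			log.Fatalf("phase %q failed: %v\n%s", phase, err, out)
		}
	}
}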
	I0819 19:12:36.680728  438295 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:12:36.680825  438295 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:37.181054  438295 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:37.681059  438295 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:38.181588  438295 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:38.197155  438295 api_server.go:72] duration metric: took 1.516436456s to wait for apiserver process to appear ...
	I0819 19:12:38.197184  438295 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:12:38.197212  438295 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0819 19:12:35.971138  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:35.971576  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:35.971607  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:35.971514  439728 retry.go:31] will retry after 1.521077744s: waiting for machine to come up
	I0819 19:12:37.493931  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:37.494389  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:37.494415  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:37.494361  439728 retry.go:31] will retry after 1.632508579s: waiting for machine to come up
	I0819 19:12:39.128939  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:39.129429  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:39.129456  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:39.129392  439728 retry.go:31] will retry after 2.634061376s: waiting for machine to come up
	I0819 19:12:40.567608  438295 api_server.go:279] https://192.168.72.96:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 19:12:40.567654  438295 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 19:12:40.567669  438295 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0819 19:12:40.593405  438295 api_server.go:279] https://192.168.72.96:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 19:12:40.593456  438295 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 19:12:40.697607  438295 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0819 19:12:40.713767  438295 api_server.go:279] https://192.168.72.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:12:40.713806  438295 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:12:41.197299  438295 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0819 19:12:41.203307  438295 api_server.go:279] https://192.168.72.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:12:41.203338  438295 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:12:41.697903  438295 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0819 19:12:41.705142  438295 api_server.go:279] https://192.168.72.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:12:41.705174  438295 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:12:42.197361  438295 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0819 19:12:42.202272  438295 api_server.go:279] https://192.168.72.96:8443/healthz returned 200:
	ok
	I0819 19:12:42.209788  438295 api_server.go:141] control plane version: v1.31.0
	I0819 19:12:42.209819  438295 api_server.go:131] duration metric: took 4.012627755s to wait for apiserver health ...
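The healthz probes above progress from 403 (anonymous request before RBAC bootstrap roles exist) through 500 (post-start hooks still failing) to 200 roughly four seconds after the apiserver process reappeared. A minimal poller in the spirit of api_server.go; TLS verification is skipped here only to keep the sketch short (a real check should trust the cluster CA), and the 500 ms cadence approximates the re-check interval visible in the timestamps:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Insecure only for brevity in this sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.72.96:8443/healthz"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, string(body))
			if resp.StatusCode == http.StatusOK {
				return // apiserver is healthy
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("healthz never returned 200 before the deadline")
}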
	I0819 19:12:42.209829  438295 cni.go:84] Creating CNI manager for ""
	I0819 19:12:42.209836  438295 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:12:42.211612  438295 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 19:12:37.693171  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:39.693397  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:41.693523  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:42.212889  438295 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 19:12:42.223277  438295 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 19:12:42.242392  438295 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:12:42.256273  438295 system_pods.go:59] 8 kube-system pods found
	I0819 19:12:42.256321  438295 system_pods.go:61] "coredns-6f6b679f8f-7ww4z" [bbde00d4-6027-4d8d-b51e-bd68915da166] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 19:12:42.256331  438295 system_pods.go:61] "etcd-embed-certs-024748" [846ff0f0-5399-43fd-8e7b-1f64997cd291] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 19:12:42.256348  438295 system_pods.go:61] "kube-apiserver-embed-certs-024748" [3ff558d6-e82e-47a0-bb81-15244bee6470] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 19:12:42.256366  438295 system_pods.go:61] "kube-controller-manager-embed-certs-024748" [993b82ba-e8e7-4896-a06b-87c4f08d5985] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 19:12:42.256383  438295 system_pods.go:61] "kube-proxy-bmmbh" [1f77f152-f5f4-40f6-9632-1eaa36b9ea31] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0819 19:12:42.256393  438295 system_pods.go:61] "kube-scheduler-embed-certs-024748" [34684d4c-2479-45c5-883b-158cf9f974f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 19:12:42.256403  438295 system_pods.go:61] "metrics-server-6867b74b74-kxcwh" [15f86629-d916-4fdc-9ecf-9cb1b6c83f85] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:12:42.256409  438295 system_pods.go:61] "storage-provisioner" [7acb6ce1-21b6-4cdd-a5cb-76d694fc0a38] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0819 19:12:42.256418  438295 system_pods.go:74] duration metric: took 14.004598ms to wait for pod list to return data ...
	I0819 19:12:42.256428  438295 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:12:42.263308  438295 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:12:42.263340  438295 node_conditions.go:123] node cpu capacity is 2
	I0819 19:12:42.263354  438295 node_conditions.go:105] duration metric: took 6.920993ms to run NodePressure ...
	I0819 19:12:42.263376  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:42.533917  438295 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 19:12:42.545853  438295 kubeadm.go:739] kubelet initialised
	I0819 19:12:42.545886  438295 kubeadm.go:740] duration metric: took 11.931664ms waiting for restarted kubelet to initialise ...
	I0819 19:12:42.545899  438295 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:12:42.553125  438295 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-7ww4z" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:42.559120  438295 pod_ready.go:98] node "embed-certs-024748" hosting pod "coredns-6f6b679f8f-7ww4z" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:42.559148  438295 pod_ready.go:82] duration metric: took 5.984169ms for pod "coredns-6f6b679f8f-7ww4z" in "kube-system" namespace to be "Ready" ...
	E0819 19:12:42.559158  438295 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-024748" hosting pod "coredns-6f6b679f8f-7ww4z" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:42.559164  438295 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:42.564830  438295 pod_ready.go:98] node "embed-certs-024748" hosting pod "etcd-embed-certs-024748" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:42.564852  438295 pod_ready.go:82] duration metric: took 5.681326ms for pod "etcd-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	E0819 19:12:42.564860  438295 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-024748" hosting pod "etcd-embed-certs-024748" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:42.564867  438295 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:42.571982  438295 pod_ready.go:98] node "embed-certs-024748" hosting pod "kube-apiserver-embed-certs-024748" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:42.572027  438295 pod_ready.go:82] duration metric: took 7.150945ms for pod "kube-apiserver-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	E0819 19:12:42.572038  438295 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-024748" hosting pod "kube-apiserver-embed-certs-024748" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:42.572045  438295 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:42.648692  438295 pod_ready.go:98] node "embed-certs-024748" hosting pod "kube-controller-manager-embed-certs-024748" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:42.648721  438295 pod_ready.go:82] duration metric: took 76.665633ms for pod "kube-controller-manager-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	E0819 19:12:42.648730  438295 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-024748" hosting pod "kube-controller-manager-embed-certs-024748" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:42.648737  438295 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-bmmbh" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:43.045619  438295 pod_ready.go:98] node "embed-certs-024748" hosting pod "kube-proxy-bmmbh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:43.045648  438295 pod_ready.go:82] duration metric: took 396.90414ms for pod "kube-proxy-bmmbh" in "kube-system" namespace to be "Ready" ...
	E0819 19:12:43.045658  438295 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-024748" hosting pod "kube-proxy-bmmbh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:43.045665  438295 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:43.446302  438295 pod_ready.go:98] node "embed-certs-024748" hosting pod "kube-scheduler-embed-certs-024748" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:43.446331  438295 pod_ready.go:82] duration metric: took 400.658861ms for pod "kube-scheduler-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	E0819 19:12:43.446342  438295 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-024748" hosting pod "kube-scheduler-embed-certs-024748" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:43.446359  438295 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:43.845457  438295 pod_ready.go:98] node "embed-certs-024748" hosting pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:43.845488  438295 pod_ready.go:82] duration metric: took 399.120328ms for pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace to be "Ready" ...
	E0819 19:12:43.845499  438295 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-024748" hosting pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:43.845506  438295 pod_ready.go:39] duration metric: took 1.299593775s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:12:43.845526  438295 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 19:12:43.864357  438295 ops.go:34] apiserver oom_adj: -16
	I0819 19:12:43.864384  438295 kubeadm.go:597] duration metric: took 8.621518076s to restartPrimaryControlPlane
	I0819 19:12:43.864394  438295 kubeadm.go:394] duration metric: took 8.671725617s to StartCluster
	I0819 19:12:43.864414  438295 settings.go:142] acquiring lock: {Name:mk396fcf49a1d0e69583cf37ff3c819e37118163 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:43.864495  438295 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 19:12:43.866775  438295 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/kubeconfig: {Name:mk8e7b4e1bb7da665111d2acd83eb48882c66853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:43.867073  438295 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.96 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 19:12:43.867296  438295 config.go:182] Loaded profile config "embed-certs-024748": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:12:43.867195  438295 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 19:12:43.867354  438295 addons.go:69] Setting metrics-server=true in profile "embed-certs-024748"
	I0819 19:12:43.867362  438295 addons.go:69] Setting default-storageclass=true in profile "embed-certs-024748"
	I0819 19:12:43.867397  438295 addons.go:234] Setting addon metrics-server=true in "embed-certs-024748"
	I0819 19:12:43.867402  438295 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-024748"
	W0819 19:12:43.867409  438295 addons.go:243] addon metrics-server should already be in state true
	I0819 19:12:43.867437  438295 host.go:66] Checking if "embed-certs-024748" exists ...
	I0819 19:12:43.867354  438295 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-024748"
	I0819 19:12:43.867502  438295 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-024748"
	W0819 19:12:43.867514  438295 addons.go:243] addon storage-provisioner should already be in state true
	I0819 19:12:43.867538  438295 host.go:66] Checking if "embed-certs-024748" exists ...
	I0819 19:12:43.867761  438295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:43.867796  438295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:43.867839  438295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:43.867873  438295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:43.867889  438295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:43.867908  438295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:43.869989  438295 out.go:177] * Verifying Kubernetes components...
	I0819 19:12:43.871464  438295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:12:43.883655  438295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33557
	I0819 19:12:43.883871  438295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33763
	I0819 19:12:43.884279  438295 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:43.884323  438295 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:43.884790  438295 main.go:141] libmachine: Using API Version  1
	I0819 19:12:43.884809  438295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:43.884935  438295 main.go:141] libmachine: Using API Version  1
	I0819 19:12:43.884953  438295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:43.885204  438295 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:43.885275  438295 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:43.885380  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetState
	I0819 19:12:43.885886  438295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:43.885928  438295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:43.886840  438295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40467
	I0819 19:12:43.887309  438295 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:43.887792  438295 main.go:141] libmachine: Using API Version  1
	I0819 19:12:43.887802  438295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:43.888109  438295 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:43.888670  438295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:43.888697  438295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:43.888973  438295 addons.go:234] Setting addon default-storageclass=true in "embed-certs-024748"
	W0819 19:12:43.888988  438295 addons.go:243] addon default-storageclass should already be in state true
	I0819 19:12:43.889020  438295 host.go:66] Checking if "embed-certs-024748" exists ...
	I0819 19:12:43.889270  438295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:43.889304  438295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:43.905278  438295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40907
	I0819 19:12:43.905278  438295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41133
	I0819 19:12:43.905734  438295 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:43.905877  438295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33393
	I0819 19:12:43.905983  438295 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:43.906299  438295 main.go:141] libmachine: Using API Version  1
	I0819 19:12:43.906320  438295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:43.906366  438295 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:43.906443  438295 main.go:141] libmachine: Using API Version  1
	I0819 19:12:43.906457  438295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:43.906822  438295 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:43.906898  438295 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:43.906995  438295 main.go:141] libmachine: Using API Version  1
	I0819 19:12:43.907006  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetState
	I0819 19:12:43.907012  438295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:43.907371  438295 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:43.907473  438295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:43.907523  438295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:43.907534  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetState
	I0819 19:12:43.909443  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:43.909529  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:43.911431  438295 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 19:12:43.911437  438295 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:12:43.913061  438295 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 19:12:43.913090  438295 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 19:12:43.913115  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:43.913180  438295 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:12:43.913199  438295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 19:12:43.913216  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:43.916642  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:43.916813  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:43.917110  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:43.917135  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:43.917166  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:43.917193  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:43.917463  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:43.917668  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:43.917671  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:43.917846  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:43.917867  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:43.918014  438295 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa Username:docker}
	I0819 19:12:43.918032  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:43.918148  438295 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa Username:docker}
	I0819 19:12:43.926337  438295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46687
	I0819 19:12:43.926813  438295 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:43.927333  438295 main.go:141] libmachine: Using API Version  1
	I0819 19:12:43.927354  438295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:43.927762  438295 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:43.927965  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetState
	I0819 19:12:43.929591  438295 main.go:141] libmachine: (embed-certs-024748) Calling .DriverName
	I0819 19:12:43.929910  438295 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 19:12:43.929926  438295 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 19:12:43.929942  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHHostname
	I0819 19:12:43.933032  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:43.933387  438295 main.go:141] libmachine: (embed-certs-024748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:8b:43", ip: ""} in network mk-embed-certs-024748: {Iface:virbr4 ExpiryTime:2024-08-19 20:03:29 +0000 UTC Type:0 Mac:52:54:00:f0:8b:43 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-024748 Clientid:01:52:54:00:f0:8b:43}
	I0819 19:12:43.933406  438295 main.go:141] libmachine: (embed-certs-024748) DBG | domain embed-certs-024748 has defined IP address 192.168.72.96 and MAC address 52:54:00:f0:8b:43 in network mk-embed-certs-024748
	I0819 19:12:43.933626  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHPort
	I0819 19:12:43.933850  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHKeyPath
	I0819 19:12:43.933992  438295 main.go:141] libmachine: (embed-certs-024748) Calling .GetSSHUsername
	I0819 19:12:43.934118  438295 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/embed-certs-024748/id_rsa Username:docker}
	I0819 19:12:44.078901  438295 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:12:44.098542  438295 node_ready.go:35] waiting up to 6m0s for node "embed-certs-024748" to be "Ready" ...
	I0819 19:12:44.180050  438295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:12:44.196186  438295 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 19:12:44.196210  438295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 19:12:44.220001  438295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 19:12:44.231145  438295 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 19:12:44.231180  438295 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 19:12:44.267800  438295 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 19:12:44.267831  438295 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 19:12:44.323078  438295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 19:12:45.276298  438295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.096199779s)
	I0819 19:12:45.276336  438295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.056298773s)
	I0819 19:12:45.276383  438295 main.go:141] libmachine: Making call to close driver server
	I0819 19:12:45.276395  438295 main.go:141] libmachine: (embed-certs-024748) Calling .Close
	I0819 19:12:45.276385  438295 main.go:141] libmachine: Making call to close driver server
	I0819 19:12:45.276462  438295 main.go:141] libmachine: (embed-certs-024748) Calling .Close
	I0819 19:12:45.276714  438295 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:12:45.276757  438295 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:12:45.276777  438295 main.go:141] libmachine: Making call to close driver server
	I0819 19:12:45.276793  438295 main.go:141] libmachine: (embed-certs-024748) Calling .Close
	I0819 19:12:45.276860  438295 main.go:141] libmachine: (embed-certs-024748) DBG | Closing plugin on server side
	I0819 19:12:45.276874  438295 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:12:45.276940  438295 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:12:45.276956  438295 main.go:141] libmachine: Making call to close driver server
	I0819 19:12:45.276964  438295 main.go:141] libmachine: (embed-certs-024748) Calling .Close
	I0819 19:12:45.277134  438295 main.go:141] libmachine: (embed-certs-024748) DBG | Closing plugin on server side
	I0819 19:12:45.277195  438295 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:12:45.277239  438295 main.go:141] libmachine: (embed-certs-024748) DBG | Closing plugin on server side
	I0819 19:12:45.277258  438295 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:12:45.277277  438295 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:12:45.277304  438295 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:12:45.284982  438295 main.go:141] libmachine: Making call to close driver server
	I0819 19:12:45.285007  438295 main.go:141] libmachine: (embed-certs-024748) Calling .Close
	I0819 19:12:45.285304  438295 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:12:45.285324  438295 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:12:45.293973  438295 main.go:141] libmachine: Making call to close driver server
	I0819 19:12:45.293994  438295 main.go:141] libmachine: (embed-certs-024748) Calling .Close
	I0819 19:12:45.294247  438295 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:12:45.294265  438295 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:12:45.294274  438295 main.go:141] libmachine: Making call to close driver server
	I0819 19:12:45.294282  438295 main.go:141] libmachine: (embed-certs-024748) Calling .Close
	I0819 19:12:45.295704  438295 main.go:141] libmachine: (embed-certs-024748) DBG | Closing plugin on server side
	I0819 19:12:45.295787  438295 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:12:45.295813  438295 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:12:45.295828  438295 addons.go:475] Verifying addon metrics-server=true in "embed-certs-024748"
	I0819 19:12:45.297684  438295 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0819 19:12:41.765706  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:41.766129  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:41.766182  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:41.766093  439728 retry.go:31] will retry after 3.464758587s: waiting for machine to come up
	I0819 19:12:45.232640  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:45.233118  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | unable to find current IP address of domain old-k8s-version-104669 in network mk-old-k8s-version-104669
	I0819 19:12:45.233151  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | I0819 19:12:45.233066  439728 retry.go:31] will retry after 3.551527195s: waiting for machine to come up
	I0819 19:12:43.694387  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:46.194627  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:45.298844  438295 addons.go:510] duration metric: took 1.431699078s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0819 19:12:46.103096  438295 node_ready.go:53] node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:48.603205  438295 node_ready.go:53] node "embed-certs-024748" has status "Ready":"False"
	I0819 19:12:50.084809  438001 start.go:364] duration metric: took 55.89796214s to acquireMachinesLock for "no-preload-278232"
	I0819 19:12:50.084884  438001 start.go:96] Skipping create...Using existing machine configuration
	I0819 19:12:50.084895  438001 fix.go:54] fixHost starting: 
	I0819 19:12:50.085416  438001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:12:50.085459  438001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:12:50.103796  438001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41569
	I0819 19:12:50.104278  438001 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:12:50.104900  438001 main.go:141] libmachine: Using API Version  1
	I0819 19:12:50.104934  438001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:12:50.105335  438001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:12:50.105544  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:12:50.105703  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetState
	I0819 19:12:50.107422  438001 fix.go:112] recreateIfNeeded on no-preload-278232: state=Stopped err=<nil>
	I0819 19:12:50.107444  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	W0819 19:12:50.107602  438001 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 19:12:50.109328  438001 out.go:177] * Restarting existing kvm2 VM for "no-preload-278232" ...
	I0819 19:12:48.787197  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.787586  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has current primary IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.787611  438716 main.go:141] libmachine: (old-k8s-version-104669) Found IP for machine: 192.168.50.32
	I0819 19:12:48.787625  438716 main.go:141] libmachine: (old-k8s-version-104669) Reserving static IP address...
	I0819 19:12:48.788104  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "old-k8s-version-104669", mac: "52:54:00:8c:ff:a3", ip: "192.168.50.32"} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:48.788140  438716 main.go:141] libmachine: (old-k8s-version-104669) Reserved static IP address: 192.168.50.32
	I0819 19:12:48.788164  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | skip adding static IP to network mk-old-k8s-version-104669 - found existing host DHCP lease matching {name: "old-k8s-version-104669", mac: "52:54:00:8c:ff:a3", ip: "192.168.50.32"}
	I0819 19:12:48.788186  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | Getting to WaitForSSH function...
	I0819 19:12:48.788202  438716 main.go:141] libmachine: (old-k8s-version-104669) Waiting for SSH to be available...
	I0819 19:12:48.790365  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.790765  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:48.790793  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.790994  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | Using SSH client type: external
	I0819 19:12:48.791034  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | Using SSH private key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa (-rw-------)
	I0819 19:12:48.791073  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 19:12:48.791087  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | About to run SSH command:
	I0819 19:12:48.791103  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | exit 0
	I0819 19:12:48.920087  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | SSH cmd err, output: <nil>: 
	I0819 19:12:48.920464  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetConfigRaw
	I0819 19:12:48.921105  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetIP
	I0819 19:12:48.923637  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.924022  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:48.924053  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.924242  438716 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/config.json ...
	I0819 19:12:48.924429  438716 machine.go:93] provisionDockerMachine start ...
	I0819 19:12:48.924447  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:12:48.924655  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:48.926885  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.927345  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:48.927376  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:48.927527  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:48.927723  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:48.927846  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:48.927968  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:48.928241  438716 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:48.928453  438716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0819 19:12:48.928475  438716 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 19:12:49.039908  438716 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 19:12:49.039944  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetMachineName
	I0819 19:12:49.040200  438716 buildroot.go:166] provisioning hostname "old-k8s-version-104669"
	I0819 19:12:49.040236  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetMachineName
	I0819 19:12:49.040454  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:49.043462  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.043860  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.043892  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.044061  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:49.044256  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.044472  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.044613  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:49.044837  438716 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:49.045014  438716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0819 19:12:49.045027  438716 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-104669 && echo "old-k8s-version-104669" | sudo tee /etc/hostname
	I0819 19:12:49.170660  438716 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-104669
	
	I0819 19:12:49.170695  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:49.173564  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.173855  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.173882  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.174059  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:49.174239  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.174432  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.174564  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:49.174732  438716 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:49.174923  438716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0819 19:12:49.174941  438716 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-104669' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-104669/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-104669' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 19:12:49.298689  438716 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:12:49.298731  438716 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19468-372744/.minikube CaCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19468-372744/.minikube}
	I0819 19:12:49.298764  438716 buildroot.go:174] setting up certificates
	I0819 19:12:49.298778  438716 provision.go:84] configureAuth start
	I0819 19:12:49.298793  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetMachineName
	I0819 19:12:49.299157  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetIP
	I0819 19:12:49.301897  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.302290  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.302326  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.302462  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:49.304592  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.304960  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.304987  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.305150  438716 provision.go:143] copyHostCerts
	I0819 19:12:49.305219  438716 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem, removing ...
	I0819 19:12:49.305243  438716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 19:12:49.305310  438716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem (1082 bytes)
	I0819 19:12:49.305437  438716 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem, removing ...
	I0819 19:12:49.305449  438716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 19:12:49.305477  438716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem (1123 bytes)
	I0819 19:12:49.305571  438716 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem, removing ...
	I0819 19:12:49.305583  438716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 19:12:49.305612  438716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem (1675 bytes)
	I0819 19:12:49.305699  438716 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-104669 san=[127.0.0.1 192.168.50.32 localhost minikube old-k8s-version-104669]
	I0819 19:12:49.394004  438716 provision.go:177] copyRemoteCerts
	I0819 19:12:49.394074  438716 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 19:12:49.394112  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:49.396645  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.396906  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.396951  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.397108  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:49.397321  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.397504  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:49.397709  438716 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa Username:docker}
	I0819 19:12:49.483061  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 19:12:49.508297  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 19:12:49.533821  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0819 19:12:49.560064  438716 provision.go:87] duration metric: took 261.270909ms to configureAuth
	I0819 19:12:49.560093  438716 buildroot.go:189] setting minikube options for container-runtime
	I0819 19:12:49.560310  438716 config.go:182] Loaded profile config "old-k8s-version-104669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0819 19:12:49.560409  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:49.563173  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.563604  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.563633  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.563882  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:49.564075  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.564274  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.564479  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:49.564707  438716 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:49.564925  438716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0819 19:12:49.564948  438716 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 19:12:49.837237  438716 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 19:12:49.837267  438716 machine.go:96] duration metric: took 912.825625ms to provisionDockerMachine
	I0819 19:12:49.837281  438716 start.go:293] postStartSetup for "old-k8s-version-104669" (driver="kvm2")
	I0819 19:12:49.837297  438716 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 19:12:49.837341  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:12:49.837716  438716 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 19:12:49.837757  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:49.840409  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.840759  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.840789  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.840988  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:49.841183  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.841345  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:49.841473  438716 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa Username:docker}
	I0819 19:12:49.931067  438716 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 19:12:49.935562  438716 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 19:12:49.935590  438716 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/addons for local assets ...
	I0819 19:12:49.935694  438716 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/files for local assets ...
	I0819 19:12:49.935815  438716 filesync.go:149] local asset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> 3800092.pem in /etc/ssl/certs
	I0819 19:12:49.935941  438716 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 19:12:49.945418  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:12:49.969454  438716 start.go:296] duration metric: took 132.15677ms for postStartSetup
	I0819 19:12:49.969494  438716 fix.go:56] duration metric: took 20.836438665s for fixHost
	I0819 19:12:49.969517  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:49.972127  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.972502  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:49.972542  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:49.972758  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:49.973000  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.973190  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:49.973355  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:49.973548  438716 main.go:141] libmachine: Using SSH client type: native
	I0819 19:12:49.973753  438716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0819 19:12:49.973766  438716 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 19:12:50.084645  438716 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724094770.056929881
	
	I0819 19:12:50.084672  438716 fix.go:216] guest clock: 1724094770.056929881
	I0819 19:12:50.084681  438716 fix.go:229] Guest: 2024-08-19 19:12:50.056929881 +0000 UTC Remote: 2024-08-19 19:12:49.969497734 +0000 UTC m=+259.472837552 (delta=87.432147ms)
	I0819 19:12:50.084711  438716 fix.go:200] guest clock delta is within tolerance: 87.432147ms
	I0819 19:12:50.084718  438716 start.go:83] releasing machines lock for "old-k8s-version-104669", held for 20.951701853s
	I0819 19:12:50.084752  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:12:50.085050  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetIP
	I0819 19:12:50.087976  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:50.088363  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:50.088391  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:50.088572  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:12:50.089141  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:12:50.089360  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .DriverName
	I0819 19:12:50.089460  438716 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 19:12:50.089526  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:50.089572  438716 ssh_runner.go:195] Run: cat /version.json
	I0819 19:12:50.089599  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHHostname
	I0819 19:12:50.092427  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:50.092591  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:50.092772  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:50.092797  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:50.092933  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:50.092965  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:50.092965  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:50.093147  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:50.093248  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHPort
	I0819 19:12:50.093328  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:50.093409  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHKeyPath
	I0819 19:12:50.093503  438716 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa Username:docker}
	I0819 19:12:50.093532  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetSSHUsername
	I0819 19:12:50.093650  438716 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/old-k8s-version-104669/id_rsa Username:docker}
	I0819 19:12:50.177322  438716 ssh_runner.go:195] Run: systemctl --version
	I0819 19:12:50.200999  438716 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 19:12:50.349276  438716 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 19:12:50.357011  438716 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 19:12:50.357090  438716 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 19:12:50.377691  438716 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 19:12:50.377721  438716 start.go:495] detecting cgroup driver to use...
	I0819 19:12:50.377790  438716 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 19:12:50.394502  438716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 19:12:50.408481  438716 docker.go:217] disabling cri-docker service (if available) ...
	I0819 19:12:50.408556  438716 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 19:12:50.421818  438716 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 19:12:50.434899  438716 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 19:12:50.559399  438716 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 19:12:50.708621  438716 docker.go:233] disabling docker service ...
	I0819 19:12:50.708695  438716 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 19:12:50.726699  438716 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 19:12:50.740605  438716 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 19:12:50.896815  438716 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 19:12:51.037560  438716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 19:12:51.052554  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 19:12:51.072292  438716 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0819 19:12:51.072360  438716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:51.083248  438716 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 19:12:51.083334  438716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:51.093721  438716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:51.105212  438716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:12:51.119349  438716 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 19:12:51.134647  438716 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 19:12:51.144553  438716 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 19:12:51.144598  438716 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 19:12:51.159151  438716 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 19:12:51.171260  438716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:12:51.328931  438716 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 19:12:51.500761  438716 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 19:12:51.500831  438716 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 19:12:51.505982  438716 start.go:563] Will wait 60s for crictl version
	I0819 19:12:51.506057  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:51.510447  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 19:12:51.552892  438716 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 19:12:51.552982  438716 ssh_runner.go:195] Run: crio --version
	I0819 19:12:51.581931  438716 ssh_runner.go:195] Run: crio --version
	I0819 19:12:51.614565  438716 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0819 19:12:50.110718  438001 main.go:141] libmachine: (no-preload-278232) Calling .Start
	I0819 19:12:50.110888  438001 main.go:141] libmachine: (no-preload-278232) Ensuring networks are active...
	I0819 19:12:50.111809  438001 main.go:141] libmachine: (no-preload-278232) Ensuring network default is active
	I0819 19:12:50.112149  438001 main.go:141] libmachine: (no-preload-278232) Ensuring network mk-no-preload-278232 is active
	I0819 19:12:50.112709  438001 main.go:141] libmachine: (no-preload-278232) Getting domain xml...
	I0819 19:12:50.113441  438001 main.go:141] libmachine: (no-preload-278232) Creating domain...
	I0819 19:12:51.494803  438001 main.go:141] libmachine: (no-preload-278232) Waiting to get IP...
	I0819 19:12:51.495733  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:51.496203  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:51.496302  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:51.496187  439925 retry.go:31] will retry after 190.334257ms: waiting for machine to come up
	I0819 19:12:48.694017  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:50.694533  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:51.102764  438295 node_ready.go:49] node "embed-certs-024748" has status "Ready":"True"
	I0819 19:12:51.102791  438295 node_ready.go:38] duration metric: took 7.004204889s for node "embed-certs-024748" to be "Ready" ...
	I0819 19:12:51.102814  438295 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:12:51.109122  438295 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-7ww4z" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.114649  438295 pod_ready.go:93] pod "coredns-6f6b679f8f-7ww4z" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:51.114679  438295 pod_ready.go:82] duration metric: took 5.529339ms for pod "coredns-6f6b679f8f-7ww4z" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.114692  438295 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.121699  438295 pod_ready.go:93] pod "etcd-embed-certs-024748" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:51.121729  438295 pod_ready.go:82] duration metric: took 7.027906ms for pod "etcd-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.121742  438295 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.129040  438295 pod_ready.go:93] pod "kube-apiserver-embed-certs-024748" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:51.129066  438295 pod_ready.go:82] duration metric: took 7.315166ms for pod "kube-apiserver-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.129078  438295 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.636173  438295 pod_ready.go:93] pod "kube-controller-manager-embed-certs-024748" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:51.636226  438295 pod_ready.go:82] duration metric: took 507.130455ms for pod "kube-controller-manager-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.636243  438295 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bmmbh" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.904734  438295 pod_ready.go:93] pod "kube-proxy-bmmbh" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:51.904776  438295 pod_ready.go:82] duration metric: took 268.522999ms for pod "kube-proxy-bmmbh" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:51.904806  438295 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:53.911857  438295 pod_ready.go:103] pod "kube-scheduler-embed-certs-024748" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:51.615865  438716 main.go:141] libmachine: (old-k8s-version-104669) Calling .GetIP
	I0819 19:12:51.618782  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:51.619238  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:ff:a3", ip: ""} in network mk-old-k8s-version-104669: {Iface:virbr2 ExpiryTime:2024-08-19 20:12:41 +0000 UTC Type:0 Mac:52:54:00:8c:ff:a3 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:old-k8s-version-104669 Clientid:01:52:54:00:8c:ff:a3}
	I0819 19:12:51.619268  438716 main.go:141] libmachine: (old-k8s-version-104669) DBG | domain old-k8s-version-104669 has defined IP address 192.168.50.32 and MAC address 52:54:00:8c:ff:a3 in network mk-old-k8s-version-104669
	I0819 19:12:51.619508  438716 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0819 19:12:51.624020  438716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:12:51.640765  438716 kubeadm.go:883] updating cluster {Name:old-k8s-version-104669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-104669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 19:12:51.640905  438716 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 19:12:51.640982  438716 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:12:51.696872  438716 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 19:12:51.696931  438716 ssh_runner.go:195] Run: which lz4
	I0819 19:12:51.702194  438716 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 19:12:51.707228  438716 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 19:12:51.707265  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0819 19:12:53.435062  438716 crio.go:462] duration metric: took 1.732918912s to copy over tarball
	I0819 19:12:53.435149  438716 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 19:12:51.688680  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:51.689287  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:51.689326  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:51.689222  439925 retry.go:31] will retry after 351.943478ms: waiting for machine to come up
	I0819 19:12:52.042810  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:52.043142  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:52.043163  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:52.043070  439925 retry.go:31] will retry after 332.731922ms: waiting for machine to come up
	I0819 19:12:52.377750  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:52.378418  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:52.378442  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:52.378377  439925 retry.go:31] will retry after 601.079013ms: waiting for machine to come up
	I0819 19:12:52.980930  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:52.981446  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:52.981474  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:52.981396  439925 retry.go:31] will retry after 621.686612ms: waiting for machine to come up
	I0819 19:12:53.605240  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:53.605716  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:53.605751  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:53.605666  439925 retry.go:31] will retry after 627.115747ms: waiting for machine to come up
	I0819 19:12:54.234095  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:54.234590  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:54.234613  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:54.234541  439925 retry.go:31] will retry after 1.137953362s: waiting for machine to come up
	I0819 19:12:55.373941  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:55.374412  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:55.374440  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:55.374368  439925 retry.go:31] will retry after 1.437610965s: waiting for machine to come up
	I0819 19:12:52.696277  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:54.704463  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:57.195001  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:55.412162  438295 pod_ready.go:93] pod "kube-scheduler-embed-certs-024748" in "kube-system" namespace has status "Ready":"True"
	I0819 19:12:55.412198  438295 pod_ready.go:82] duration metric: took 3.507380249s for pod "kube-scheduler-embed-certs-024748" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:55.412214  438295 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace to be "Ready" ...
	I0819 19:12:57.419600  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:56.399941  438716 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.96472478s)
	I0819 19:12:56.399971  438716 crio.go:469] duration metric: took 2.964877539s to extract the tarball
	I0819 19:12:56.399986  438716 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 19:12:56.447075  438716 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:12:56.491773  438716 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 19:12:56.491800  438716 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 19:12:56.491876  438716 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:12:56.491876  438716 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:12:56.491956  438716 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:12:56.491961  438716 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:12:56.492041  438716 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:12:56.492059  438716 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0819 19:12:56.492280  438716 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0819 19:12:56.492494  438716 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0819 19:12:56.493750  438716 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:12:56.493762  438716 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0819 19:12:56.493756  438716 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:12:56.493762  438716 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:12:56.493765  438716 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:12:56.493831  438716 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:12:56.493806  438716 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0819 19:12:56.494099  438716 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0819 19:12:56.694872  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0819 19:12:56.711504  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0819 19:12:56.754045  438716 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0819 19:12:56.754096  438716 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0819 19:12:56.754136  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:56.770451  438716 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0819 19:12:56.770510  438716 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0819 19:12:56.770574  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:56.770573  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 19:12:56.804839  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 19:12:56.804872  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 19:12:56.825837  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:12:56.832063  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0819 19:12:56.834072  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:12:56.837029  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:12:56.837697  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:12:56.902843  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 19:12:56.902930  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 19:12:57.020902  438716 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0819 19:12:57.020962  438716 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:12:57.020988  438716 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0819 19:12:57.021017  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:57.021025  438716 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0819 19:12:57.021098  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:57.023363  438716 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0819 19:12:57.023411  438716 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:12:57.023457  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:57.023541  438716 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0819 19:12:57.023569  438716 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:12:57.023605  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:57.034648  438716 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0819 19:12:57.034698  438716 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:12:57.034719  438716 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0819 19:12:57.034748  438716 ssh_runner.go:195] Run: which crictl
	I0819 19:12:57.039577  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 19:12:57.039648  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 19:12:57.039715  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:12:57.041644  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:12:57.041983  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:12:57.045383  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:12:57.149677  438716 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0819 19:12:57.164701  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:12:57.164821  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 19:12:57.202353  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:12:57.202434  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:12:57.202465  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:12:57.258824  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 19:12:57.258858  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 19:12:57.285756  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 19:12:57.326148  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 19:12:57.326237  438716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 19:12:57.378322  438716 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0819 19:12:57.378369  438716 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0819 19:12:57.390369  438716 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0819 19:12:57.419554  438716 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0819 19:12:57.419627  438716 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0819 19:12:57.438485  438716 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:12:57.583634  438716 cache_images.go:92] duration metric: took 1.091812972s to LoadCachedImages
	W0819 19:12:57.583757  438716 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0819 19:12:57.583777  438716 kubeadm.go:934] updating node { 192.168.50.32 8443 v1.20.0 crio true true} ...
	I0819 19:12:57.583915  438716 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-104669 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-104669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 19:12:57.584007  438716 ssh_runner.go:195] Run: crio config
	I0819 19:12:57.636714  438716 cni.go:84] Creating CNI manager for ""
	I0819 19:12:57.636738  438716 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:12:57.636752  438716 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 19:12:57.636776  438716 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.32 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-104669 NodeName:old-k8s-version-104669 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0819 19:12:57.636951  438716 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.32
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-104669"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 19:12:57.637028  438716 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0819 19:12:57.648002  438716 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 19:12:57.648093  438716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 19:12:57.658889  438716 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0819 19:12:57.677316  438716 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 19:12:57.695825  438716 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0819 19:12:57.715396  438716 ssh_runner.go:195] Run: grep 192.168.50.32	control-plane.minikube.internal$ /etc/hosts
	I0819 19:12:57.719886  438716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:12:57.733179  438716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:12:57.854139  438716 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:12:57.871590  438716 certs.go:68] Setting up /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669 for IP: 192.168.50.32
	I0819 19:12:57.871619  438716 certs.go:194] generating shared ca certs ...
	I0819 19:12:57.871642  438716 certs.go:226] acquiring lock for ca certs: {Name:mk639e03f593e0bccac045f6e9f5ba3b96cc81e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:57.871850  438716 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key
	I0819 19:12:57.871916  438716 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key
	I0819 19:12:57.871930  438716 certs.go:256] generating profile certs ...
	I0819 19:12:57.872060  438716 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/client.key
	I0819 19:12:57.872131  438716 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/apiserver.key.7101f8a0
	I0819 19:12:57.872197  438716 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/proxy-client.key
	I0819 19:12:57.872336  438716 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem (1338 bytes)
	W0819 19:12:57.872365  438716 certs.go:480] ignoring /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009_empty.pem, impossibly tiny 0 bytes
	I0819 19:12:57.872371  438716 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 19:12:57.872390  438716 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem (1082 bytes)
	I0819 19:12:57.872419  438716 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem (1123 bytes)
	I0819 19:12:57.872441  438716 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem (1675 bytes)
	I0819 19:12:57.872488  438716 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:12:57.873259  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 19:12:57.907576  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 19:12:57.943535  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 19:12:57.977770  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 19:12:58.021213  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0819 19:12:58.051043  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 19:12:58.080442  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 19:12:58.110888  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/old-k8s-version-104669/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 19:12:58.158635  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 19:12:58.184168  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem --> /usr/share/ca-certificates/380009.pem (1338 bytes)
	I0819 19:12:58.210064  438716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /usr/share/ca-certificates/3800092.pem (1708 bytes)
	I0819 19:12:58.235366  438716 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 19:12:58.254667  438716 ssh_runner.go:195] Run: openssl version
	I0819 19:12:58.260977  438716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3800092.pem && ln -fs /usr/share/ca-certificates/3800092.pem /etc/ssl/certs/3800092.pem"
	I0819 19:12:58.272995  438716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3800092.pem
	I0819 19:12:58.278056  438716 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:56 /usr/share/ca-certificates/3800092.pem
	I0819 19:12:58.278154  438716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3800092.pem
	I0819 19:12:58.284420  438716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3800092.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 19:12:58.296945  438716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 19:12:58.309288  438716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:58.314695  438716 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:58.314774  438716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:12:58.321016  438716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 19:12:58.332728  438716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/380009.pem && ln -fs /usr/share/ca-certificates/380009.pem /etc/ssl/certs/380009.pem"
	I0819 19:12:58.344766  438716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/380009.pem
	I0819 19:12:58.349610  438716 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:56 /usr/share/ca-certificates/380009.pem
	I0819 19:12:58.349681  438716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/380009.pem
	I0819 19:12:58.355942  438716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/380009.pem /etc/ssl/certs/51391683.0"
	I0819 19:12:58.368869  438716 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 19:12:58.373681  438716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 19:12:58.380415  438716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 19:12:58.386741  438716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 19:12:58.393362  438716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 19:12:58.399665  438716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 19:12:58.406108  438716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
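	The repeated "openssl x509 -noout -in <cert> -checkend 86400" runs above confirm that each control-plane certificate stays valid for at least the next 24 hours. A minimal Go sketch of the same check, shown only for illustration (the certificate path is taken from the log; the helper name and structure are assumptions, not minikube's own code):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM-encoded certificate at path expires
	// inside the given window, mirroring `openssl x509 -checkend <seconds>`,
	// which exits non-zero when the certificate expires within that many seconds.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block found in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", expiring)
	}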
	I0819 19:12:58.412486  438716 kubeadm.go:392] StartCluster: {Name:old-k8s-version-104669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-104669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:12:58.412606  438716 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 19:12:58.412655  438716 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:12:58.462379  438716 cri.go:89] found id: ""
	I0819 19:12:58.462463  438716 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 19:12:58.474029  438716 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 19:12:58.474054  438716 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 19:12:58.474112  438716 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 19:12:58.485755  438716 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:12:58.486762  438716 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-104669" does not appear in /home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 19:12:58.487464  438716 kubeconfig.go:62] /home/jenkins/minikube-integration/19468-372744/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-104669" cluster setting kubeconfig missing "old-k8s-version-104669" context setting]
	I0819 19:12:58.489361  438716 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/kubeconfig: {Name:mk8e7b4e1bb7da665111d2acd83eb48882c66853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:12:58.508865  438716 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 19:12:58.520577  438716 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.32
	I0819 19:12:58.520622  438716 kubeadm.go:1160] stopping kube-system containers ...
	I0819 19:12:58.520637  438716 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 19:12:58.520728  438716 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:12:58.561900  438716 cri.go:89] found id: ""
	I0819 19:12:58.561984  438716 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 19:12:58.580483  438716 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:12:58.591734  438716 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:12:58.591754  438716 kubeadm.go:157] found existing configuration files:
	
	I0819 19:12:58.591804  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 19:12:58.601694  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:12:58.601771  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:12:58.612132  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 19:12:58.621911  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:12:58.621984  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:12:58.631525  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 19:12:58.640802  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:12:58.640872  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:12:58.650216  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 19:12:58.660647  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:12:58.660720  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 19:12:58.669992  438716 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 19:12:58.679709  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:58.809302  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:12:59.757994  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:00.006386  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:00.136752  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:00.222424  438716 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:13:00.222542  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:12:56.813279  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:56.813777  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:56.813807  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:56.813725  439925 retry.go:31] will retry after 1.504132921s: waiting for machine to come up
	I0819 19:12:58.319408  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:12:58.319880  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:12:58.319910  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:12:58.319832  439925 retry.go:31] will retry after 1.921699926s: waiting for machine to come up
	I0819 19:13:00.243504  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:00.243995  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:13:00.244021  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:13:00.243952  439925 retry.go:31] will retry after 2.040704792s: waiting for machine to come up
	I0819 19:12:59.195084  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:01.693648  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:12:59.419644  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:01.918769  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:00.723213  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:01.222908  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:01.723081  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:02.223465  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:02.722589  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:03.222706  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:03.722930  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:04.222826  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:04.722638  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:05.222666  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:02.287044  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:02.287490  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:13:02.287526  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:13:02.287416  439925 retry.go:31] will retry after 2.562055052s: waiting for machine to come up
	I0819 19:13:04.852682  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:04.853097  438001 main.go:141] libmachine: (no-preload-278232) DBG | unable to find current IP address of domain no-preload-278232 in network mk-no-preload-278232
	I0819 19:13:04.853125  438001 main.go:141] libmachine: (no-preload-278232) DBG | I0819 19:13:04.853062  439925 retry.go:31] will retry after 3.627213972s: waiting for machine to come up
	I0819 19:13:04.194149  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:06.194831  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:04.418550  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:06.919083  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:05.723627  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:06.222663  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:06.723230  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:07.222666  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:07.722653  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:08.222861  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:08.723248  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:09.222831  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:09.722738  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:10.223069  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:08.484125  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.484586  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has current primary IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.484612  438001 main.go:141] libmachine: (no-preload-278232) Found IP for machine: 192.168.39.106
	I0819 19:13:08.484642  438001 main.go:141] libmachine: (no-preload-278232) Reserving static IP address...
	I0819 19:13:08.485049  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "no-preload-278232", mac: "52:54:00:14:f3:b1", ip: "192.168.39.106"} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:08.485091  438001 main.go:141] libmachine: (no-preload-278232) Reserved static IP address: 192.168.39.106
	I0819 19:13:08.485112  438001 main.go:141] libmachine: (no-preload-278232) DBG | skip adding static IP to network mk-no-preload-278232 - found existing host DHCP lease matching {name: "no-preload-278232", mac: "52:54:00:14:f3:b1", ip: "192.168.39.106"}
	I0819 19:13:08.485129  438001 main.go:141] libmachine: (no-preload-278232) DBG | Getting to WaitForSSH function...
	I0819 19:13:08.485145  438001 main.go:141] libmachine: (no-preload-278232) Waiting for SSH to be available...
	I0819 19:13:08.486998  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.487266  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:08.487290  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.487402  438001 main.go:141] libmachine: (no-preload-278232) DBG | Using SSH client type: external
	I0819 19:13:08.487429  438001 main.go:141] libmachine: (no-preload-278232) DBG | Using SSH private key: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa (-rw-------)
	I0819 19:13:08.487463  438001 main.go:141] libmachine: (no-preload-278232) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.106 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 19:13:08.487476  438001 main.go:141] libmachine: (no-preload-278232) DBG | About to run SSH command:
	I0819 19:13:08.487487  438001 main.go:141] libmachine: (no-preload-278232) DBG | exit 0
	I0819 19:13:08.611459  438001 main.go:141] libmachine: (no-preload-278232) DBG | SSH cmd err, output: <nil>: 
	I0819 19:13:08.611934  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetConfigRaw
	I0819 19:13:08.612610  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetIP
	I0819 19:13:08.615212  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.615564  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:08.615594  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.615919  438001 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232/config.json ...
	I0819 19:13:08.616140  438001 machine.go:93] provisionDockerMachine start ...
	I0819 19:13:08.616162  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:08.616387  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:08.618650  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.618956  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:08.618988  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.619098  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:08.619291  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:08.619433  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:08.619569  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:08.619727  438001 main.go:141] libmachine: Using SSH client type: native
	I0819 19:13:08.619893  438001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0819 19:13:08.619903  438001 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 19:13:08.724912  438001 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 19:13:08.724955  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetMachineName
	I0819 19:13:08.725264  438001 buildroot.go:166] provisioning hostname "no-preload-278232"
	I0819 19:13:08.725291  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetMachineName
	I0819 19:13:08.725486  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:08.728810  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.729237  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:08.729274  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.729434  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:08.729667  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:08.729887  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:08.730067  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:08.730244  438001 main.go:141] libmachine: Using SSH client type: native
	I0819 19:13:08.730490  438001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0819 19:13:08.730511  438001 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-278232 && echo "no-preload-278232" | sudo tee /etc/hostname
	I0819 19:13:08.854474  438001 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-278232
	
	I0819 19:13:08.854499  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:08.857179  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.857511  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:08.857540  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.857713  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:08.857912  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:08.858075  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:08.858189  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:08.858356  438001 main.go:141] libmachine: Using SSH client type: native
	I0819 19:13:08.858556  438001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0819 19:13:08.858579  438001 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-278232' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-278232/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-278232' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 19:13:08.973053  438001 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 19:13:08.973090  438001 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19468-372744/.minikube CaCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19468-372744/.minikube}
	I0819 19:13:08.973115  438001 buildroot.go:174] setting up certificates
	I0819 19:13:08.973125  438001 provision.go:84] configureAuth start
	I0819 19:13:08.973135  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetMachineName
	I0819 19:13:08.973417  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetIP
	I0819 19:13:08.976100  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.976459  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:08.976487  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.976690  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:08.978902  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.979342  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:08.979370  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:08.979530  438001 provision.go:143] copyHostCerts
	I0819 19:13:08.979605  438001 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem, removing ...
	I0819 19:13:08.979628  438001 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem
	I0819 19:13:08.979717  438001 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/ca.pem (1082 bytes)
	I0819 19:13:08.979830  438001 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem, removing ...
	I0819 19:13:08.979842  438001 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem
	I0819 19:13:08.979874  438001 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/cert.pem (1123 bytes)
	I0819 19:13:08.979963  438001 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem, removing ...
	I0819 19:13:08.979974  438001 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem
	I0819 19:13:08.980002  438001 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19468-372744/.minikube/key.pem (1675 bytes)
	I0819 19:13:08.980075  438001 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem org=jenkins.no-preload-278232 san=[127.0.0.1 192.168.39.106 localhost minikube no-preload-278232]
	I0819 19:13:09.092643  438001 provision.go:177] copyRemoteCerts
	I0819 19:13:09.092707  438001 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 19:13:09.092739  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:09.095542  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.095929  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:09.095960  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.096099  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:09.096318  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:09.096481  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:09.096635  438001 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa Username:docker}
	I0819 19:13:09.179713  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 19:13:09.206363  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0819 19:13:09.231180  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 19:13:09.256764  438001 provision.go:87] duration metric: took 283.626537ms to configureAuth
	I0819 19:13:09.256810  438001 buildroot.go:189] setting minikube options for container-runtime
	I0819 19:13:09.256993  438001 config.go:182] Loaded profile config "no-preload-278232": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:13:09.257079  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:09.259661  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.260061  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:09.260094  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.260253  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:09.260461  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:09.260640  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:09.260796  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:09.260973  438001 main.go:141] libmachine: Using SSH client type: native
	I0819 19:13:09.261150  438001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0819 19:13:09.261166  438001 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 19:13:09.534325  438001 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 19:13:09.534357  438001 machine.go:96] duration metric: took 918.201944ms to provisionDockerMachine
	I0819 19:13:09.534371  438001 start.go:293] postStartSetup for "no-preload-278232" (driver="kvm2")
	I0819 19:13:09.534387  438001 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 19:13:09.534412  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:09.534794  438001 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 19:13:09.534826  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:09.537623  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.537974  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:09.538002  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.538138  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:09.538349  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:09.538534  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:09.538669  438001 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa Username:docker}
	I0819 19:13:09.627085  438001 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 19:13:09.631714  438001 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 19:13:09.631740  438001 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/addons for local assets ...
	I0819 19:13:09.631817  438001 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-372744/.minikube/files for local assets ...
	I0819 19:13:09.631911  438001 filesync.go:149] local asset: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem -> 3800092.pem in /etc/ssl/certs
	I0819 19:13:09.632035  438001 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 19:13:09.642942  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:13:09.669242  438001 start.go:296] duration metric: took 134.853886ms for postStartSetup
	I0819 19:13:09.669294  438001 fix.go:56] duration metric: took 19.584399031s for fixHost
	I0819 19:13:09.669325  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:09.672072  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.672461  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:09.672494  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.672635  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:09.672937  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:09.673116  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:09.673331  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:09.673517  438001 main.go:141] libmachine: Using SSH client type: native
	I0819 19:13:09.673699  438001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0819 19:13:09.673717  438001 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 19:13:09.780601  438001 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724094789.749951838
	
	I0819 19:13:09.780628  438001 fix.go:216] guest clock: 1724094789.749951838
	I0819 19:13:09.780640  438001 fix.go:229] Guest: 2024-08-19 19:13:09.749951838 +0000 UTC Remote: 2024-08-19 19:13:09.669301343 +0000 UTC m=+358.073543000 (delta=80.650495ms)
	I0819 19:13:09.780668  438001 fix.go:200] guest clock delta is within tolerance: 80.650495ms
	I0819 19:13:09.780676  438001 start.go:83] releasing machines lock for "no-preload-278232", held for 19.69582363s
	I0819 19:13:09.780703  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:09.781042  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetIP
	I0819 19:13:09.783578  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.783967  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:09.783996  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.784149  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:09.784649  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:09.784855  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:09.784946  438001 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 19:13:09.785037  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:09.785073  438001 ssh_runner.go:195] Run: cat /version.json
	I0819 19:13:09.785107  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:09.787346  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.787706  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.787763  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:09.787788  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.787977  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:09.788162  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:09.788226  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:09.788251  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:09.788327  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:09.788447  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:09.788500  438001 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa Username:docker}
	I0819 19:13:09.788622  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:09.788805  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:09.788994  438001 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa Username:docker}
	I0819 19:13:09.864596  438001 ssh_runner.go:195] Run: systemctl --version
	I0819 19:13:09.890038  438001 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 19:13:10.039016  438001 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 19:13:10.045269  438001 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 19:13:10.045352  438001 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 19:13:10.061345  438001 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 19:13:10.061380  438001 start.go:495] detecting cgroup driver to use...
	I0819 19:13:10.061467  438001 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 19:13:10.079229  438001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 19:13:10.094396  438001 docker.go:217] disabling cri-docker service (if available) ...
	I0819 19:13:10.094471  438001 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 19:13:10.109307  438001 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 19:13:10.123389  438001 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 19:13:10.241132  438001 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 19:13:10.395346  438001 docker.go:233] disabling docker service ...
	I0819 19:13:10.395444  438001 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 19:13:10.409604  438001 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 19:13:10.424149  438001 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 19:13:10.544180  438001 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 19:13:10.671038  438001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 19:13:10.685563  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 19:13:10.704754  438001 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 19:13:10.704819  438001 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:13:10.716002  438001 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 19:13:10.716077  438001 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:13:10.728085  438001 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:13:10.739292  438001 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:13:10.750083  438001 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 19:13:10.760832  438001 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:13:10.771231  438001 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:13:10.788807  438001 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 19:13:10.799472  438001 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 19:13:10.809354  438001 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 19:13:10.809432  438001 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 19:13:10.824339  438001 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 19:13:10.833761  438001 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:13:10.953587  438001 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 19:13:11.091264  438001 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 19:13:11.091336  438001 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 19:13:11.096092  438001 start.go:563] Will wait 60s for crictl version
	I0819 19:13:11.096161  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:13:11.100040  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 19:13:11.142512  438001 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 19:13:11.142612  438001 ssh_runner.go:195] Run: crio --version
	I0819 19:13:11.176967  438001 ssh_runner.go:195] Run: crio --version
	I0819 19:13:11.208687  438001 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 19:13:11.209819  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetIP
	I0819 19:13:11.212533  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:11.212876  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:11.212900  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:11.213098  438001 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 19:13:11.217234  438001 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:13:11.229995  438001 kubeadm.go:883] updating cluster {Name:no-preload-278232 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-278232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.106 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 19:13:11.230124  438001 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 19:13:11.230168  438001 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 19:13:11.265699  438001 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 19:13:11.265730  438001 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 19:13:11.265816  438001 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 19:13:11.265836  438001 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 19:13:11.265843  438001 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0819 19:13:11.265816  438001 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:13:11.265875  438001 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 19:13:11.265941  438001 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0819 19:13:11.265955  438001 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 19:13:11.266027  438001 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 19:13:11.267344  438001 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0819 19:13:11.267364  438001 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 19:13:11.267344  438001 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 19:13:11.267408  438001 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0819 19:13:11.267349  438001 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 19:13:11.267445  438001 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 19:13:11.267408  438001 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 19:13:11.267407  438001 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:13:11.411117  438001 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0819 19:13:11.435022  438001 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0819 19:13:11.437707  438001 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0819 19:13:11.439226  438001 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 19:13:11.446384  438001 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0819 19:13:11.448011  438001 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0819 19:13:11.463921  438001 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0819 19:13:11.476902  438001 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0819 19:13:11.476956  438001 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 19:13:11.477011  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:13:11.561762  438001 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0819 19:13:11.561827  438001 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 19:13:11.561889  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:13:08.694513  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:11.193505  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:09.419409  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:11.919413  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:13.931174  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:10.722882  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:11.223650  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:11.722917  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:12.223146  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:12.723410  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:13.222692  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:13.722636  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:14.223152  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:14.722661  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:15.223297  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:11.657022  438001 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0819 19:13:11.657071  438001 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0819 19:13:11.657092  438001 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0819 19:13:11.657123  438001 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 19:13:11.657127  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:13:11.657164  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:13:11.657176  438001 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0819 19:13:11.657195  438001 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0819 19:13:11.657217  438001 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 19:13:11.657216  438001 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 19:13:11.657254  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:13:11.657260  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:13:11.729671  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0819 19:13:11.729903  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0819 19:13:11.730476  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 19:13:11.730489  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0819 19:13:11.730510  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0819 19:13:11.730544  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0819 19:13:11.853411  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0819 19:13:11.853647  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0819 19:13:11.872296  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0819 19:13:11.872370  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0819 19:13:11.876801  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 19:13:11.877002  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0819 19:13:11.982642  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0819 19:13:12.007940  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0819 19:13:12.031132  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0819 19:13:12.031150  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0819 19:13:12.031163  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 19:13:12.031275  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0819 19:13:12.130991  438001 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0819 19:13:12.131099  438001 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0819 19:13:12.130994  438001 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0819 19:13:12.131231  438001 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0819 19:13:12.162852  438001 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0819 19:13:12.162911  438001 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0819 19:13:12.162916  438001 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0819 19:13:12.162967  438001 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0819 19:13:12.162984  438001 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0819 19:13:12.162984  438001 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0819 19:13:12.163035  438001 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0819 19:13:12.163044  438001 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0819 19:13:12.163053  438001 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0819 19:13:12.163055  438001 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0819 19:13:12.163086  438001 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0819 19:13:12.163095  438001 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0819 19:13:12.177377  438001 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0819 19:13:12.177438  438001 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0819 19:13:12.177438  438001 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0819 19:13:12.229301  438001 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:13:14.745129  438001 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.582015913s)
	I0819 19:13:14.745162  438001 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0819 19:13:14.745196  438001 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0: (2.582131532s)
	I0819 19:13:14.745215  438001 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.515891614s)
	I0819 19:13:14.745232  438001 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0819 19:13:14.745200  438001 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0819 19:13:14.745247  438001 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0819 19:13:14.745285  438001 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:13:14.745298  438001 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0819 19:13:14.745325  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:13:13.693752  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:15.693871  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:16.419552  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:18.920189  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:15.723053  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:16.223486  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:16.722740  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:17.223337  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:17.723160  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:18.222651  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:18.723509  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:19.223686  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:19.723376  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:20.222953  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:16.728557  438001 ssh_runner.go:235] Completed: which crictl: (1.983204878s)
	I0819 19:13:16.728614  438001 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.983294709s)
	I0819 19:13:16.728635  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:13:16.728642  438001 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0819 19:13:16.728673  438001 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0819 19:13:16.728714  438001 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0819 19:13:16.771574  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:13:20.532388  438001 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.760772797s)
	I0819 19:13:20.532421  438001 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.80368813s)
	I0819 19:13:20.532437  438001 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0819 19:13:20.532469  438001 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0819 19:13:20.532480  438001 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:13:20.532500  438001 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0819 19:13:18.193852  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:20.692752  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:21.419154  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:23.419271  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:20.723620  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:21.223286  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:21.723663  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:22.223594  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:22.723415  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:23.223643  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:23.723395  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:24.223476  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:24.723236  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:25.223620  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:22.500967  438001 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.968455152s)
	I0819 19:13:22.501030  438001 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0819 19:13:22.501036  438001 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.968509024s)
	I0819 19:13:22.501068  438001 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0819 19:13:22.501108  438001 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0819 19:13:22.501138  438001 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0819 19:13:22.501175  438001 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0819 19:13:22.506796  438001 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0819 19:13:23.962797  438001 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.461519717s)
	I0819 19:13:23.962838  438001 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0819 19:13:23.962876  438001 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0819 19:13:23.962959  438001 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0819 19:13:25.927805  438001 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.964816993s)
	I0819 19:13:25.927836  438001 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0819 19:13:25.927868  438001 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0819 19:13:25.927922  438001 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0819 19:13:26.572310  438001 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19468-372744/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0819 19:13:26.572368  438001 cache_images.go:123] Successfully loaded all cached images
	I0819 19:13:26.572376  438001 cache_images.go:92] duration metric: took 15.306632126s to LoadCachedImages
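Editor's note: the lines above show the no-preload image path: each cached tarball under /var/lib/minikube/images is stat'ed (the "copy: skipping ... (exists)" lines mean the scp transfer was skipped because the file was already on the VM), and each is then loaded into the cri-o image store with "sudo podman load -i". The sketch below is an illustrative Go rendering of that load loop under the paths and image names taken from the log; it is not minikube's cache_images.go, and it simply skips tarballs that are missing rather than transferring them.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// Load pre-staged image tarballs into the runtime with podman.
// Illustrative sketch of the flow logged above, not minikube's code.
func main() {
	dir := "/var/lib/minikube/images"
	images := []string{
		"kube-apiserver_v1.31.0",
		"coredns_v1.11.1",
		"etcd_3.5.15-0",
		"kube-proxy_v1.31.0",
		"kube-scheduler_v1.31.0",
		"kube-controller-manager_v1.31.0",
		"storage-provisioner_v5",
	}
	for _, name := range images {
		tarball := filepath.Join(dir, name)
		if _, err := os.Stat(tarball); err != nil {
			// In the real flow a missing file would be copied first;
			// here we just report and move on.
			fmt.Printf("skipping %s: %v\n", name, err)
			continue
		}
		// Counterpart of the "Loading image:" + "podman load -i" pairs above.
		out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
		if err != nil {
			fmt.Printf("load %s failed: %v\n%s", name, err, out)
			continue
		}
		fmt.Printf("loaded %s\n", name)
	}
}
```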
	I0819 19:13:26.572397  438001 kubeadm.go:934] updating node { 192.168.39.106 8443 v1.31.0 crio true true} ...
	I0819 19:13:26.572549  438001 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-278232 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.106
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-278232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 19:13:26.572635  438001 ssh_runner.go:195] Run: crio config
	I0819 19:13:26.623839  438001 cni.go:84] Creating CNI manager for ""
	I0819 19:13:26.623862  438001 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:13:26.623872  438001 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 19:13:26.623896  438001 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.106 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-278232 NodeName:no-preload-278232 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.106"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.106 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 19:13:26.624138  438001 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.106
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-278232"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.106
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.106"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 19:13:26.624226  438001 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
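Editor's note: the generated file written to /var/tmp/minikube/kubeadm.yaml.new (and shown in full above) is one multi-document YAML: an InitConfiguration and a ClusterConfiguration (kubeadm.k8s.io/v1beta3), a KubeletConfiguration, and a KubeProxyConfiguration, separated by "---". The small Go sketch below only splits such a file and reports each document's apiVersion/kind; it is an illustrative helper, not part of minikube, and assumes the path from the log.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// Print the apiVersion and kind of each document in a multi-document
// kubeadm config such as the one generated above. Illustrative only.
func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new") // path taken from the log
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for i, doc := range strings.Split(string(data), "\n---") {
		var apiVersion, kind string
		for _, line := range strings.Split(doc, "\n") {
			trimmed := strings.TrimSpace(line)
			if strings.HasPrefix(trimmed, "apiVersion:") {
				apiVersion = strings.TrimSpace(strings.TrimPrefix(trimmed, "apiVersion:"))
			}
			if strings.HasPrefix(trimmed, "kind:") {
				kind = strings.TrimSpace(strings.TrimPrefix(trimmed, "kind:"))
			}
		}
		fmt.Printf("document %d: %s %s\n", i+1, apiVersion, kind)
	}
}
```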
	I0819 19:13:22.693093  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:24.694313  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:26.695312  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:25.918793  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:27.919721  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:25.722593  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:26.223582  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:26.722927  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:27.223364  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:27.723223  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:28.223458  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:28.723262  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:29.222823  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:29.722837  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:30.223196  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:26.634770  438001 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 19:13:26.634844  438001 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 19:13:26.644193  438001 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0819 19:13:26.661226  438001 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 19:13:26.677413  438001 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0819 19:13:26.696260  438001 ssh_runner.go:195] Run: grep 192.168.39.106	control-plane.minikube.internal$ /etc/hosts
	I0819 19:13:26.700029  438001 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.106	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 19:13:26.711667  438001 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:13:26.849658  438001 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:13:26.867185  438001 certs.go:68] Setting up /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232 for IP: 192.168.39.106
	I0819 19:13:26.867216  438001 certs.go:194] generating shared ca certs ...
	I0819 19:13:26.867240  438001 certs.go:226] acquiring lock for ca certs: {Name:mk639e03f593e0bccac045f6e9f5ba3b96cc81e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:13:26.867431  438001 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key
	I0819 19:13:26.867489  438001 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key
	I0819 19:13:26.867502  438001 certs.go:256] generating profile certs ...
	I0819 19:13:26.867600  438001 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232/client.key
	I0819 19:13:26.867705  438001 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232/apiserver.key.4086521c
	I0819 19:13:26.867759  438001 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232/proxy-client.key
	I0819 19:13:26.867936  438001 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem (1338 bytes)
	W0819 19:13:26.867980  438001 certs.go:480] ignoring /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009_empty.pem, impossibly tiny 0 bytes
	I0819 19:13:26.867995  438001 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 19:13:26.868037  438001 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/ca.pem (1082 bytes)
	I0819 19:13:26.868075  438001 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/cert.pem (1123 bytes)
	I0819 19:13:26.868107  438001 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/certs/key.pem (1675 bytes)
	I0819 19:13:26.868171  438001 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem (1708 bytes)
	I0819 19:13:26.869217  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 19:13:26.903250  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 19:13:26.928593  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 19:13:26.957098  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 19:13:26.982422  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 19:13:27.009252  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 19:13:27.038043  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 19:13:27.075400  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/no-preload-278232/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 19:13:27.101568  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/ssl/certs/3800092.pem --> /usr/share/ca-certificates/3800092.pem (1708 bytes)
	I0819 19:13:27.127162  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 19:13:27.152327  438001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-372744/.minikube/certs/380009.pem --> /usr/share/ca-certificates/380009.pem (1338 bytes)
	I0819 19:13:27.176207  438001 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 19:13:27.194919  438001 ssh_runner.go:195] Run: openssl version
	I0819 19:13:27.201002  438001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3800092.pem && ln -fs /usr/share/ca-certificates/3800092.pem /etc/ssl/certs/3800092.pem"
	I0819 19:13:27.212050  438001 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3800092.pem
	I0819 19:13:27.216607  438001 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:56 /usr/share/ca-certificates/3800092.pem
	I0819 19:13:27.216663  438001 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3800092.pem
	I0819 19:13:27.222437  438001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3800092.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 19:13:27.234112  438001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 19:13:27.245472  438001 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:13:27.250203  438001 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:13:27.250257  438001 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 19:13:27.256045  438001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 19:13:27.266746  438001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/380009.pem && ln -fs /usr/share/ca-certificates/380009.pem /etc/ssl/certs/380009.pem"
	I0819 19:13:27.277316  438001 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/380009.pem
	I0819 19:13:27.281660  438001 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:56 /usr/share/ca-certificates/380009.pem
	I0819 19:13:27.281721  438001 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/380009.pem
	I0819 19:13:27.287223  438001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/380009.pem /etc/ssl/certs/51391683.0"
	I0819 19:13:27.299791  438001 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 19:13:27.304470  438001 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 19:13:27.310642  438001 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 19:13:27.316259  438001 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 19:13:27.322248  438001 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 19:13:27.327902  438001 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 19:13:27.333447  438001 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
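Editor's note: before restarting the control plane, minikube checks each existing apiserver, etcd, and front-proxy certificate with "openssl x509 -noout -checkend 86400", i.e. "will this certificate still be valid 24 hours from now". The Go sketch below performs the same check directly with crypto/x509 instead of shelling out; it is illustrative only, and the certificate path is whatever file you pass on the command line.

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// Rough equivalent of `openssl x509 -noout -checkend 86400`: report whether
// the certificate in the given PEM file expires within the next 24 hours.
// Illustrative sketch; minikube shells out to openssl as logged above.
func main() {
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}
```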
	I0819 19:13:27.339044  438001 kubeadm.go:392] StartCluster: {Name:no-preload-278232 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-278232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.106 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 19:13:27.339165  438001 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 19:13:27.339241  438001 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:13:27.378362  438001 cri.go:89] found id: ""
	I0819 19:13:27.378436  438001 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 19:13:27.388560  438001 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 19:13:27.388580  438001 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 19:13:27.388623  438001 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 19:13:27.397834  438001 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 19:13:27.399336  438001 kubeconfig.go:125] found "no-preload-278232" server: "https://192.168.39.106:8443"
	I0819 19:13:27.402651  438001 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 19:13:27.412108  438001 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.106
	I0819 19:13:27.412155  438001 kubeadm.go:1160] stopping kube-system containers ...
	I0819 19:13:27.412170  438001 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 19:13:27.412230  438001 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 19:13:27.450332  438001 cri.go:89] found id: ""
	I0819 19:13:27.450431  438001 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 19:13:27.466943  438001 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:13:27.476741  438001 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:13:27.476765  438001 kubeadm.go:157] found existing configuration files:
	
	I0819 19:13:27.476810  438001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 19:13:27.485630  438001 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:13:27.485695  438001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:13:27.495232  438001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 19:13:27.504379  438001 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:13:27.504449  438001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:13:27.513723  438001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 19:13:27.522864  438001 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:13:27.522946  438001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:13:27.532402  438001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 19:13:27.541502  438001 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:13:27.541592  438001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 19:13:27.550934  438001 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 19:13:27.560650  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:27.684890  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:28.534223  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:28.757538  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:28.831313  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
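Editor's note: the restart path does not run a full "kubeadm init"; it replays individual init phases in the order shown by the five commands above (certs, kubeconfig, kubelet-start, control-plane, etcd). The sketch below drives the same phase sequence from Go; it assumes kubeadm is on the PATH and uses the config path from the log, and it is not minikube's kubeadm.go (which wraps each phase in "sudo env PATH=..." over SSH).

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// Re-run the kubeadm init phases in the same order as logged above.
// Illustrative sketch only.
func main() {
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"init", "phase", "certs", "all", "--config", cfg},
		{"init", "phase", "kubeconfig", "all", "--config", cfg},
		{"init", "phase", "kubelet-start", "--config", cfg},
		{"init", "phase", "control-plane", "all", "--config", cfg},
		{"init", "phase", "etcd", "local", "--config", cfg},
	}
	for _, args := range phases {
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "kubeadm %v failed: %v\n", args, err)
			os.Exit(1)
		}
	}
}
```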
	I0819 19:13:28.897644  438001 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:13:28.897735  438001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:29.398486  438001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:29.898494  438001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:29.924881  438001 api_server.go:72] duration metric: took 1.027247684s to wait for apiserver process to appear ...
	I0819 19:13:29.924918  438001 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:13:29.924944  438001 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I0819 19:13:29.925535  438001 api_server.go:269] stopped: https://192.168.39.106:8443/healthz: Get "https://192.168.39.106:8443/healthz": dial tcp 192.168.39.106:8443: connect: connection refused
	I0819 19:13:30.425624  438001 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I0819 19:13:29.193722  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:31.194540  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:32.406445  438001 api_server.go:279] https://192.168.39.106:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 19:13:32.406476  438001 api_server.go:103] status: https://192.168.39.106:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 19:13:32.406491  438001 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I0819 19:13:32.470160  438001 api_server.go:279] https://192.168.39.106:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 19:13:32.470195  438001 api_server.go:103] status: https://192.168.39.106:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 19:13:32.470211  438001 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I0819 19:13:32.486292  438001 api_server.go:279] https://192.168.39.106:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 19:13:32.486322  438001 api_server.go:103] status: https://192.168.39.106:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 19:13:32.925943  438001 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I0819 19:13:32.933024  438001 api_server.go:279] https://192.168.39.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:13:32.933068  438001 api_server.go:103] status: https://192.168.39.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:13:33.425638  438001 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I0819 19:13:33.431919  438001 api_server.go:279] https://192.168.39.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 19:13:33.432051  438001 api_server.go:103] status: https://192.168.39.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 19:13:33.925369  438001 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I0819 19:13:33.930489  438001 api_server.go:279] https://192.168.39.106:8443/healthz returned 200:
	ok
	I0819 19:13:33.937758  438001 api_server.go:141] control plane version: v1.31.0
	I0819 19:13:33.937789  438001 api_server.go:131] duration metric: took 4.012862801s to wait for apiserver health ...
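Editor's note: the 403 and 500 responses above are expected immediately after the apiserver restarts; the 500s show the rbac/bootstrap-roles and bootstrap-system-priority-classes post-start hooks still pending, and minikube simply keeps polling /healthz until it returns 200 (about 4s here). The Go sketch below is a minimal polling loop in the same spirit, assuming the URL from the log and self-signed test certificates (hence InsecureSkipVerify); it is not minikube's api_server.go.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// Poll an apiserver /healthz endpoint until it returns 200 or the
// deadline passes. Illustrative sketch of the wait logged above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// The cluster uses self-signed test certs, so skip verification here.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("healthz did not return 200 within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.106:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```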
	I0819 19:13:33.937800  438001 cni.go:84] Creating CNI manager for ""
	I0819 19:13:33.937807  438001 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:13:33.939711  438001 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 19:13:30.419241  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:32.419437  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:30.723537  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:31.223437  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:31.723289  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:32.222714  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:32.723037  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:33.223138  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:33.723303  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:34.223334  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:34.722692  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:35.223021  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:33.941055  438001 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 19:13:33.953427  438001 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
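Editor's note: with the "kvm2" driver and the crio runtime minikube recommends the plain bridge CNI and writes a small conflist to /etc/cni/net.d/1-k8s.conflist (496 bytes here). The exact file contents are not in the log; the sketch below only emits a generic bridge + host-local conflist of the usual shape, as an assumed example of the format, reusing the 10.244.0.0/16 pod CIDR from the log.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Emit a generic bridge CNI conflist. Assumed example of the file format
// only; the actual contents minikube writes are not shown in the log.
func main() {
	conflist := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":      "bridge",
				"bridge":    "bridge",
				"isGateway": true,
				"ipMasq":    true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16", // pod CIDR from the log
				},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	out, _ := json.MarshalIndent(conflist, "", "  ")
	fmt.Println(string(out))
}
```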
	I0819 19:13:33.982889  438001 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:13:33.998701  438001 system_pods.go:59] 8 kube-system pods found
	I0819 19:13:33.998750  438001 system_pods.go:61] "coredns-6f6b679f8f-22lbt" [c8a5cabd-41d4-41cb-91c1-2db1f3471db3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 19:13:33.998762  438001 system_pods.go:61] "etcd-no-preload-278232" [36d555a1-33e4-4c6c-b24e-2fee4fd84f2b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 19:13:33.998775  438001 system_pods.go:61] "kube-apiserver-no-preload-278232" [af7173e5-c4ac-4ece-b8b9-bb81cb6b9bfd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 19:13:33.998784  438001 system_pods.go:61] "kube-controller-manager-no-preload-278232" [2463d97a-5221-40ce-8fd7-08151165d6f7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 19:13:33.998794  438001 system_pods.go:61] "kube-proxy-rcf49" [85d5814a-1ba9-46be-ab11-17bf40c0f029] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0819 19:13:33.998807  438001 system_pods.go:61] "kube-scheduler-no-preload-278232" [3b327704-f70c-4d6f-a774-15427a305472] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 19:13:33.998819  438001 system_pods.go:61] "metrics-server-6867b74b74-vxwrs" [e8b74128-b393-4f0f-90fe-e05f20d54acd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:13:33.998827  438001 system_pods.go:61] "storage-provisioner" [24766475-1a5b-4f1a-9350-3e891b5272cc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0819 19:13:33.998841  438001 system_pods.go:74] duration metric: took 15.918876ms to wait for pod list to return data ...
	I0819 19:13:33.998853  438001 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:13:34.003102  438001 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:13:34.003131  438001 node_conditions.go:123] node cpu capacity is 2
	I0819 19:13:34.003145  438001 node_conditions.go:105] duration metric: took 4.283682ms to run NodePressure ...
	I0819 19:13:34.003163  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 19:13:34.300052  438001 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 19:13:34.304483  438001 kubeadm.go:739] kubelet initialised
	I0819 19:13:34.304505  438001 kubeadm.go:740] duration metric: took 4.421894ms waiting for restarted kubelet to initialise ...
	I0819 19:13:34.304513  438001 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:13:34.310575  438001 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-22lbt" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:34.316040  438001 pod_ready.go:98] node "no-preload-278232" hosting pod "coredns-6f6b679f8f-22lbt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.316068  438001 pod_ready.go:82] duration metric: took 5.462078ms for pod "coredns-6f6b679f8f-22lbt" in "kube-system" namespace to be "Ready" ...
	E0819 19:13:34.316080  438001 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-278232" hosting pod "coredns-6f6b679f8f-22lbt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.316088  438001 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:34.320731  438001 pod_ready.go:98] node "no-preload-278232" hosting pod "etcd-no-preload-278232" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.320751  438001 pod_ready.go:82] duration metric: took 4.649545ms for pod "etcd-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	E0819 19:13:34.320758  438001 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-278232" hosting pod "etcd-no-preload-278232" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.320763  438001 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:34.325499  438001 pod_ready.go:98] node "no-preload-278232" hosting pod "kube-apiserver-no-preload-278232" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.325519  438001 pod_ready.go:82] duration metric: took 4.750861ms for pod "kube-apiserver-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	E0819 19:13:34.325526  438001 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-278232" hosting pod "kube-apiserver-no-preload-278232" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.325531  438001 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:34.388221  438001 pod_ready.go:98] node "no-preload-278232" hosting pod "kube-controller-manager-no-preload-278232" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.388248  438001 pod_ready.go:82] duration metric: took 62.708596ms for pod "kube-controller-manager-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	E0819 19:13:34.388259  438001 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-278232" hosting pod "kube-controller-manager-no-preload-278232" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.388265  438001 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-rcf49" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:34.787164  438001 pod_ready.go:98] node "no-preload-278232" hosting pod "kube-proxy-rcf49" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.787193  438001 pod_ready.go:82] duration metric: took 398.919585ms for pod "kube-proxy-rcf49" in "kube-system" namespace to be "Ready" ...
	E0819 19:13:34.787203  438001 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-278232" hosting pod "kube-proxy-rcf49" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:34.787210  438001 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:35.186336  438001 pod_ready.go:98] node "no-preload-278232" hosting pod "kube-scheduler-no-preload-278232" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:35.186365  438001 pod_ready.go:82] duration metric: took 399.147858ms for pod "kube-scheduler-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	E0819 19:13:35.186377  438001 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-278232" hosting pod "kube-scheduler-no-preload-278232" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:35.186386  438001 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:35.586266  438001 pod_ready.go:98] node "no-preload-278232" hosting pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:35.586292  438001 pod_ready.go:82] duration metric: took 399.895038ms for pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace to be "Ready" ...
	E0819 19:13:35.586301  438001 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-278232" hosting pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:35.586307  438001 pod_ready.go:39] duration metric: took 1.281785432s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
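Editor's note: every per-pod wait above exits early via pod_ready.go:98 because the node itself still reports Ready="False" right after the kubelet restart, so the individual Ready checks are skipped rather than failed. The sketch below shows the underlying "does this pod have Ready=True" check with client-go; it assumes a kubeconfig from the KUBECONFIG environment variable and a pod name on the command line, and it is an illustrative helper, not minikube's pod_ready.go.

```go
package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Report whether a kube-system pod has the Ready condition set to True,
// the condition the waits above poll for. Illustrative sketch.
func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(), os.Args[1], metav1.GetOptions{})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			fmt.Printf("%s Ready=%s\n", pod.Name, cond.Status)
			return
		}
	}
	fmt.Printf("%s has no Ready condition yet\n", pod.Name)
}
```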
	I0819 19:13:35.586326  438001 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 19:13:35.598523  438001 ops.go:34] apiserver oom_adj: -16
	I0819 19:13:35.598545  438001 kubeadm.go:597] duration metric: took 8.20995933s to restartPrimaryControlPlane
	I0819 19:13:35.598554  438001 kubeadm.go:394] duration metric: took 8.259514907s to StartCluster
	I0819 19:13:35.598576  438001 settings.go:142] acquiring lock: {Name:mk396fcf49a1d0e69583cf37ff3c819e37118163 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:13:35.598662  438001 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 19:13:35.600424  438001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/kubeconfig: {Name:mk8e7b4e1bb7da665111d2acd83eb48882c66853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:13:35.600672  438001 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.106 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 19:13:35.600768  438001 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 19:13:35.600850  438001 addons.go:69] Setting storage-provisioner=true in profile "no-preload-278232"
	I0819 19:13:35.600879  438001 addons.go:69] Setting metrics-server=true in profile "no-preload-278232"
	I0819 19:13:35.600924  438001 addons.go:234] Setting addon metrics-server=true in "no-preload-278232"
	W0819 19:13:35.600938  438001 addons.go:243] addon metrics-server should already be in state true
	I0819 19:13:35.600884  438001 addons.go:234] Setting addon storage-provisioner=true in "no-preload-278232"
	W0819 19:13:35.600969  438001 addons.go:243] addon storage-provisioner should already be in state true
	I0819 19:13:35.600966  438001 config.go:182] Loaded profile config "no-preload-278232": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:13:35.600976  438001 host.go:66] Checking if "no-preload-278232" exists ...
	I0819 19:13:35.600988  438001 host.go:66] Checking if "no-preload-278232" exists ...
	I0819 19:13:35.601395  438001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:35.601428  438001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:35.601436  438001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:35.601453  438001 addons.go:69] Setting default-storageclass=true in profile "no-preload-278232"
	I0819 19:13:35.601501  438001 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-278232"
	I0819 19:13:35.601463  438001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:35.601898  438001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:35.601948  438001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:35.602507  438001 out.go:177] * Verifying Kubernetes components...
	I0819 19:13:35.604092  438001 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:13:35.617515  438001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34839
	I0819 19:13:35.617538  438001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36157
	I0819 19:13:35.617521  438001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35771
	I0819 19:13:35.618045  438001 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:35.618101  438001 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:35.618163  438001 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:35.618570  438001 main.go:141] libmachine: Using API Version  1
	I0819 19:13:35.618598  438001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:35.618712  438001 main.go:141] libmachine: Using API Version  1
	I0819 19:13:35.618734  438001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:35.618715  438001 main.go:141] libmachine: Using API Version  1
	I0819 19:13:35.618754  438001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:35.618989  438001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:35.619109  438001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:35.619111  438001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:35.619177  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetState
	I0819 19:13:35.619649  438001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:35.619693  438001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:35.619695  438001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:35.619768  438001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:35.641244  438001 addons.go:234] Setting addon default-storageclass=true in "no-preload-278232"
	W0819 19:13:35.641268  438001 addons.go:243] addon default-storageclass should already be in state true
	I0819 19:13:35.641298  438001 host.go:66] Checking if "no-preload-278232" exists ...
	I0819 19:13:35.641558  438001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:35.641610  438001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:35.659392  438001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39373
	I0819 19:13:35.659999  438001 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:35.660432  438001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38477
	I0819 19:13:35.660432  438001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35477
	I0819 19:13:35.660604  438001 main.go:141] libmachine: Using API Version  1
	I0819 19:13:35.660631  438001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:35.661089  438001 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:35.661149  438001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:35.661169  438001 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:35.661641  438001 main.go:141] libmachine: Using API Version  1
	I0819 19:13:35.661661  438001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:35.661757  438001 main.go:141] libmachine: Using API Version  1
	I0819 19:13:35.661772  438001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:35.661792  438001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:13:35.661826  438001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:13:35.662039  438001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:35.662142  438001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:35.662222  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetState
	I0819 19:13:35.662375  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetState
	I0819 19:13:35.664221  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:35.664397  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:35.666459  438001 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 19:13:35.666471  438001 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:13:35.667849  438001 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 19:13:35.667864  438001 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 19:13:35.667882  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:35.667944  438001 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:13:35.667959  438001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 19:13:35.667977  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:35.673516  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:35.673544  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:35.673520  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:35.673578  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:35.673593  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:35.673602  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:35.673521  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:35.673615  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:35.673793  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:35.673937  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:35.673986  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:35.674150  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:35.674324  438001 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa Username:docker}
	I0819 19:13:35.674350  438001 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa Username:docker}
	I0819 19:13:35.683691  438001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39783
	I0819 19:13:35.684219  438001 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:13:35.684806  438001 main.go:141] libmachine: Using API Version  1
	I0819 19:13:35.684831  438001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:13:35.685251  438001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:13:35.685515  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetState
	I0819 19:13:35.687268  438001 main.go:141] libmachine: (no-preload-278232) Calling .DriverName
	I0819 19:13:35.687485  438001 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 19:13:35.687503  438001 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 19:13:35.687524  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHHostname
	I0819 19:13:35.690504  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:35.691297  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHPort
	I0819 19:13:35.691333  438001 main.go:141] libmachine: (no-preload-278232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f3:b1", ip: ""} in network mk-no-preload-278232: {Iface:virbr1 ExpiryTime:2024-08-19 20:13:02 +0000 UTC Type:0 Mac:52:54:00:14:f3:b1 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-278232 Clientid:01:52:54:00:14:f3:b1}
	I0819 19:13:35.691356  438001 main.go:141] libmachine: (no-preload-278232) DBG | domain no-preload-278232 has defined IP address 192.168.39.106 and MAC address 52:54:00:14:f3:b1 in network mk-no-preload-278232
	I0819 19:13:35.691477  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHKeyPath
	I0819 19:13:35.691659  438001 main.go:141] libmachine: (no-preload-278232) Calling .GetSSHUsername
	I0819 19:13:35.691814  438001 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/no-preload-278232/id_rsa Username:docker}
	I0819 19:13:35.833054  438001 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:13:35.855442  438001 node_ready.go:35] waiting up to 6m0s for node "no-preload-278232" to be "Ready" ...
	I0819 19:13:35.923521  438001 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 19:13:35.923551  438001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 19:13:35.940005  438001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 19:13:35.965657  438001 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 19:13:35.965686  438001 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 19:13:36.002636  438001 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 19:13:36.002665  438001 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 19:13:36.024764  438001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:13:36.058824  438001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
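	The lines above show the addon installer copying manifests into the node and then applying them with the kubectl binary bundled inside the VM. A minimal sketch of the equivalent manual step, assuming the profile name from this run (no-preload-278232) and that the manifests already sit under /etc/kubernetes/addons/ as logged:

	    # apply one addon manifest by hand from the host (sketch, not the test's own code)
	    minikube -p no-preload-278232 ssh -- sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml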
	I0819 19:13:36.420421  438001 main.go:141] libmachine: Making call to close driver server
	I0819 19:13:36.420452  438001 main.go:141] libmachine: (no-preload-278232) Calling .Close
	I0819 19:13:36.420785  438001 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:13:36.420804  438001 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:13:36.420844  438001 main.go:141] libmachine: (no-preload-278232) DBG | Closing plugin on server side
	I0819 19:13:36.420904  438001 main.go:141] libmachine: Making call to close driver server
	I0819 19:13:36.420918  438001 main.go:141] libmachine: (no-preload-278232) Calling .Close
	I0819 19:13:36.421185  438001 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:13:36.421210  438001 main.go:141] libmachine: (no-preload-278232) DBG | Closing plugin on server side
	I0819 19:13:36.421224  438001 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:13:36.429463  438001 main.go:141] libmachine: Making call to close driver server
	I0819 19:13:36.429481  438001 main.go:141] libmachine: (no-preload-278232) Calling .Close
	I0819 19:13:36.429811  438001 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:13:36.429830  438001 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:13:37.141893  438001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.117083882s)
	I0819 19:13:37.141987  438001 main.go:141] libmachine: Making call to close driver server
	I0819 19:13:37.141999  438001 main.go:141] libmachine: (no-preload-278232) Calling .Close
	I0819 19:13:37.142472  438001 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:13:37.142495  438001 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:13:37.142506  438001 main.go:141] libmachine: Making call to close driver server
	I0819 19:13:37.142515  438001 main.go:141] libmachine: (no-preload-278232) Calling .Close
	I0819 19:13:37.142788  438001 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:13:37.142808  438001 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:13:37.142814  438001 main.go:141] libmachine: (no-preload-278232) DBG | Closing plugin on server side
	I0819 19:13:37.161659  438001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.10278963s)
	I0819 19:13:37.161723  438001 main.go:141] libmachine: Making call to close driver server
	I0819 19:13:37.161739  438001 main.go:141] libmachine: (no-preload-278232) Calling .Close
	I0819 19:13:37.162067  438001 main.go:141] libmachine: (no-preload-278232) DBG | Closing plugin on server side
	I0819 19:13:37.162099  438001 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:13:37.162125  438001 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:13:37.162142  438001 main.go:141] libmachine: Making call to close driver server
	I0819 19:13:37.162154  438001 main.go:141] libmachine: (no-preload-278232) Calling .Close
	I0819 19:13:37.162404  438001 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:13:37.162420  438001 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:13:37.162432  438001 addons.go:475] Verifying addon metrics-server=true in "no-preload-278232"
	I0819 19:13:37.164423  438001 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0819 19:13:33.694203  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:35.694403  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:34.918988  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:36.919564  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:35.722784  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:36.223168  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:36.723041  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:37.222801  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:37.722855  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:38.223296  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:38.722936  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:39.223326  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:39.722883  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:40.223284  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:37.165767  438001 addons.go:510] duration metric: took 1.565026237s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0819 19:13:37.859454  438001 node_ready.go:53] node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:39.859662  438001 node_ready.go:53] node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:38.193207  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:40.694127  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:39.418572  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:41.918302  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:43.918558  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:40.722612  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:41.222700  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:41.723144  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:42.223369  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:42.723209  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:43.222849  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:43.723518  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:44.223585  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:44.722772  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:45.223078  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:41.859965  438001 node_ready.go:53] node "no-preload-278232" has status "Ready":"False"
	I0819 19:13:43.359120  438001 node_ready.go:49] node "no-preload-278232" has status "Ready":"True"
	I0819 19:13:43.359151  438001 node_ready.go:38] duration metric: took 7.503671074s for node "no-preload-278232" to be "Ready" ...
	I0819 19:13:43.359169  438001 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:13:43.365307  438001 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-22lbt" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:43.369626  438001 pod_ready.go:93] pod "coredns-6f6b679f8f-22lbt" in "kube-system" namespace has status "Ready":"True"
	I0819 19:13:43.369646  438001 pod_ready.go:82] duration metric: took 4.316734ms for pod "coredns-6f6b679f8f-22lbt" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:43.369654  438001 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:45.377672  438001 pod_ready.go:103] pod "etcd-no-preload-278232" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:43.193775  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:45.693494  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:45.919705  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:48.418981  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:45.723287  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:46.223666  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:46.722754  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:47.223414  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:47.723567  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:48.222938  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:48.723011  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:49.223076  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:49.723443  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:50.223627  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:47.875409  438001 pod_ready.go:103] pod "etcd-no-preload-278232" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:48.377127  438001 pod_ready.go:93] pod "etcd-no-preload-278232" in "kube-system" namespace has status "Ready":"True"
	I0819 19:13:48.377155  438001 pod_ready.go:82] duration metric: took 5.007493319s for pod "etcd-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.377169  438001 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.381841  438001 pod_ready.go:93] pod "kube-apiserver-no-preload-278232" in "kube-system" namespace has status "Ready":"True"
	I0819 19:13:48.381864  438001 pod_ready.go:82] duration metric: took 4.686309ms for pod "kube-apiserver-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.381877  438001 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.386382  438001 pod_ready.go:93] pod "kube-controller-manager-no-preload-278232" in "kube-system" namespace has status "Ready":"True"
	I0819 19:13:48.386397  438001 pod_ready.go:82] duration metric: took 4.514361ms for pod "kube-controller-manager-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.386405  438001 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rcf49" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.390940  438001 pod_ready.go:93] pod "kube-proxy-rcf49" in "kube-system" namespace has status "Ready":"True"
	I0819 19:13:48.390955  438001 pod_ready.go:82] duration metric: took 4.544499ms for pod "kube-proxy-rcf49" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.390963  438001 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.395159  438001 pod_ready.go:93] pod "kube-scheduler-no-preload-278232" in "kube-system" namespace has status "Ready":"True"
	I0819 19:13:48.395180  438001 pod_ready.go:82] duration metric: took 4.211012ms for pod "kube-scheduler-no-preload-278232" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:48.395197  438001 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace to be "Ready" ...
	I0819 19:13:50.402109  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
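	The repeated pod_ready.go:103 lines are the wait loop polling the metrics-server pod's Ready condition, which never flips to True in this run. A minimal manual equivalent of that poll, assuming the kubeconfig context matches the profile name and using the pod name from this run (it changes per deployment):

	    # check the Ready condition the wait loop is polling (sketch)
	    kubectl --context no-preload-278232 -n kube-system \
	      get pod metrics-server-6867b74b74-vxwrs \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'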
	I0819 19:13:47.693601  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:50.193183  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:50.918811  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:52.919981  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:50.723259  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:51.222697  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:51.723284  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:52.222757  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:52.723414  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:53.223202  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:53.722721  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:54.223578  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:54.723400  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:55.222730  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:52.901901  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:54.903583  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:52.693231  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:54.693934  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:56.695700  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:55.418965  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:57.918885  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:55.723644  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:56.223212  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:56.722729  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:57.223226  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:57.723045  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:58.222901  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:58.722710  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:59.223149  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:13:59.723186  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:00.222763  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:00.222844  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:00.271266  438716 cri.go:89] found id: ""
	I0819 19:14:00.271296  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.271305  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:00.271312  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:00.271373  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:00.311870  438716 cri.go:89] found id: ""
	I0819 19:14:00.311900  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.311936  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:00.311946  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:00.312011  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:00.350476  438716 cri.go:89] found id: ""
	I0819 19:14:00.350505  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.350514  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:00.350520  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:00.350586  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:00.387404  438716 cri.go:89] found id: ""
	I0819 19:14:00.387438  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.387447  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:00.387457  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:00.387516  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:00.423493  438716 cri.go:89] found id: ""
	I0819 19:14:00.423521  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.423529  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:00.423535  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:00.423596  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:00.458593  438716 cri.go:89] found id: ""
	I0819 19:14:00.458630  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.458642  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:00.458651  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:00.458722  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:00.495645  438716 cri.go:89] found id: ""
	I0819 19:14:00.495695  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.495709  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:00.495717  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:00.495782  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:00.531464  438716 cri.go:89] found id: ""
	I0819 19:14:00.531498  438716 logs.go:276] 0 containers: []
	W0819 19:14:00.531508  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:00.531529  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:00.531543  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:13:57.401329  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:59.402701  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:13:59.192781  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:01.194411  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:00.419287  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:02.918450  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:00.584029  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:00.584078  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:00.597870  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:00.597908  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:00.746061  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:00.746085  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:00.746098  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:00.818001  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:00.818042  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
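	Because "sudo crictl ps -a --quiet --name=..." returns no container IDs for any control-plane component, the tooling falls back to gathering kubelet, CRI-O, dmesg and container-status logs, and the describe-nodes step fails with connection refused on localhost:8443 since no apiserver is running. A minimal sketch of the same diagnostics pass run by hand inside the node (for example via minikube ssh), using the commands already shown in the log:

	    # list any kube-apiserver containers known to CRI-O (empty in this run)
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    # recent kubelet and CRI-O service logs, plus kernel warnings and errors
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400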
	I0819 19:14:03.358509  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:03.371262  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:03.371345  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:03.408201  438716 cri.go:89] found id: ""
	I0819 19:14:03.408231  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.408241  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:03.408248  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:03.408306  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:03.445354  438716 cri.go:89] found id: ""
	I0819 19:14:03.445386  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.445396  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:03.445408  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:03.445470  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:03.481144  438716 cri.go:89] found id: ""
	I0819 19:14:03.481178  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.481188  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:03.481195  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:03.481260  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:03.529069  438716 cri.go:89] found id: ""
	I0819 19:14:03.529109  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.529141  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:03.529148  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:03.529216  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:03.590325  438716 cri.go:89] found id: ""
	I0819 19:14:03.590364  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.590377  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:03.590386  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:03.590456  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:03.634924  438716 cri.go:89] found id: ""
	I0819 19:14:03.634969  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.634981  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:03.634990  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:03.635062  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:03.684133  438716 cri.go:89] found id: ""
	I0819 19:14:03.684164  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.684176  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:03.684184  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:03.684253  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:03.722285  438716 cri.go:89] found id: ""
	I0819 19:14:03.722312  438716 logs.go:276] 0 containers: []
	W0819 19:14:03.722321  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:03.722330  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:03.722372  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:03.735937  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:03.735965  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:03.814906  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:03.814931  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:03.814948  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:03.896323  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:03.896363  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:03.943002  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:03.943037  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:01.901154  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:03.902972  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:05.903388  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:03.694686  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:06.193228  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:04.919332  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:07.419221  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:06.496886  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:06.510719  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:06.510790  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:06.544692  438716 cri.go:89] found id: ""
	I0819 19:14:06.544724  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.544737  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:06.544747  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:06.544818  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:06.578935  438716 cri.go:89] found id: ""
	I0819 19:14:06.578962  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.578971  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:06.578979  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:06.579033  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:06.614488  438716 cri.go:89] found id: ""
	I0819 19:14:06.614516  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.614525  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:06.614532  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:06.614583  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:06.648579  438716 cri.go:89] found id: ""
	I0819 19:14:06.648612  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.648623  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:06.648630  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:06.648685  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:06.685168  438716 cri.go:89] found id: ""
	I0819 19:14:06.685198  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.685208  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:06.685217  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:06.685280  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:06.720391  438716 cri.go:89] found id: ""
	I0819 19:14:06.720424  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.720433  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:06.720440  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:06.720491  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:06.758183  438716 cri.go:89] found id: ""
	I0819 19:14:06.758217  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.758228  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:06.758237  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:06.758307  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:06.800182  438716 cri.go:89] found id: ""
	I0819 19:14:06.800215  438716 logs.go:276] 0 containers: []
	W0819 19:14:06.800224  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:06.800234  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:06.800247  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:06.852735  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:06.852777  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:06.867214  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:06.867249  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:06.938942  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:06.938967  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:06.938980  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:07.023950  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:07.023992  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:09.568889  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:09.588481  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:09.588545  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:09.630790  438716 cri.go:89] found id: ""
	I0819 19:14:09.630825  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.630839  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:09.630848  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:09.630926  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:09.673258  438716 cri.go:89] found id: ""
	I0819 19:14:09.673291  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.673302  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:09.673311  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:09.673374  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:09.709500  438716 cri.go:89] found id: ""
	I0819 19:14:09.709530  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.709541  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:09.709549  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:09.709617  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:09.743110  438716 cri.go:89] found id: ""
	I0819 19:14:09.743139  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.743150  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:09.743164  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:09.743238  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:09.776717  438716 cri.go:89] found id: ""
	I0819 19:14:09.776746  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.776754  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:09.776761  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:09.776820  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:09.811381  438716 cri.go:89] found id: ""
	I0819 19:14:09.811409  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.811417  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:09.811423  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:09.811474  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:09.843699  438716 cri.go:89] found id: ""
	I0819 19:14:09.843730  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.843741  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:09.843750  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:09.843822  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:09.882972  438716 cri.go:89] found id: ""
	I0819 19:14:09.883005  438716 logs.go:276] 0 containers: []
	W0819 19:14:09.883018  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:09.883033  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:09.883050  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:09.973077  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:09.973114  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:10.014505  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:10.014556  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:10.069779  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:10.069819  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:10.084337  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:10.084367  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:10.164870  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:08.402464  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:10.900684  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:08.193980  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:10.194818  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:09.918852  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:12.419687  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:12.665929  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:12.679881  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:12.679960  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:12.718305  438716 cri.go:89] found id: ""
	I0819 19:14:12.718332  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.718341  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:12.718348  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:12.718398  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:12.759084  438716 cri.go:89] found id: ""
	I0819 19:14:12.759112  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.759127  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:12.759135  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:12.759205  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:12.793193  438716 cri.go:89] found id: ""
	I0819 19:14:12.793228  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.793238  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:12.793245  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:12.793299  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:12.828283  438716 cri.go:89] found id: ""
	I0819 19:14:12.828310  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.828322  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:12.828329  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:12.828379  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:12.861971  438716 cri.go:89] found id: ""
	I0819 19:14:12.862004  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.862016  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:12.862025  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:12.862092  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:12.898173  438716 cri.go:89] found id: ""
	I0819 19:14:12.898203  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.898214  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:12.898223  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:12.898287  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:12.940203  438716 cri.go:89] found id: ""
	I0819 19:14:12.940234  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.940246  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:12.940254  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:12.940309  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:12.978092  438716 cri.go:89] found id: ""
	I0819 19:14:12.978123  438716 logs.go:276] 0 containers: []
	W0819 19:14:12.978134  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:12.978147  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:12.978172  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:12.992082  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:12.992117  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:13.073609  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:13.073636  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:13.073649  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:13.153060  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:13.153105  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:13.196535  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:13.196581  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:12.903116  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:15.401183  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:12.693872  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:14.694252  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:17.193116  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:14.919563  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:17.418946  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:15.750298  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:15.763913  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:15.763996  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:15.804515  438716 cri.go:89] found id: ""
	I0819 19:14:15.804542  438716 logs.go:276] 0 containers: []
	W0819 19:14:15.804551  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:15.804558  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:15.804624  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:15.847077  438716 cri.go:89] found id: ""
	I0819 19:14:15.847112  438716 logs.go:276] 0 containers: []
	W0819 19:14:15.847125  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:15.847133  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:15.847200  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:15.882316  438716 cri.go:89] found id: ""
	I0819 19:14:15.882348  438716 logs.go:276] 0 containers: []
	W0819 19:14:15.882358  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:15.882365  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:15.882417  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:15.919084  438716 cri.go:89] found id: ""
	I0819 19:14:15.919114  438716 logs.go:276] 0 containers: []
	W0819 19:14:15.919125  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:15.919132  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:15.919202  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:15.953139  438716 cri.go:89] found id: ""
	I0819 19:14:15.953175  438716 logs.go:276] 0 containers: []
	W0819 19:14:15.953188  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:15.953209  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:15.953276  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:15.993231  438716 cri.go:89] found id: ""
	I0819 19:14:15.993259  438716 logs.go:276] 0 containers: []
	W0819 19:14:15.993268  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:15.993286  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:15.993337  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:16.030382  438716 cri.go:89] found id: ""
	I0819 19:14:16.030412  438716 logs.go:276] 0 containers: []
	W0819 19:14:16.030422  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:16.030428  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:16.030482  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:16.065834  438716 cri.go:89] found id: ""
	I0819 19:14:16.065861  438716 logs.go:276] 0 containers: []
	W0819 19:14:16.065872  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:16.065885  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:16.065901  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:16.117943  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:16.117983  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:16.132010  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:16.132041  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:16.202398  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:16.202416  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:16.202429  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:16.286609  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:16.286653  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:18.830502  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:18.844022  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:18.844107  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:18.880539  438716 cri.go:89] found id: ""
	I0819 19:14:18.880576  438716 logs.go:276] 0 containers: []
	W0819 19:14:18.880588  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:18.880595  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:18.880657  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:18.918426  438716 cri.go:89] found id: ""
	I0819 19:14:18.918454  438716 logs.go:276] 0 containers: []
	W0819 19:14:18.918463  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:18.918470  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:18.918531  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:18.954534  438716 cri.go:89] found id: ""
	I0819 19:14:18.954566  438716 logs.go:276] 0 containers: []
	W0819 19:14:18.954578  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:18.954587  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:18.954651  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:18.993820  438716 cri.go:89] found id: ""
	I0819 19:14:18.993852  438716 logs.go:276] 0 containers: []
	W0819 19:14:18.993864  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:18.993885  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:18.993967  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:19.026947  438716 cri.go:89] found id: ""
	I0819 19:14:19.026982  438716 logs.go:276] 0 containers: []
	W0819 19:14:19.026995  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:19.027005  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:19.027072  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:19.062097  438716 cri.go:89] found id: ""
	I0819 19:14:19.062130  438716 logs.go:276] 0 containers: []
	W0819 19:14:19.062142  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:19.062150  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:19.062207  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:19.099522  438716 cri.go:89] found id: ""
	I0819 19:14:19.099549  438716 logs.go:276] 0 containers: []
	W0819 19:14:19.099559  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:19.099567  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:19.099630  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:19.134766  438716 cri.go:89] found id: ""
	I0819 19:14:19.134803  438716 logs.go:276] 0 containers: []
	W0819 19:14:19.134815  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:19.134850  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:19.134867  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:19.176428  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:19.176458  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:19.231448  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:19.231484  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:19.245631  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:19.245687  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:19.318679  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:19.318703  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:19.318717  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:17.401916  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:19.402628  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:19.195224  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:21.693528  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:19.918727  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:21.918863  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:23.919050  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:21.898430  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:21.913840  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:21.913911  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:21.955682  438716 cri.go:89] found id: ""
	I0819 19:14:21.955720  438716 logs.go:276] 0 containers: []
	W0819 19:14:21.955732  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:21.955743  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:21.955820  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:21.994798  438716 cri.go:89] found id: ""
	I0819 19:14:21.994836  438716 logs.go:276] 0 containers: []
	W0819 19:14:21.994845  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:21.994852  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:21.994904  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:22.029155  438716 cri.go:89] found id: ""
	I0819 19:14:22.029191  438716 logs.go:276] 0 containers: []
	W0819 19:14:22.029202  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:22.029210  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:22.029281  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:22.072489  438716 cri.go:89] found id: ""
	I0819 19:14:22.072534  438716 logs.go:276] 0 containers: []
	W0819 19:14:22.072546  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:22.072559  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:22.072621  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:22.109160  438716 cri.go:89] found id: ""
	I0819 19:14:22.109192  438716 logs.go:276] 0 containers: []
	W0819 19:14:22.109203  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:22.109211  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:22.109281  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:22.146161  438716 cri.go:89] found id: ""
	I0819 19:14:22.146194  438716 logs.go:276] 0 containers: []
	W0819 19:14:22.146206  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:22.146215  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:22.146276  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:22.183005  438716 cri.go:89] found id: ""
	I0819 19:14:22.183033  438716 logs.go:276] 0 containers: []
	W0819 19:14:22.183046  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:22.183054  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:22.183108  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:22.220745  438716 cri.go:89] found id: ""
	I0819 19:14:22.220772  438716 logs.go:276] 0 containers: []
	W0819 19:14:22.220784  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:22.220798  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:22.220817  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:22.297377  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:22.297403  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:22.297416  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:22.373503  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:22.373542  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:22.414922  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:22.414956  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:22.477902  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:22.477944  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:24.993405  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:25.007305  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:25.007379  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:25.041157  438716 cri.go:89] found id: ""
	I0819 19:14:25.041191  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.041203  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:25.041211  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:25.041278  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:25.078572  438716 cri.go:89] found id: ""
	I0819 19:14:25.078605  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.078617  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:25.078625  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:25.078695  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:25.114571  438716 cri.go:89] found id: ""
	I0819 19:14:25.114603  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.114615  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:25.114624  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:25.114690  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:25.154341  438716 cri.go:89] found id: ""
	I0819 19:14:25.154366  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.154375  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:25.154381  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:25.154434  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:25.192592  438716 cri.go:89] found id: ""
	I0819 19:14:25.192620  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.192631  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:25.192640  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:25.192705  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:25.227813  438716 cri.go:89] found id: ""
	I0819 19:14:25.227847  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.227860  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:25.227869  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:25.227933  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:25.264321  438716 cri.go:89] found id: ""
	I0819 19:14:25.264349  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.264357  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:25.264364  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:25.264427  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:25.298562  438716 cri.go:89] found id: ""
	I0819 19:14:25.298596  438716 logs.go:276] 0 containers: []
	W0819 19:14:25.298608  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:25.298621  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:25.298638  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:25.352659  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:25.352695  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:25.366638  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:25.366665  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:25.432964  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:25.432992  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:25.433010  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:25.511487  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:25.511549  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:21.902660  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:24.401454  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:26.402255  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:24.193406  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:26.194758  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:25.919090  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:28.420031  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:28.057003  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:28.070849  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:28.070914  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:28.107817  438716 cri.go:89] found id: ""
	I0819 19:14:28.107852  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.107865  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:28.107875  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:28.107948  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:28.141816  438716 cri.go:89] found id: ""
	I0819 19:14:28.141862  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.141874  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:28.141887  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:28.141958  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:28.179854  438716 cri.go:89] found id: ""
	I0819 19:14:28.179885  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.179893  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:28.179905  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:28.179972  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:28.217335  438716 cri.go:89] found id: ""
	I0819 19:14:28.217364  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.217372  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:28.217380  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:28.217438  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:28.254161  438716 cri.go:89] found id: ""
	I0819 19:14:28.254193  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.254204  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:28.254213  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:28.254276  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:28.288658  438716 cri.go:89] found id: ""
	I0819 19:14:28.288682  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.288691  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:28.288698  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:28.288749  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:28.321957  438716 cri.go:89] found id: ""
	I0819 19:14:28.321987  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.321996  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:28.322004  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:28.322057  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:28.355032  438716 cri.go:89] found id: ""
	I0819 19:14:28.355068  438716 logs.go:276] 0 containers: []
	W0819 19:14:28.355080  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:28.355094  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:28.355111  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:28.406220  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:28.406253  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:28.420877  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:28.420907  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:28.502576  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:28.502598  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:28.502614  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:28.582717  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:28.582769  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:28.904716  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:31.401098  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:28.195001  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:30.693605  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:30.917957  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:32.918239  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:31.121960  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:31.135502  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:31.135568  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:31.170423  438716 cri.go:89] found id: ""
	I0819 19:14:31.170451  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.170461  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:31.170467  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:31.170532  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:31.207328  438716 cri.go:89] found id: ""
	I0819 19:14:31.207356  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.207364  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:31.207370  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:31.207430  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:31.245655  438716 cri.go:89] found id: ""
	I0819 19:14:31.245687  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.245698  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:31.245707  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:31.245773  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:31.282174  438716 cri.go:89] found id: ""
	I0819 19:14:31.282208  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.282221  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:31.282230  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:31.282303  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:31.316779  438716 cri.go:89] found id: ""
	I0819 19:14:31.316810  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.316818  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:31.316826  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:31.316879  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:31.356849  438716 cri.go:89] found id: ""
	I0819 19:14:31.356884  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.356894  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:31.356900  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:31.356963  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:31.395102  438716 cri.go:89] found id: ""
	I0819 19:14:31.395135  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.395143  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:31.395150  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:31.395205  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:31.433018  438716 cri.go:89] found id: ""
	I0819 19:14:31.433045  438716 logs.go:276] 0 containers: []
	W0819 19:14:31.433076  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:31.433091  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:31.433108  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:31.446294  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:31.446319  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:31.518158  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:31.518180  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:31.518196  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:31.600568  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:31.600611  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:31.642356  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:31.642386  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:34.195665  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:34.210300  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:34.210370  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:34.248715  438716 cri.go:89] found id: ""
	I0819 19:14:34.248753  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.248767  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:34.248775  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:34.248849  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:34.285305  438716 cri.go:89] found id: ""
	I0819 19:14:34.285334  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.285347  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:34.285355  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:34.285438  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:34.326114  438716 cri.go:89] found id: ""
	I0819 19:14:34.326148  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.326160  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:34.326168  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:34.326235  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:34.360587  438716 cri.go:89] found id: ""
	I0819 19:14:34.360616  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.360628  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:34.360638  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:34.360715  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:34.397452  438716 cri.go:89] found id: ""
	I0819 19:14:34.397483  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.397491  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:34.397498  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:34.397556  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:34.433651  438716 cri.go:89] found id: ""
	I0819 19:14:34.433683  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.433694  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:34.433702  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:34.433771  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:34.468758  438716 cri.go:89] found id: ""
	I0819 19:14:34.468787  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.468796  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:34.468802  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:34.468856  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:34.505787  438716 cri.go:89] found id: ""
	I0819 19:14:34.505816  438716 logs.go:276] 0 containers: []
	W0819 19:14:34.505828  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:34.505842  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:34.505859  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:34.519430  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:34.519463  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:34.592785  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:34.592810  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:34.592827  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:34.671215  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:34.671254  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:34.711248  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:34.711277  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:33.403429  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:35.901124  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:33.194319  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:35.694280  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:34.918372  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:37.418982  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:37.265131  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:37.279035  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:37.279127  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:37.325556  438716 cri.go:89] found id: ""
	I0819 19:14:37.325589  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.325601  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:37.325610  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:37.325676  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:37.360514  438716 cri.go:89] found id: ""
	I0819 19:14:37.360541  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.360553  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:37.360561  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:37.360616  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:37.394428  438716 cri.go:89] found id: ""
	I0819 19:14:37.394456  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.394465  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:37.394472  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:37.394531  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:37.430221  438716 cri.go:89] found id: ""
	I0819 19:14:37.430249  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.430257  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:37.430264  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:37.430324  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:37.466598  438716 cri.go:89] found id: ""
	I0819 19:14:37.466630  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.466641  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:37.466649  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:37.466719  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:37.510455  438716 cri.go:89] found id: ""
	I0819 19:14:37.510484  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.510492  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:37.510499  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:37.510563  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:37.546122  438716 cri.go:89] found id: ""
	I0819 19:14:37.546157  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.546169  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:37.546178  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:37.546247  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:37.579425  438716 cri.go:89] found id: ""
	I0819 19:14:37.579452  438716 logs.go:276] 0 containers: []
	W0819 19:14:37.579463  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:37.579475  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:37.579491  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:37.592673  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:37.592704  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:37.674026  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:37.674048  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:37.674065  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:37.752206  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:37.752244  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:37.791281  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:37.791321  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:40.345520  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:40.358771  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:40.358835  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:40.394515  438716 cri.go:89] found id: ""
	I0819 19:14:40.394549  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.394565  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:40.394575  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:40.394637  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:40.430971  438716 cri.go:89] found id: ""
	I0819 19:14:40.431007  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.431018  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:40.431027  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:40.431094  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:40.471417  438716 cri.go:89] found id: ""
	I0819 19:14:40.471443  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.471452  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:40.471458  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:40.471511  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:40.508641  438716 cri.go:89] found id: ""
	I0819 19:14:40.508670  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.508678  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:40.508684  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:40.508749  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:37.903083  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:40.402562  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:37.695031  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:40.193724  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:39.921480  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:42.420201  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:40.542418  438716 cri.go:89] found id: ""
	I0819 19:14:40.542456  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.542465  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:40.542472  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:40.542533  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:40.577367  438716 cri.go:89] found id: ""
	I0819 19:14:40.577399  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.577408  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:40.577414  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:40.577476  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:40.611111  438716 cri.go:89] found id: ""
	I0819 19:14:40.611138  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.611147  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:40.611155  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:40.611222  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:40.650769  438716 cri.go:89] found id: ""
	I0819 19:14:40.650797  438716 logs.go:276] 0 containers: []
	W0819 19:14:40.650805  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:40.650814  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:40.650827  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:40.688085  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:40.688111  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:40.740187  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:40.740225  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:40.754774  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:40.754803  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:40.828689  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:40.828712  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:40.828728  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:43.419171  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:43.432127  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:43.432201  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:43.468751  438716 cri.go:89] found id: ""
	I0819 19:14:43.468778  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.468787  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:43.468803  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:43.468870  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:43.503290  438716 cri.go:89] found id: ""
	I0819 19:14:43.503319  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.503328  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:43.503334  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:43.503390  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:43.536382  438716 cri.go:89] found id: ""
	I0819 19:14:43.536416  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.536435  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:43.536443  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:43.536494  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:43.571570  438716 cri.go:89] found id: ""
	I0819 19:14:43.571602  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.571611  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:43.571617  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:43.571682  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:43.610421  438716 cri.go:89] found id: ""
	I0819 19:14:43.610455  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.610465  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:43.610473  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:43.610524  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:43.647173  438716 cri.go:89] found id: ""
	I0819 19:14:43.647200  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.647209  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:43.647215  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:43.647266  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:43.684493  438716 cri.go:89] found id: ""
	I0819 19:14:43.684525  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.684535  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:43.684541  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:43.684609  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:43.718781  438716 cri.go:89] found id: ""
	I0819 19:14:43.718811  438716 logs.go:276] 0 containers: []
	W0819 19:14:43.718822  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:43.718834  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:43.718858  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:43.732546  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:43.732578  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:43.819640  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:43.819665  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:43.819700  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:43.900246  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:43.900286  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:43.941751  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:43.941783  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:42.901387  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:44.901876  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:42.693950  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:45.193132  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:44.918631  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:47.417977  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:46.498232  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:46.511167  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:46.511237  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:46.545493  438716 cri.go:89] found id: ""
	I0819 19:14:46.545528  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.545541  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:46.545549  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:46.545607  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:46.580599  438716 cri.go:89] found id: ""
	I0819 19:14:46.580626  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.580634  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:46.580640  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:46.580760  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:46.614515  438716 cri.go:89] found id: ""
	I0819 19:14:46.614551  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.614561  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:46.614570  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:46.614637  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:46.647767  438716 cri.go:89] found id: ""
	I0819 19:14:46.647803  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.647816  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:46.647825  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:46.647893  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:46.681660  438716 cri.go:89] found id: ""
	I0819 19:14:46.681695  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.681707  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:46.681717  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:46.681788  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:46.718828  438716 cri.go:89] found id: ""
	I0819 19:14:46.718858  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.718868  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:46.718875  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:46.718929  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:46.760524  438716 cri.go:89] found id: ""
	I0819 19:14:46.760553  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.760561  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:46.760569  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:46.760634  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:46.799014  438716 cri.go:89] found id: ""
	I0819 19:14:46.799042  438716 logs.go:276] 0 containers: []
	W0819 19:14:46.799054  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:46.799067  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:46.799135  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:46.850769  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:46.850812  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:46.865647  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:46.865698  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:46.942197  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:46.942228  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:46.942244  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:47.019295  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:47.019337  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:49.562713  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:49.575406  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:49.575484  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:49.610067  438716 cri.go:89] found id: ""
	I0819 19:14:49.610105  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.610115  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:49.610121  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:49.610182  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:49.646164  438716 cri.go:89] found id: ""
	I0819 19:14:49.646205  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.646230  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:49.646238  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:49.646317  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:49.680268  438716 cri.go:89] found id: ""
	I0819 19:14:49.680303  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.680314  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:49.680322  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:49.680387  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:49.714952  438716 cri.go:89] found id: ""
	I0819 19:14:49.714981  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.714992  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:49.715001  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:49.715067  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:49.749483  438716 cri.go:89] found id: ""
	I0819 19:14:49.749516  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.749528  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:49.749537  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:49.749616  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:49.794506  438716 cri.go:89] found id: ""
	I0819 19:14:49.794538  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.794550  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:49.794558  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:49.794628  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:49.847284  438716 cri.go:89] found id: ""
	I0819 19:14:49.847313  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.847324  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:49.847334  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:49.847398  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:49.903800  438716 cri.go:89] found id: ""
	I0819 19:14:49.903829  438716 logs.go:276] 0 containers: []
	W0819 19:14:49.903839  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:49.903850  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:49.903867  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:49.972836  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:49.972866  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:49.972885  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:50.049939  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:50.049976  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:50.086514  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:50.086550  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:50.140681  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:50.140718  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:46.903667  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:49.402220  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:51.402281  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:47.693723  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:49.694755  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:52.193220  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:49.919931  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:52.419880  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:52.656573  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:52.670043  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:52.670124  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:52.704514  438716 cri.go:89] found id: ""
	I0819 19:14:52.704541  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.704551  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:52.704558  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:52.704621  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:52.738329  438716 cri.go:89] found id: ""
	I0819 19:14:52.738357  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.738365  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:52.738371  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:52.738423  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:52.774886  438716 cri.go:89] found id: ""
	I0819 19:14:52.774917  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.774926  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:52.774933  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:52.774986  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:52.810262  438716 cri.go:89] found id: ""
	I0819 19:14:52.810288  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.810296  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:52.810303  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:52.810363  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:52.848429  438716 cri.go:89] found id: ""
	I0819 19:14:52.848455  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.848463  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:52.848474  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:52.848539  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:52.886135  438716 cri.go:89] found id: ""
	I0819 19:14:52.886163  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.886179  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:52.886185  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:52.886241  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:52.923288  438716 cri.go:89] found id: ""
	I0819 19:14:52.923314  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.923325  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:52.923333  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:52.923397  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:52.957273  438716 cri.go:89] found id: ""
	I0819 19:14:52.957303  438716 logs.go:276] 0 containers: []
	W0819 19:14:52.957315  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:52.957328  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:52.957345  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:52.970687  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:52.970714  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:53.045081  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:53.045108  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:53.045125  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:53.122233  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:53.122279  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:53.161525  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:53.161554  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:53.901584  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:55.902739  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:54.194220  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:56.197070  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:54.917358  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:56.918562  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:58.919041  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:55.714177  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:55.733726  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:55.733809  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:55.781435  438716 cri.go:89] found id: ""
	I0819 19:14:55.781472  438716 logs.go:276] 0 containers: []
	W0819 19:14:55.781485  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:55.781493  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:55.781560  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:55.846316  438716 cri.go:89] found id: ""
	I0819 19:14:55.846351  438716 logs.go:276] 0 containers: []
	W0819 19:14:55.846362  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:55.846370  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:55.846439  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:55.881587  438716 cri.go:89] found id: ""
	I0819 19:14:55.881623  438716 logs.go:276] 0 containers: []
	W0819 19:14:55.881635  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:55.881644  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:55.881719  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:55.919332  438716 cri.go:89] found id: ""
	I0819 19:14:55.919374  438716 logs.go:276] 0 containers: []
	W0819 19:14:55.919382  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:55.919389  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:55.919441  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:55.954704  438716 cri.go:89] found id: ""
	I0819 19:14:55.954739  438716 logs.go:276] 0 containers: []
	W0819 19:14:55.954752  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:55.954761  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:55.954836  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:55.989289  438716 cri.go:89] found id: ""
	I0819 19:14:55.989321  438716 logs.go:276] 0 containers: []
	W0819 19:14:55.989332  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:55.989340  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:55.989406  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:56.025771  438716 cri.go:89] found id: ""
	I0819 19:14:56.025800  438716 logs.go:276] 0 containers: []
	W0819 19:14:56.025809  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:56.025816  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:56.025883  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:56.065631  438716 cri.go:89] found id: ""
	I0819 19:14:56.065673  438716 logs.go:276] 0 containers: []
	W0819 19:14:56.065686  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:56.065699  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:56.065722  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:56.119482  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:56.119523  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:56.133885  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:56.133915  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:56.207012  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:56.207033  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:56.207045  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:56.288158  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:56.288195  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:58.829677  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:14:58.844085  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:14:58.844158  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:14:58.880900  438716 cri.go:89] found id: ""
	I0819 19:14:58.880934  438716 logs.go:276] 0 containers: []
	W0819 19:14:58.880945  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:14:58.880951  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:14:58.881016  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:14:58.918833  438716 cri.go:89] found id: ""
	I0819 19:14:58.918862  438716 logs.go:276] 0 containers: []
	W0819 19:14:58.918872  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:14:58.918881  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:14:58.918939  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:14:58.956577  438716 cri.go:89] found id: ""
	I0819 19:14:58.956612  438716 logs.go:276] 0 containers: []
	W0819 19:14:58.956623  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:14:58.956634  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:14:58.956705  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:14:58.993884  438716 cri.go:89] found id: ""
	I0819 19:14:58.993914  438716 logs.go:276] 0 containers: []
	W0819 19:14:58.993923  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:14:58.993930  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:14:58.993988  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:14:59.031366  438716 cri.go:89] found id: ""
	I0819 19:14:59.031389  438716 logs.go:276] 0 containers: []
	W0819 19:14:59.031398  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:14:59.031405  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:14:59.031464  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:14:59.072014  438716 cri.go:89] found id: ""
	I0819 19:14:59.072047  438716 logs.go:276] 0 containers: []
	W0819 19:14:59.072058  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:14:59.072065  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:14:59.072129  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:14:59.108713  438716 cri.go:89] found id: ""
	I0819 19:14:59.108744  438716 logs.go:276] 0 containers: []
	W0819 19:14:59.108756  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:14:59.108765  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:14:59.108866  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:14:59.147599  438716 cri.go:89] found id: ""
	I0819 19:14:59.147634  438716 logs.go:276] 0 containers: []
	W0819 19:14:59.147647  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:14:59.147659  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:14:59.147695  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:14:59.224745  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:14:59.224781  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:14:59.264586  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:14:59.264616  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:14:59.317065  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:14:59.317104  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:14:59.331230  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:14:59.331264  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:14:59.398370  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:14:58.401471  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:00.402623  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:14:58.694096  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:01.193262  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:01.418063  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:03.418302  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:01.899123  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:01.912743  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:01.912824  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:01.949717  438716 cri.go:89] found id: ""
	I0819 19:15:01.949748  438716 logs.go:276] 0 containers: []
	W0819 19:15:01.949756  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:01.949763  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:01.949819  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:01.992776  438716 cri.go:89] found id: ""
	I0819 19:15:01.992802  438716 logs.go:276] 0 containers: []
	W0819 19:15:01.992812  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:01.992819  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:01.992884  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:02.030551  438716 cri.go:89] found id: ""
	I0819 19:15:02.030579  438716 logs.go:276] 0 containers: []
	W0819 19:15:02.030592  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:02.030600  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:02.030672  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:02.069927  438716 cri.go:89] found id: ""
	I0819 19:15:02.069955  438716 logs.go:276] 0 containers: []
	W0819 19:15:02.069964  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:02.069971  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:02.070031  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:02.106584  438716 cri.go:89] found id: ""
	I0819 19:15:02.106609  438716 logs.go:276] 0 containers: []
	W0819 19:15:02.106619  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:02.106629  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:02.106695  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:02.145007  438716 cri.go:89] found id: ""
	I0819 19:15:02.145035  438716 logs.go:276] 0 containers: []
	W0819 19:15:02.145044  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:02.145051  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:02.145113  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:02.180693  438716 cri.go:89] found id: ""
	I0819 19:15:02.180730  438716 logs.go:276] 0 containers: []
	W0819 19:15:02.180741  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:02.180748  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:02.180800  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:02.215563  438716 cri.go:89] found id: ""
	I0819 19:15:02.215597  438716 logs.go:276] 0 containers: []
	W0819 19:15:02.215609  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:02.215623  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:02.215641  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:02.285658  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:02.285692  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:02.285711  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:02.363620  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:02.363660  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:02.414240  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:02.414274  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:02.467336  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:02.467380  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:04.981935  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:04.995537  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:04.995611  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:05.032700  438716 cri.go:89] found id: ""
	I0819 19:15:05.032735  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.032748  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:05.032756  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:05.032827  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:05.069132  438716 cri.go:89] found id: ""
	I0819 19:15:05.069162  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.069173  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:05.069181  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:05.069247  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:05.105320  438716 cri.go:89] found id: ""
	I0819 19:15:05.105346  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.105355  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:05.105361  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:05.105421  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:05.142311  438716 cri.go:89] found id: ""
	I0819 19:15:05.142343  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.142354  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:05.142362  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:05.142412  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:05.177398  438716 cri.go:89] found id: ""
	I0819 19:15:05.177426  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.177437  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:05.177450  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:05.177506  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:05.212749  438716 cri.go:89] found id: ""
	I0819 19:15:05.212780  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.212789  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:05.212796  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:05.212854  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:05.246325  438716 cri.go:89] found id: ""
	I0819 19:15:05.246356  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.246364  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:05.246371  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:05.246420  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:05.287429  438716 cri.go:89] found id: ""
	I0819 19:15:05.287456  438716 logs.go:276] 0 containers: []
	W0819 19:15:05.287466  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:05.287476  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:05.287489  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:05.338742  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:05.338787  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:05.352948  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:05.352978  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:05.421478  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:05.421502  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:05.421529  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:05.497772  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:05.497809  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:02.902202  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:05.403518  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:03.193491  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:05.194340  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:05.419361  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:07.918522  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:08.040403  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:08.053761  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:08.053827  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:08.087047  438716 cri.go:89] found id: ""
	I0819 19:15:08.087073  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.087082  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:08.087089  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:08.087140  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:08.122012  438716 cri.go:89] found id: ""
	I0819 19:15:08.122048  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.122059  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:08.122068  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:08.122134  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:08.155319  438716 cri.go:89] found id: ""
	I0819 19:15:08.155349  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.155360  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:08.155368  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:08.155447  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:08.196003  438716 cri.go:89] found id: ""
	I0819 19:15:08.196027  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.196035  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:08.196041  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:08.196091  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:08.230798  438716 cri.go:89] found id: ""
	I0819 19:15:08.230826  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.230836  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:08.230845  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:08.230910  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:08.267522  438716 cri.go:89] found id: ""
	I0819 19:15:08.267554  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.267562  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:08.267569  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:08.267621  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:08.304775  438716 cri.go:89] found id: ""
	I0819 19:15:08.304801  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.304809  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:08.304815  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:08.304866  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:08.344694  438716 cri.go:89] found id: ""
	I0819 19:15:08.344720  438716 logs.go:276] 0 containers: []
	W0819 19:15:08.344734  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:08.344744  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:08.344757  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:08.383581  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:08.383619  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:08.433868  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:08.433905  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:08.447627  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:08.447657  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:08.518846  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:08.518869  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:08.518887  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:07.901746  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:09.902647  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:07.693351  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:10.193893  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:12.194400  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:09.919436  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:12.418215  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:11.104449  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:11.118149  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:11.118228  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:11.157917  438716 cri.go:89] found id: ""
	I0819 19:15:11.157951  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.157963  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:11.157971  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:11.158040  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:11.196685  438716 cri.go:89] found id: ""
	I0819 19:15:11.196711  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.196721  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:11.196729  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:11.196788  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:11.231089  438716 cri.go:89] found id: ""
	I0819 19:15:11.231124  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.231135  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:11.231144  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:11.231223  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:11.267001  438716 cri.go:89] found id: ""
	I0819 19:15:11.267032  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.267041  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:11.267048  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:11.267113  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:11.302178  438716 cri.go:89] found id: ""
	I0819 19:15:11.302210  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.302223  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:11.302232  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:11.302292  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:11.336335  438716 cri.go:89] found id: ""
	I0819 19:15:11.336368  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.336442  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:11.336458  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:11.336525  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:11.370891  438716 cri.go:89] found id: ""
	I0819 19:15:11.370926  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.370937  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:11.370945  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:11.371007  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:11.407439  438716 cri.go:89] found id: ""
	I0819 19:15:11.407466  438716 logs.go:276] 0 containers: []
	W0819 19:15:11.407473  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:11.407482  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:11.407497  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:11.458692  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:11.458735  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:11.473104  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:11.473133  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:11.542004  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:11.542031  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:11.542050  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:11.619972  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:11.620014  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:14.159220  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:14.173135  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:14.173204  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:14.210347  438716 cri.go:89] found id: ""
	I0819 19:15:14.210377  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.210389  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:14.210398  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:14.210468  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:14.247143  438716 cri.go:89] found id: ""
	I0819 19:15:14.247169  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.247180  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:14.247187  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:14.247260  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:14.284949  438716 cri.go:89] found id: ""
	I0819 19:15:14.284981  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.284995  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:14.285003  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:14.285071  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:14.326801  438716 cri.go:89] found id: ""
	I0819 19:15:14.326826  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.326834  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:14.326842  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:14.326903  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:14.362730  438716 cri.go:89] found id: ""
	I0819 19:15:14.362764  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.362775  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:14.362783  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:14.362852  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:14.403406  438716 cri.go:89] found id: ""
	I0819 19:15:14.403437  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.403448  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:14.403456  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:14.403514  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:14.440641  438716 cri.go:89] found id: ""
	I0819 19:15:14.440670  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.440678  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:14.440685  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:14.440737  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:14.479477  438716 cri.go:89] found id: ""
	I0819 19:15:14.479511  438716 logs.go:276] 0 containers: []
	W0819 19:15:14.479521  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:14.479530  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:14.479544  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:14.530573  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:14.530620  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:14.545329  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:14.545368  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:14.619632  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:14.619652  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:14.619680  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:14.694923  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:14.694956  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:12.401350  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:14.402845  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:14.693534  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:16.693737  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:14.420872  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:16.918227  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:18.919244  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:17.237830  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:17.250579  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:17.250645  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:17.284706  438716 cri.go:89] found id: ""
	I0819 19:15:17.284738  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.284750  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:17.284759  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:17.284832  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:17.320313  438716 cri.go:89] found id: ""
	I0819 19:15:17.320342  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.320350  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:17.320356  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:17.320419  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:17.355974  438716 cri.go:89] found id: ""
	I0819 19:15:17.356008  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.356018  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:17.356027  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:17.356093  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:17.390759  438716 cri.go:89] found id: ""
	I0819 19:15:17.390786  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.390795  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:17.390803  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:17.390861  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:17.431951  438716 cri.go:89] found id: ""
	I0819 19:15:17.431982  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.431993  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:17.432001  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:17.432068  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:17.467183  438716 cri.go:89] found id: ""
	I0819 19:15:17.467215  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.467227  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:17.467236  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:17.467306  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:17.502678  438716 cri.go:89] found id: ""
	I0819 19:15:17.502709  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.502721  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:17.502730  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:17.502801  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:17.537597  438716 cri.go:89] found id: ""
	I0819 19:15:17.537629  438716 logs.go:276] 0 containers: []
	W0819 19:15:17.537643  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:17.537656  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:17.537672  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:17.620076  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:17.620117  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:17.659979  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:17.660009  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:17.710963  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:17.711006  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:17.725556  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:17.725590  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:17.796176  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:20.297246  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:20.311395  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:20.311476  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:20.352279  438716 cri.go:89] found id: ""
	I0819 19:15:20.352317  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.352328  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:20.352338  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:20.352401  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:20.390335  438716 cri.go:89] found id: ""
	I0819 19:15:20.390368  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.390377  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:20.390384  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:20.390450  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:20.430264  438716 cri.go:89] found id: ""
	I0819 19:15:20.430300  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.430312  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:20.430320  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:20.430386  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:20.469670  438716 cri.go:89] found id: ""
	I0819 19:15:20.469703  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.469715  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:20.469723  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:20.469790  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:20.503233  438716 cri.go:89] found id: ""
	I0819 19:15:20.503263  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.503274  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:20.503283  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:20.503371  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:16.902246  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:19.402407  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:18.693921  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:21.193124  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:21.418463  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:23.418730  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:20.538180  438716 cri.go:89] found id: ""
	I0819 19:15:20.538211  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.538223  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:20.538231  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:20.538302  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:20.573301  438716 cri.go:89] found id: ""
	I0819 19:15:20.573329  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.573337  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:20.573352  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:20.573411  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:20.606962  438716 cri.go:89] found id: ""
	I0819 19:15:20.606995  438716 logs.go:276] 0 containers: []
	W0819 19:15:20.607007  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:20.607019  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:20.607035  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:20.658392  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:20.658428  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:20.672063  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:20.672092  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:20.747987  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:20.748010  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:20.748035  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:20.829367  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:20.829415  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:23.378885  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:23.393711  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:23.393778  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:23.430629  438716 cri.go:89] found id: ""
	I0819 19:15:23.430655  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.430665  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:23.430675  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:23.430727  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:23.467509  438716 cri.go:89] found id: ""
	I0819 19:15:23.467541  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.467552  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:23.467560  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:23.467634  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:23.505313  438716 cri.go:89] found id: ""
	I0819 19:15:23.505351  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.505359  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:23.505366  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:23.505416  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:23.543393  438716 cri.go:89] found id: ""
	I0819 19:15:23.543428  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.543441  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:23.543450  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:23.543514  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:23.578265  438716 cri.go:89] found id: ""
	I0819 19:15:23.578293  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.578301  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:23.578308  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:23.578376  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:23.613951  438716 cri.go:89] found id: ""
	I0819 19:15:23.613981  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.613989  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:23.613996  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:23.614061  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:23.647387  438716 cri.go:89] found id: ""
	I0819 19:15:23.647418  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.647426  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:23.647433  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:23.647501  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:23.682482  438716 cri.go:89] found id: ""
	I0819 19:15:23.682510  438716 logs.go:276] 0 containers: []
	W0819 19:15:23.682519  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:23.682530  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:23.682547  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:23.696601  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:23.696629  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:23.766762  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:23.766788  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:23.766804  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:23.850947  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:23.850988  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:23.891113  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:23.891146  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:21.902926  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:24.401874  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:23.193192  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:25.193347  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:25.919555  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:28.419920  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:26.444086  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:26.457774  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:26.457844  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:26.494525  438716 cri.go:89] found id: ""
	I0819 19:15:26.494552  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.494560  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:26.494567  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:26.494618  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:26.535317  438716 cri.go:89] found id: ""
	I0819 19:15:26.535348  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.535359  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:26.535368  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:26.535437  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:26.570853  438716 cri.go:89] found id: ""
	I0819 19:15:26.570886  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.570896  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:26.570920  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:26.570987  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:26.610739  438716 cri.go:89] found id: ""
	I0819 19:15:26.610773  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.610785  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:26.610794  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:26.610885  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:26.651274  438716 cri.go:89] found id: ""
	I0819 19:15:26.651303  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.651311  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:26.651318  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:26.651367  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:26.689963  438716 cri.go:89] found id: ""
	I0819 19:15:26.689993  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.690005  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:26.690013  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:26.690083  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:26.729433  438716 cri.go:89] found id: ""
	I0819 19:15:26.729465  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.729475  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:26.729483  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:26.729548  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:26.768386  438716 cri.go:89] found id: ""
	I0819 19:15:26.768418  438716 logs.go:276] 0 containers: []
	W0819 19:15:26.768427  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:26.768436  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:26.768449  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:26.821526  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:26.821564  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:26.835714  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:26.835763  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:26.907981  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:26.908007  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:26.908023  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:26.991969  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:26.992008  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:29.529743  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:29.544812  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:29.544883  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:29.581455  438716 cri.go:89] found id: ""
	I0819 19:15:29.581486  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.581496  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:29.581503  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:29.581559  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:29.634542  438716 cri.go:89] found id: ""
	I0819 19:15:29.634576  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.634587  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:29.634596  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:29.634663  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:29.670388  438716 cri.go:89] found id: ""
	I0819 19:15:29.670422  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.670439  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:29.670449  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:29.670511  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:29.712267  438716 cri.go:89] found id: ""
	I0819 19:15:29.712293  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.712304  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:29.712313  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:29.712376  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:29.752392  438716 cri.go:89] found id: ""
	I0819 19:15:29.752423  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.752432  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:29.752438  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:29.752500  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:29.791734  438716 cri.go:89] found id: ""
	I0819 19:15:29.791763  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.791772  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:29.791778  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:29.791830  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:29.832882  438716 cri.go:89] found id: ""
	I0819 19:15:29.832910  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.832921  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:29.832929  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:29.832986  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:29.872035  438716 cri.go:89] found id: ""
	I0819 19:15:29.872068  438716 logs.go:276] 0 containers: []
	W0819 19:15:29.872076  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:29.872086  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:29.872098  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:29.926551  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:29.926588  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:29.940500  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:29.940537  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:30.010327  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:30.010348  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:30.010368  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:30.090864  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:30.090910  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:26.902881  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:29.401449  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:27.692753  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:29.693161  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:32.193256  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:30.421066  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:32.918642  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:32.636291  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:32.649264  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:32.649334  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:32.683746  438716 cri.go:89] found id: ""
	I0819 19:15:32.683774  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.683785  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:32.683794  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:32.683867  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:32.723805  438716 cri.go:89] found id: ""
	I0819 19:15:32.723838  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.723850  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:32.723858  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:32.723917  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:32.758119  438716 cri.go:89] found id: ""
	I0819 19:15:32.758148  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.758157  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:32.758164  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:32.758215  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:32.792726  438716 cri.go:89] found id: ""
	I0819 19:15:32.792754  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.792768  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:32.792775  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:32.792823  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:32.829180  438716 cri.go:89] found id: ""
	I0819 19:15:32.829208  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.829217  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:32.829224  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:32.829274  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:32.869045  438716 cri.go:89] found id: ""
	I0819 19:15:32.869081  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.869093  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:32.869102  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:32.869172  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:32.904780  438716 cri.go:89] found id: ""
	I0819 19:15:32.904803  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.904811  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:32.904818  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:32.904870  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:32.940846  438716 cri.go:89] found id: ""
	I0819 19:15:32.940876  438716 logs.go:276] 0 containers: []
	W0819 19:15:32.940886  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:32.940900  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:32.940924  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:33.008569  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:33.008592  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:33.008606  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:33.092605  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:33.092657  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:33.133016  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:33.133045  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:33.188335  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:33.188376  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:31.901719  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:34.401060  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:36.401983  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:34.193690  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:36.694042  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:34.918948  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:37.418186  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:35.704043  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:35.717647  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:35.717708  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:35.752337  438716 cri.go:89] found id: ""
	I0819 19:15:35.752364  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.752372  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:35.752378  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:35.752431  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:35.787233  438716 cri.go:89] found id: ""
	I0819 19:15:35.787261  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.787269  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:35.787275  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:35.787334  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:35.819641  438716 cri.go:89] found id: ""
	I0819 19:15:35.819667  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.819697  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:35.819705  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:35.819775  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:35.856133  438716 cri.go:89] found id: ""
	I0819 19:15:35.856160  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.856169  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:35.856176  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:35.856240  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:35.889390  438716 cri.go:89] found id: ""
	I0819 19:15:35.889422  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.889432  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:35.889438  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:35.889501  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:35.927477  438716 cri.go:89] found id: ""
	I0819 19:15:35.927519  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.927531  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:35.927539  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:35.927600  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:35.961787  438716 cri.go:89] found id: ""
	I0819 19:15:35.961825  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.961837  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:35.961845  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:35.961912  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:35.998350  438716 cri.go:89] found id: ""
	I0819 19:15:35.998384  438716 logs.go:276] 0 containers: []
	W0819 19:15:35.998396  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:35.998407  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:35.998419  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:36.054352  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:36.054394  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:36.078278  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:36.078311  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:36.166388  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:36.166416  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:36.166433  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:36.247222  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:36.247269  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:38.786510  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:38.800306  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:38.800364  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:38.834555  438716 cri.go:89] found id: ""
	I0819 19:15:38.834583  438716 logs.go:276] 0 containers: []
	W0819 19:15:38.834591  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:38.834598  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:38.834648  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:38.869078  438716 cri.go:89] found id: ""
	I0819 19:15:38.869105  438716 logs.go:276] 0 containers: []
	W0819 19:15:38.869114  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:38.869120  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:38.869174  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:38.903702  438716 cri.go:89] found id: ""
	I0819 19:15:38.903728  438716 logs.go:276] 0 containers: []
	W0819 19:15:38.903736  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:38.903743  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:38.903795  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:38.938326  438716 cri.go:89] found id: ""
	I0819 19:15:38.938352  438716 logs.go:276] 0 containers: []
	W0819 19:15:38.938360  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:38.938367  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:38.938422  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:38.976032  438716 cri.go:89] found id: ""
	I0819 19:15:38.976063  438716 logs.go:276] 0 containers: []
	W0819 19:15:38.976075  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:38.976084  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:38.976149  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:39.009957  438716 cri.go:89] found id: ""
	I0819 19:15:39.009991  438716 logs.go:276] 0 containers: []
	W0819 19:15:39.010002  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:39.010011  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:39.010077  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:39.046381  438716 cri.go:89] found id: ""
	I0819 19:15:39.046408  438716 logs.go:276] 0 containers: []
	W0819 19:15:39.046416  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:39.046422  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:39.046474  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:39.083022  438716 cri.go:89] found id: ""
	I0819 19:15:39.083050  438716 logs.go:276] 0 containers: []
	W0819 19:15:39.083058  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:39.083067  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:39.083079  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:39.160731  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:39.160768  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:39.204846  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:39.204879  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:39.259248  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:39.259287  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:39.273764  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:39.273796  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:39.344477  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:38.402275  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:40.901494  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:39.194367  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:41.692933  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:39.419291  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:41.919708  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:43.919984  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:41.845258  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:41.861691  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:41.861754  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:41.908235  438716 cri.go:89] found id: ""
	I0819 19:15:41.908269  438716 logs.go:276] 0 containers: []
	W0819 19:15:41.908281  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:41.908289  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:41.908357  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:41.965631  438716 cri.go:89] found id: ""
	I0819 19:15:41.965657  438716 logs.go:276] 0 containers: []
	W0819 19:15:41.965667  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:41.965673  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:41.965732  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:42.004540  438716 cri.go:89] found id: ""
	I0819 19:15:42.004569  438716 logs.go:276] 0 containers: []
	W0819 19:15:42.004578  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:42.004585  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:42.004650  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:42.042189  438716 cri.go:89] found id: ""
	I0819 19:15:42.042215  438716 logs.go:276] 0 containers: []
	W0819 19:15:42.042224  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:42.042231  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:42.042299  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:42.079313  438716 cri.go:89] found id: ""
	I0819 19:15:42.079349  438716 logs.go:276] 0 containers: []
	W0819 19:15:42.079361  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:42.079370  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:42.079450  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:42.116130  438716 cri.go:89] found id: ""
	I0819 19:15:42.116164  438716 logs.go:276] 0 containers: []
	W0819 19:15:42.116176  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:42.116184  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:42.116253  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:42.154886  438716 cri.go:89] found id: ""
	I0819 19:15:42.154919  438716 logs.go:276] 0 containers: []
	W0819 19:15:42.154928  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:42.154935  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:42.154987  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:42.191204  438716 cri.go:89] found id: ""
	I0819 19:15:42.191237  438716 logs.go:276] 0 containers: []
	W0819 19:15:42.191248  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:42.191258  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:42.191275  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:42.244395  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:42.244434  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:42.258029  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:42.258066  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:42.323461  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:42.323481  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:42.323498  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:42.401932  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:42.401969  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:44.943615  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:44.958243  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:44.958315  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:44.995181  438716 cri.go:89] found id: ""
	I0819 19:15:44.995217  438716 logs.go:276] 0 containers: []
	W0819 19:15:44.995236  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:44.995244  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:44.995309  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:45.030705  438716 cri.go:89] found id: ""
	I0819 19:15:45.030743  438716 logs.go:276] 0 containers: []
	W0819 19:15:45.030752  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:45.030759  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:45.030814  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:45.068186  438716 cri.go:89] found id: ""
	I0819 19:15:45.068215  438716 logs.go:276] 0 containers: []
	W0819 19:15:45.068224  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:45.068231  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:45.068314  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:45.105415  438716 cri.go:89] found id: ""
	I0819 19:15:45.105443  438716 logs.go:276] 0 containers: []
	W0819 19:15:45.105452  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:45.105458  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:45.105517  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:45.143628  438716 cri.go:89] found id: ""
	I0819 19:15:45.143662  438716 logs.go:276] 0 containers: []
	W0819 19:15:45.143694  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:45.143704  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:45.143771  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:45.184896  438716 cri.go:89] found id: ""
	I0819 19:15:45.184922  438716 logs.go:276] 0 containers: []
	W0819 19:15:45.184930  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:45.184937  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:45.185000  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:45.222599  438716 cri.go:89] found id: ""
	I0819 19:15:45.222631  438716 logs.go:276] 0 containers: []
	W0819 19:15:45.222639  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:45.222645  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:45.222700  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:45.260310  438716 cri.go:89] found id: ""
	I0819 19:15:45.260341  438716 logs.go:276] 0 containers: []
	W0819 19:15:45.260352  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:45.260361  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:45.260379  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:45.273687  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:45.273718  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:45.351367  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:45.351390  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:45.351407  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:45.428751  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:45.428787  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:45.468830  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:45.468869  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:42.902576  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:45.402812  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:43.693205  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:46.192804  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:46.419903  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:48.918620  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:48.023654  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:48.037206  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:48.037294  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:48.071647  438716 cri.go:89] found id: ""
	I0819 19:15:48.071686  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.071695  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:48.071704  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:48.071765  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:48.106542  438716 cri.go:89] found id: ""
	I0819 19:15:48.106575  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.106586  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:48.106596  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:48.106662  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:48.151917  438716 cri.go:89] found id: ""
	I0819 19:15:48.151949  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.151959  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:48.151966  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:48.152022  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:48.190095  438716 cri.go:89] found id: ""
	I0819 19:15:48.190125  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.190137  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:48.190146  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:48.190211  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:48.227193  438716 cri.go:89] found id: ""
	I0819 19:15:48.227228  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.227240  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:48.227248  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:48.227317  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:48.261353  438716 cri.go:89] found id: ""
	I0819 19:15:48.261386  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.261396  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:48.261403  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:48.261455  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:48.295749  438716 cri.go:89] found id: ""
	I0819 19:15:48.295782  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.295794  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:48.295803  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:48.295874  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:48.338350  438716 cri.go:89] found id: ""
	I0819 19:15:48.338383  438716 logs.go:276] 0 containers: []
	W0819 19:15:48.338394  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:48.338404  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:48.338420  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:48.420705  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:48.420749  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:48.464114  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:48.464153  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:48.519461  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:48.519505  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:48.534324  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:48.534357  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:48.603580  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:47.900813  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:49.902363  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:48.194425  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:50.693598  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:51.419909  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:53.918494  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:51.104343  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:51.117552  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:51.117629  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:51.150630  438716 cri.go:89] found id: ""
	I0819 19:15:51.150665  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.150677  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:51.150691  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:51.150765  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:51.184316  438716 cri.go:89] found id: ""
	I0819 19:15:51.184346  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.184356  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:51.184362  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:51.184410  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:51.221252  438716 cri.go:89] found id: ""
	I0819 19:15:51.221277  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.221286  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:51.221292  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:51.221349  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:51.255727  438716 cri.go:89] found id: ""
	I0819 19:15:51.255755  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.255763  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:51.255769  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:51.255823  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:51.290615  438716 cri.go:89] found id: ""
	I0819 19:15:51.290651  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.290660  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:51.290667  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:51.290721  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:51.326895  438716 cri.go:89] found id: ""
	I0819 19:15:51.326922  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.326930  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:51.326937  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:51.326987  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:51.365516  438716 cri.go:89] found id: ""
	I0819 19:15:51.365547  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.365558  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:51.365566  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:51.365632  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:51.399002  438716 cri.go:89] found id: ""
	I0819 19:15:51.399030  438716 logs.go:276] 0 containers: []
	W0819 19:15:51.399038  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:51.399048  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:51.399059  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:51.453481  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:51.453524  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:51.467246  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:51.467277  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:51.548547  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:51.548578  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:51.548595  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:51.635627  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:51.635670  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:54.175003  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:54.190462  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:54.190537  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:54.232140  438716 cri.go:89] found id: ""
	I0819 19:15:54.232168  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.232178  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:54.232186  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:54.232254  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:54.267700  438716 cri.go:89] found id: ""
	I0819 19:15:54.267732  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.267742  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:54.267748  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:54.267807  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:54.306272  438716 cri.go:89] found id: ""
	I0819 19:15:54.306300  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.306308  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:54.306315  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:54.306368  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:54.341503  438716 cri.go:89] found id: ""
	I0819 19:15:54.341536  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.341549  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:54.341556  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:54.341609  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:54.375535  438716 cri.go:89] found id: ""
	I0819 19:15:54.375570  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.375582  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:54.375591  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:54.375661  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:54.409611  438716 cri.go:89] found id: ""
	I0819 19:15:54.409641  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.409653  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:54.409662  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:54.409731  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:54.444318  438716 cri.go:89] found id: ""
	I0819 19:15:54.444346  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.444358  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:54.444366  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:54.444425  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:54.480746  438716 cri.go:89] found id: ""
	I0819 19:15:54.480777  438716 logs.go:276] 0 containers: []
	W0819 19:15:54.480789  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:54.480802  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:54.480817  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:54.534209  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:54.534245  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:54.549557  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:54.549598  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:54.625086  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:54.625111  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:54.625136  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:54.705549  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:54.705589  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:15:52.401150  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:54.402049  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:56.402545  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:52.693826  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:54.694875  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:57.193741  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:56.418166  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:58.418955  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:57.257440  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:15:57.276724  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:15:57.276812  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:15:57.319032  438716 cri.go:89] found id: ""
	I0819 19:15:57.319062  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.319073  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:15:57.319081  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:15:57.319163  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:15:57.357093  438716 cri.go:89] found id: ""
	I0819 19:15:57.357129  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.357140  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:15:57.357152  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:15:57.357222  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:15:57.393978  438716 cri.go:89] found id: ""
	I0819 19:15:57.394013  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.394025  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:15:57.394033  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:15:57.394102  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:15:57.428731  438716 cri.go:89] found id: ""
	I0819 19:15:57.428760  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.428768  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:15:57.428775  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:15:57.428824  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:57.467772  438716 cri.go:89] found id: ""
	I0819 19:15:57.467810  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.467822  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:15:57.467832  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:15:57.467904  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:15:57.502398  438716 cri.go:89] found id: ""
	I0819 19:15:57.502434  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.502444  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:15:57.502450  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:15:57.502503  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:15:57.536729  438716 cri.go:89] found id: ""
	I0819 19:15:57.536760  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.536771  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:15:57.536779  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:15:57.536845  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:15:57.574738  438716 cri.go:89] found id: ""
	I0819 19:15:57.574762  438716 logs.go:276] 0 containers: []
	W0819 19:15:57.574770  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:15:57.574780  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:15:57.574793  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:15:57.630063  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:15:57.630113  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:15:57.643083  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:15:57.643111  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:15:57.725081  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:15:57.725104  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:15:57.725118  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:15:57.805065  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:15:57.805105  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:00.344557  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:00.357940  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:00.358005  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:00.399319  438716 cri.go:89] found id: ""
	I0819 19:16:00.399355  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.399368  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:00.399377  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:00.399446  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:00.444223  438716 cri.go:89] found id: ""
	I0819 19:16:00.444254  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.444264  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:00.444271  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:00.444323  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:00.479903  438716 cri.go:89] found id: ""
	I0819 19:16:00.479932  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.479942  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:00.479948  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:00.480003  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:00.515923  438716 cri.go:89] found id: ""
	I0819 19:16:00.515954  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.515966  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:00.515974  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:00.516043  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:15:58.901349  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:00.902114  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:15:59.194660  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:01.693174  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:00.419210  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:02.918814  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:00.551319  438716 cri.go:89] found id: ""
	I0819 19:16:00.551348  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.551360  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:00.551370  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:00.551434  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:00.587847  438716 cri.go:89] found id: ""
	I0819 19:16:00.587882  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.587892  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:00.587901  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:00.587976  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:00.624769  438716 cri.go:89] found id: ""
	I0819 19:16:00.624800  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.624812  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:00.624820  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:00.624894  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:00.659300  438716 cri.go:89] found id: ""
	I0819 19:16:00.659330  438716 logs.go:276] 0 containers: []
	W0819 19:16:00.659342  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:00.659355  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:00.659371  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:00.739073  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:00.739113  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:00.779087  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:00.779116  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:00.831864  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:00.831914  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:00.845832  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:00.845863  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:00.920622  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:03.420751  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:03.434599  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:03.434664  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:03.469288  438716 cri.go:89] found id: ""
	I0819 19:16:03.469326  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.469349  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:03.469372  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:03.469445  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:03.507885  438716 cri.go:89] found id: ""
	I0819 19:16:03.507911  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.507927  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:03.507934  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:03.507987  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:03.543805  438716 cri.go:89] found id: ""
	I0819 19:16:03.543837  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.543847  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:03.543854  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:03.543928  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:03.584060  438716 cri.go:89] found id: ""
	I0819 19:16:03.584093  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.584105  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:03.584114  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:03.584202  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:03.619724  438716 cri.go:89] found id: ""
	I0819 19:16:03.619758  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.619769  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:03.619776  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:03.619854  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:03.657180  438716 cri.go:89] found id: ""
	I0819 19:16:03.657213  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.657225  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:03.657234  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:03.657303  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:03.695099  438716 cri.go:89] found id: ""
	I0819 19:16:03.695125  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.695134  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:03.695139  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:03.695193  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:03.730263  438716 cri.go:89] found id: ""
	I0819 19:16:03.730291  438716 logs.go:276] 0 containers: []
	W0819 19:16:03.730302  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:03.730314  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:03.730331  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:03.780776  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:03.780816  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:03.795381  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:03.795419  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:03.869995  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:03.870016  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:03.870029  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:03.949654  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:03.949691  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:03.402500  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:05.902412  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:03.694220  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:06.193280  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:04.919284  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:07.418061  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:06.493589  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:06.506758  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:06.506834  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:06.545325  438716 cri.go:89] found id: ""
	I0819 19:16:06.545357  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.545370  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:06.545378  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:06.545443  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:06.581708  438716 cri.go:89] found id: ""
	I0819 19:16:06.581741  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.581753  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:06.581761  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:06.581828  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:06.626543  438716 cri.go:89] found id: ""
	I0819 19:16:06.626588  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.626600  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:06.626609  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:06.626676  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:06.662466  438716 cri.go:89] found id: ""
	I0819 19:16:06.662499  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.662509  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:06.662518  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:06.662585  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:06.701584  438716 cri.go:89] found id: ""
	I0819 19:16:06.701619  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.701628  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:06.701635  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:06.701688  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:06.736245  438716 cri.go:89] found id: ""
	I0819 19:16:06.736280  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.736292  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:06.736300  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:06.736392  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:06.774411  438716 cri.go:89] found id: ""
	I0819 19:16:06.774439  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.774447  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:06.774454  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:06.774510  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:06.809560  438716 cri.go:89] found id: ""
	I0819 19:16:06.809597  438716 logs.go:276] 0 containers: []
	W0819 19:16:06.809609  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:06.809624  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:06.809648  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:06.884841  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:06.884862  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:06.884878  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:06.971467  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:06.971507  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:07.010737  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:07.010767  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:07.063807  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:07.063846  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:09.578451  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:09.591643  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:09.591737  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:09.625607  438716 cri.go:89] found id: ""
	I0819 19:16:09.625639  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.625650  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:09.625659  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:09.625727  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:09.669145  438716 cri.go:89] found id: ""
	I0819 19:16:09.669177  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.669185  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:09.669191  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:09.669254  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:09.707035  438716 cri.go:89] found id: ""
	I0819 19:16:09.707064  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.707073  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:09.707080  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:09.707142  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:09.742089  438716 cri.go:89] found id: ""
	I0819 19:16:09.742116  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.742125  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:09.742132  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:09.742193  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:09.782736  438716 cri.go:89] found id: ""
	I0819 19:16:09.782774  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.782785  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:09.782794  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:09.782860  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:09.818003  438716 cri.go:89] found id: ""
	I0819 19:16:09.818031  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.818040  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:09.818047  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:09.818110  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:09.852716  438716 cri.go:89] found id: ""
	I0819 19:16:09.852748  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.852757  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:09.852764  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:09.852828  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:09.887176  438716 cri.go:89] found id: ""
	I0819 19:16:09.887206  438716 logs.go:276] 0 containers: []
	W0819 19:16:09.887218  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:09.887230  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:09.887247  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:09.901547  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:09.901573  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:09.969153  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:09.969190  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:09.969205  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:10.053777  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:10.053820  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:10.100888  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:10.100916  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:08.401650  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:10.402279  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:08.194305  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:10.693097  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:09.418856  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:11.918836  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:12.655112  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:12.667824  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:12.667897  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:12.702337  438716 cri.go:89] found id: ""
	I0819 19:16:12.702364  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.702373  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:12.702379  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:12.702432  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:12.736628  438716 cri.go:89] found id: ""
	I0819 19:16:12.736655  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.736663  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:12.736669  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:12.736720  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:12.773598  438716 cri.go:89] found id: ""
	I0819 19:16:12.773628  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.773636  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:12.773643  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:12.773695  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:12.806584  438716 cri.go:89] found id: ""
	I0819 19:16:12.806620  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.806632  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:12.806640  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:12.806723  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:12.840535  438716 cri.go:89] found id: ""
	I0819 19:16:12.840561  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.840569  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:12.840575  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:12.840639  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:12.877680  438716 cri.go:89] found id: ""
	I0819 19:16:12.877712  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.877721  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:12.877728  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:12.877779  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:12.912226  438716 cri.go:89] found id: ""
	I0819 19:16:12.912253  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.912264  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:12.912272  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:12.912342  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:12.953463  438716 cri.go:89] found id: ""
	I0819 19:16:12.953493  438716 logs.go:276] 0 containers: []
	W0819 19:16:12.953504  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:12.953524  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:12.953542  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:13.007648  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:13.007691  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:13.022452  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:13.022494  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:13.092411  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:13.092439  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:13.092455  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:13.168711  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:13.168750  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:12.903478  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:15.402551  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:12.693162  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:14.698051  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:17.193988  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:14.417821  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:16.418541  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:18.918478  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:15.711501  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:15.724841  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:15.724921  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:15.760120  438716 cri.go:89] found id: ""
	I0819 19:16:15.760149  438716 logs.go:276] 0 containers: []
	W0819 19:16:15.760158  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:15.760166  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:15.760234  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:15.794959  438716 cri.go:89] found id: ""
	I0819 19:16:15.794988  438716 logs.go:276] 0 containers: []
	W0819 19:16:15.794996  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:15.795002  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:15.795054  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:15.842776  438716 cri.go:89] found id: ""
	I0819 19:16:15.842804  438716 logs.go:276] 0 containers: []
	W0819 19:16:15.842814  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:15.842820  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:15.842874  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:15.882134  438716 cri.go:89] found id: ""
	I0819 19:16:15.882167  438716 logs.go:276] 0 containers: []
	W0819 19:16:15.882178  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:15.882187  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:15.882251  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:15.919296  438716 cri.go:89] found id: ""
	I0819 19:16:15.919325  438716 logs.go:276] 0 containers: []
	W0819 19:16:15.919336  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:15.919345  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:15.919409  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:15.956401  438716 cri.go:89] found id: ""
	I0819 19:16:15.956429  438716 logs.go:276] 0 containers: []
	W0819 19:16:15.956437  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:15.956444  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:15.956507  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:15.994271  438716 cri.go:89] found id: ""
	I0819 19:16:15.994304  438716 logs.go:276] 0 containers: []
	W0819 19:16:15.994314  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:15.994320  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:15.994378  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:16.033685  438716 cri.go:89] found id: ""
	I0819 19:16:16.033714  438716 logs.go:276] 0 containers: []
	W0819 19:16:16.033724  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:16.033736  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:16.033754  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:16.083929  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:16.083964  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:16.107309  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:16.107342  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:16.193657  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:16.193681  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:16.193697  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:16.276974  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:16.277016  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:18.818532  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:18.831586  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:18.831655  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:18.866663  438716 cri.go:89] found id: ""
	I0819 19:16:18.866689  438716 logs.go:276] 0 containers: []
	W0819 19:16:18.866700  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:18.866709  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:18.866769  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:18.900711  438716 cri.go:89] found id: ""
	I0819 19:16:18.900746  438716 logs.go:276] 0 containers: []
	W0819 19:16:18.900757  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:18.900765  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:18.900849  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:18.935156  438716 cri.go:89] found id: ""
	I0819 19:16:18.935179  438716 logs.go:276] 0 containers: []
	W0819 19:16:18.935186  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:18.935193  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:18.935246  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:18.973853  438716 cri.go:89] found id: ""
	I0819 19:16:18.973889  438716 logs.go:276] 0 containers: []
	W0819 19:16:18.973902  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:18.973911  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:18.973978  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:19.014212  438716 cri.go:89] found id: ""
	I0819 19:16:19.014241  438716 logs.go:276] 0 containers: []
	W0819 19:16:19.014250  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:19.014255  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:19.014317  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:19.056089  438716 cri.go:89] found id: ""
	I0819 19:16:19.056125  438716 logs.go:276] 0 containers: []
	W0819 19:16:19.056137  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:19.056146  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:19.056211  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:19.091372  438716 cri.go:89] found id: ""
	I0819 19:16:19.091399  438716 logs.go:276] 0 containers: []
	W0819 19:16:19.091411  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:19.091420  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:19.091478  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:19.129737  438716 cri.go:89] found id: ""
	I0819 19:16:19.129767  438716 logs.go:276] 0 containers: []
	W0819 19:16:19.129777  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:19.129787  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:19.129800  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:19.207325  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:19.207360  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:19.247780  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:19.247816  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:19.302496  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:19.302543  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:19.317706  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:19.317739  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:19.395029  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:17.901762  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:19.901818  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:19.195079  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:21.693863  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:21.418534  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:23.420217  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:21.895538  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:21.910595  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:21.910658  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:21.948363  438716 cri.go:89] found id: ""
	I0819 19:16:21.948398  438716 logs.go:276] 0 containers: []
	W0819 19:16:21.948410  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:21.948419  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:21.948492  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:21.983391  438716 cri.go:89] found id: ""
	I0819 19:16:21.983428  438716 logs.go:276] 0 containers: []
	W0819 19:16:21.983440  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:21.983449  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:21.983520  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:22.022383  438716 cri.go:89] found id: ""
	I0819 19:16:22.022415  438716 logs.go:276] 0 containers: []
	W0819 19:16:22.022427  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:22.022436  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:22.022493  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:22.060676  438716 cri.go:89] found id: ""
	I0819 19:16:22.060707  438716 logs.go:276] 0 containers: []
	W0819 19:16:22.060716  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:22.060725  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:22.060778  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:22.095188  438716 cri.go:89] found id: ""
	I0819 19:16:22.095218  438716 logs.go:276] 0 containers: []
	W0819 19:16:22.095227  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:22.095234  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:22.095300  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:22.131164  438716 cri.go:89] found id: ""
	I0819 19:16:22.131192  438716 logs.go:276] 0 containers: []
	W0819 19:16:22.131200  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:22.131209  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:22.131275  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:22.166539  438716 cri.go:89] found id: ""
	I0819 19:16:22.166566  438716 logs.go:276] 0 containers: []
	W0819 19:16:22.166573  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:22.166580  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:22.166643  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:22.205604  438716 cri.go:89] found id: ""
	I0819 19:16:22.205631  438716 logs.go:276] 0 containers: []
	W0819 19:16:22.205640  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:22.205649  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:22.205662  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:22.265650  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:22.265689  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:22.280401  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:22.280443  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:22.356818  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:22.356851  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:22.356872  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:22.437678  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:22.437719  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:24.979655  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:24.993462  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:24.993526  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:25.029955  438716 cri.go:89] found id: ""
	I0819 19:16:25.029983  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.029992  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:25.029999  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:25.030049  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:25.068478  438716 cri.go:89] found id: ""
	I0819 19:16:25.068507  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.068518  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:25.068527  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:25.068594  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:25.105209  438716 cri.go:89] found id: ""
	I0819 19:16:25.105238  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.105247  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:25.105256  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:25.105327  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:25.143166  438716 cri.go:89] found id: ""
	I0819 19:16:25.143203  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.143218  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:25.143225  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:25.143279  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:25.177993  438716 cri.go:89] found id: ""
	I0819 19:16:25.178023  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.178035  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:25.178044  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:25.178129  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:25.216473  438716 cri.go:89] found id: ""
	I0819 19:16:25.216501  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.216523  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:25.216540  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:25.216603  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:25.251454  438716 cri.go:89] found id: ""
	I0819 19:16:25.251486  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.251495  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:25.251501  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:25.251555  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:25.287145  438716 cri.go:89] found id: ""
	I0819 19:16:25.287179  438716 logs.go:276] 0 containers: []
	W0819 19:16:25.287188  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:25.287198  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:25.287210  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:25.371571  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:25.371619  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:25.418247  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:25.418277  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:25.472209  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:25.472248  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:25.486286  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:25.486315  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 19:16:21.902887  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:23.904358  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:26.403026  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:24.193797  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:26.194535  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:25.919371  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:28.418267  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	W0819 19:16:25.554470  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:28.055382  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:28.068750  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:28.068827  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:28.101856  438716 cri.go:89] found id: ""
	I0819 19:16:28.101891  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.101903  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:28.101912  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:28.101977  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:28.136402  438716 cri.go:89] found id: ""
	I0819 19:16:28.136437  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.136449  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:28.136460  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:28.136528  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:28.171766  438716 cri.go:89] found id: ""
	I0819 19:16:28.171795  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.171803  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:28.171809  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:28.171864  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:28.206228  438716 cri.go:89] found id: ""
	I0819 19:16:28.206256  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.206264  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:28.206272  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:28.206337  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:28.248877  438716 cri.go:89] found id: ""
	I0819 19:16:28.248912  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.248923  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:28.248931  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:28.249002  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:28.290160  438716 cri.go:89] found id: ""
	I0819 19:16:28.290201  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.290212  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:28.290221  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:28.290287  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:28.340413  438716 cri.go:89] found id: ""
	I0819 19:16:28.340445  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.340454  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:28.340461  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:28.340513  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:28.385486  438716 cri.go:89] found id: ""
	I0819 19:16:28.385513  438716 logs.go:276] 0 containers: []
	W0819 19:16:28.385521  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:28.385532  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:28.385544  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:28.441987  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:28.442029  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:28.456509  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:28.456538  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:28.527941  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:28.527976  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:28.527993  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:28.612696  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:28.612738  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:28.901312  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:30.901640  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:28.693578  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:30.693686  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:30.418811  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:32.919696  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:31.154773  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:31.168718  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:31.168789  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:31.205365  438716 cri.go:89] found id: ""
	I0819 19:16:31.205399  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.205411  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:31.205419  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:31.205496  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:31.238829  438716 cri.go:89] found id: ""
	I0819 19:16:31.238871  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.238879  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:31.238886  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:31.238936  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:31.273229  438716 cri.go:89] found id: ""
	I0819 19:16:31.273259  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.273304  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:31.273313  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:31.273377  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:31.309559  438716 cri.go:89] found id: ""
	I0819 19:16:31.309601  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.309613  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:31.309622  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:31.309689  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:31.344939  438716 cri.go:89] found id: ""
	I0819 19:16:31.344971  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.344981  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:31.344987  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:31.345043  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:31.382423  438716 cri.go:89] found id: ""
	I0819 19:16:31.382455  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.382468  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:31.382474  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:31.382525  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:31.420148  438716 cri.go:89] found id: ""
	I0819 19:16:31.420174  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.420184  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:31.420192  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:31.420262  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:31.455691  438716 cri.go:89] found id: ""
	I0819 19:16:31.455720  438716 logs.go:276] 0 containers: []
	W0819 19:16:31.455730  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:31.455740  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:31.455753  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:31.509501  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:31.509549  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:31.523650  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:31.523693  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:31.591535  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:31.591557  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:31.591574  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:31.674038  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:31.674077  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:34.216506  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:34.232782  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:34.232875  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:34.286103  438716 cri.go:89] found id: ""
	I0819 19:16:34.286136  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.286147  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:34.286156  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:34.286221  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:34.324193  438716 cri.go:89] found id: ""
	I0819 19:16:34.324220  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.324229  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:34.324235  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:34.324292  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:34.382777  438716 cri.go:89] found id: ""
	I0819 19:16:34.382804  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.382814  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:34.382822  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:34.382887  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:34.420714  438716 cri.go:89] found id: ""
	I0819 19:16:34.420743  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.420753  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:34.420771  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:34.420840  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:34.455338  438716 cri.go:89] found id: ""
	I0819 19:16:34.455369  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.455381  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:34.455391  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:34.455467  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:34.489528  438716 cri.go:89] found id: ""
	I0819 19:16:34.489566  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.489575  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:34.489581  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:34.489634  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:34.523830  438716 cri.go:89] found id: ""
	I0819 19:16:34.523857  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.523866  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:34.523873  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:34.523940  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:34.559023  438716 cri.go:89] found id: ""
	I0819 19:16:34.559052  438716 logs.go:276] 0 containers: []
	W0819 19:16:34.559063  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:34.559077  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:34.559092  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:34.639116  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:34.639159  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:34.675990  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:34.676017  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:34.730900  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:34.730935  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:34.744938  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:34.744964  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:34.816267  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:32.902138  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:35.401865  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:32.696537  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:35.192648  438245 pod_ready.go:103] pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:35.687633  438245 pod_ready.go:82] duration metric: took 4m0.000667446s for pod "metrics-server-6867b74b74-5hlnx" in "kube-system" namespace to be "Ready" ...
	E0819 19:16:35.687688  438245 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0819 19:16:35.687715  438245 pod_ready.go:39] duration metric: took 4m13.552784118s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:16:35.687770  438245 kubeadm.go:597] duration metric: took 4m20.936149722s to restartPrimaryControlPlane
	W0819 19:16:35.687875  438245 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 19:16:35.687929  438245 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 19:16:35.419327  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:37.420007  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:37.317314  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:37.331915  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:37.331982  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:37.370233  438716 cri.go:89] found id: ""
	I0819 19:16:37.370261  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.370269  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:37.370276  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:37.370343  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:37.409042  438716 cri.go:89] found id: ""
	I0819 19:16:37.409071  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.409082  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:37.409090  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:37.409161  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:37.445903  438716 cri.go:89] found id: ""
	I0819 19:16:37.445932  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.445941  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:37.445948  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:37.445999  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:37.484275  438716 cri.go:89] found id: ""
	I0819 19:16:37.484318  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.484328  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:37.484334  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:37.484393  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:37.528131  438716 cri.go:89] found id: ""
	I0819 19:16:37.528161  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.528174  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:37.528180  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:37.528243  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:37.563374  438716 cri.go:89] found id: ""
	I0819 19:16:37.563406  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.563414  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:37.563421  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:37.563473  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:37.597234  438716 cri.go:89] found id: ""
	I0819 19:16:37.597260  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.597267  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:37.597274  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:37.597329  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:37.634809  438716 cri.go:89] found id: ""
	I0819 19:16:37.634845  438716 logs.go:276] 0 containers: []
	W0819 19:16:37.634854  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:37.634864  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:37.634879  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:37.704354  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:37.704380  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:37.704396  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:37.788606  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:37.788646  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:37.830486  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:37.830513  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:37.890642  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:37.890681  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:40.405473  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:40.420019  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:40.420094  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:40.458558  438716 cri.go:89] found id: ""
	I0819 19:16:40.458586  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.458598  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:40.458606  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:40.458671  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:40.500353  438716 cri.go:89] found id: ""
	I0819 19:16:40.500379  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.500388  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:40.500394  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:40.500445  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:37.901881  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:39.902097  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:39.918877  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:41.919112  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:43.920092  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:40.534281  438716 cri.go:89] found id: ""
	I0819 19:16:40.534307  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.534316  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:40.534322  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:40.534379  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:40.569537  438716 cri.go:89] found id: ""
	I0819 19:16:40.569568  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.569578  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:40.569587  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:40.569654  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:40.603066  438716 cri.go:89] found id: ""
	I0819 19:16:40.603097  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.603110  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:40.603118  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:40.603171  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:40.637598  438716 cri.go:89] found id: ""
	I0819 19:16:40.637628  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.637637  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:40.637643  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:40.637704  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:40.673583  438716 cri.go:89] found id: ""
	I0819 19:16:40.673616  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.673629  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:40.673637  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:40.673692  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:40.708324  438716 cri.go:89] found id: ""
	I0819 19:16:40.708354  438716 logs.go:276] 0 containers: []
	W0819 19:16:40.708363  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:40.708373  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:40.708387  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:40.789743  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:40.789782  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:40.830849  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:40.830884  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:40.882662  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:40.882700  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:40.896843  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:40.896869  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:40.969491  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:43.470579  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:43.483791  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:43.483876  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:43.523764  438716 cri.go:89] found id: ""
	I0819 19:16:43.523797  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.523809  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:43.523817  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:43.523882  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:43.557925  438716 cri.go:89] found id: ""
	I0819 19:16:43.557953  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.557960  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:43.557966  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:43.558017  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:43.591324  438716 cri.go:89] found id: ""
	I0819 19:16:43.591355  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.591364  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:43.591370  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:43.591421  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:43.625798  438716 cri.go:89] found id: ""
	I0819 19:16:43.625826  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.625834  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:43.625840  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:43.625898  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:43.659787  438716 cri.go:89] found id: ""
	I0819 19:16:43.659815  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.659823  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:43.659830  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:43.659882  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:43.692982  438716 cri.go:89] found id: ""
	I0819 19:16:43.693008  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.693017  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:43.693024  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:43.693075  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:43.726059  438716 cri.go:89] found id: ""
	I0819 19:16:43.726092  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.726104  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:43.726113  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:43.726187  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:43.760906  438716 cri.go:89] found id: ""
	I0819 19:16:43.760947  438716 logs.go:276] 0 containers: []
	W0819 19:16:43.760958  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:43.760971  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:43.760994  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:43.812249  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:43.812285  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:43.826538  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:43.826566  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:43.894904  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:43.894926  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:43.894941  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:43.975746  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:43.975796  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:41.902398  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:43.902728  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:46.401834  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:46.419345  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:48.918688  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:46.515329  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:46.529088  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:46.529170  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:46.564525  438716 cri.go:89] found id: ""
	I0819 19:16:46.564557  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.564570  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:46.564578  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:46.564647  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:46.598457  438716 cri.go:89] found id: ""
	I0819 19:16:46.598485  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.598494  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:46.598499  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:46.598549  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:46.631767  438716 cri.go:89] found id: ""
	I0819 19:16:46.631798  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.631807  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:46.631814  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:46.631867  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:46.664978  438716 cri.go:89] found id: ""
	I0819 19:16:46.665013  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.665026  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:46.665034  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:46.665094  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:46.701024  438716 cri.go:89] found id: ""
	I0819 19:16:46.701052  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.701061  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:46.701067  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:46.701132  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:46.735834  438716 cri.go:89] found id: ""
	I0819 19:16:46.735874  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.735886  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:46.735894  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:46.735978  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:46.773392  438716 cri.go:89] found id: ""
	I0819 19:16:46.773426  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.773437  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:46.773445  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:46.773498  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:46.819800  438716 cri.go:89] found id: ""
	I0819 19:16:46.819829  438716 logs.go:276] 0 containers: []
	W0819 19:16:46.819841  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:46.819869  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:46.819889  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:46.860633  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:46.860669  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:46.911895  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:46.911936  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:46.927388  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:46.927422  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:46.998601  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:46.998628  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:46.998645  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:49.585303  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:49.598962  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:49.599032  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:49.631891  438716 cri.go:89] found id: ""
	I0819 19:16:49.631920  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.631931  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:49.631940  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:49.631998  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:49.671731  438716 cri.go:89] found id: ""
	I0819 19:16:49.671761  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.671777  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:49.671786  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:49.671846  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:49.707517  438716 cri.go:89] found id: ""
	I0819 19:16:49.707556  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.707568  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:49.707578  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:49.707651  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:49.744255  438716 cri.go:89] found id: ""
	I0819 19:16:49.744289  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.744299  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:49.744305  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:49.744357  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:49.779224  438716 cri.go:89] found id: ""
	I0819 19:16:49.779252  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.779259  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:49.779266  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:49.779322  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:49.815641  438716 cri.go:89] found id: ""
	I0819 19:16:49.815689  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.815701  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:49.815711  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:49.815769  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:49.851861  438716 cri.go:89] found id: ""
	I0819 19:16:49.851894  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.851906  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:49.851915  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:49.851984  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:49.888140  438716 cri.go:89] found id: ""
	I0819 19:16:49.888173  438716 logs.go:276] 0 containers: []
	W0819 19:16:49.888186  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:49.888199  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:49.888215  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:49.940389  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:49.940430  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:49.954519  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:49.954553  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:50.028462  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:50.028486  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:50.028502  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:50.108319  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:50.108362  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:48.901902  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:50.902702  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:50.919079  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:52.919271  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:52.647146  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:52.660468  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:52.660558  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:52.697665  438716 cri.go:89] found id: ""
	I0819 19:16:52.697703  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.697719  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:52.697727  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:52.697786  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:52.739169  438716 cri.go:89] found id: ""
	I0819 19:16:52.739203  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.739214  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:52.739222  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:52.739289  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:52.776580  438716 cri.go:89] found id: ""
	I0819 19:16:52.776610  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.776619  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:52.776630  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:52.776683  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:52.813443  438716 cri.go:89] found id: ""
	I0819 19:16:52.813475  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.813488  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:52.813497  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:52.813557  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:52.848035  438716 cri.go:89] found id: ""
	I0819 19:16:52.848064  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.848075  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:52.848082  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:52.848150  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:52.881814  438716 cri.go:89] found id: ""
	I0819 19:16:52.881841  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.881858  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:52.881867  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:52.881930  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:52.922179  438716 cri.go:89] found id: ""
	I0819 19:16:52.922202  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.922210  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:52.922216  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:52.922277  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:52.958110  438716 cri.go:89] found id: ""
	I0819 19:16:52.958136  438716 logs.go:276] 0 containers: []
	W0819 19:16:52.958144  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:52.958153  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:52.958167  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:53.008553  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:53.008592  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:53.022826  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:53.022860  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:53.094940  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:53.094967  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:53.094982  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:53.173877  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:53.173920  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:53.403382  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:55.905504  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:55.419297  438295 pod_ready.go:103] pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace has status "Ready":"False"
	I0819 19:16:55.419331  438295 pod_ready.go:82] duration metric: took 4m0.007107243s for pod "metrics-server-6867b74b74-kxcwh" in "kube-system" namespace to be "Ready" ...
	E0819 19:16:55.419345  438295 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0819 19:16:55.419355  438295 pod_ready.go:39] duration metric: took 4m4.316528467s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:16:55.419408  438295 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:16:55.419449  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:55.419499  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:55.466648  438295 cri.go:89] found id: "d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a"
	I0819 19:16:55.466679  438295 cri.go:89] found id: ""
	I0819 19:16:55.466690  438295 logs.go:276] 1 containers: [d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a]
	I0819 19:16:55.466758  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:16:55.471085  438295 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:55.471164  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:55.509883  438295 cri.go:89] found id: "a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672"
	I0819 19:16:55.509910  438295 cri.go:89] found id: ""
	I0819 19:16:55.509921  438295 logs.go:276] 1 containers: [a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672]
	I0819 19:16:55.509984  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:16:55.516866  438295 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:55.516954  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:55.560957  438295 cri.go:89] found id: "a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f"
	I0819 19:16:55.560988  438295 cri.go:89] found id: ""
	I0819 19:16:55.560999  438295 logs.go:276] 1 containers: [a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f]
	I0819 19:16:55.561065  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:16:55.565592  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:55.565662  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:55.610872  438295 cri.go:89] found id: "c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22"
	I0819 19:16:55.610905  438295 cri.go:89] found id: ""
	I0819 19:16:55.610914  438295 logs.go:276] 1 containers: [c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22]
	I0819 19:16:55.610976  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:16:55.615411  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:55.615486  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:55.652759  438295 cri.go:89] found id: "3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338"
	I0819 19:16:55.652792  438295 cri.go:89] found id: ""
	I0819 19:16:55.652807  438295 logs.go:276] 1 containers: [3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338]
	I0819 19:16:55.652873  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:16:55.657124  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:55.657190  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:55.699063  438295 cri.go:89] found id: "6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071"
	I0819 19:16:55.699085  438295 cri.go:89] found id: ""
	I0819 19:16:55.699093  438295 logs.go:276] 1 containers: [6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071]
	I0819 19:16:55.699145  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:16:55.703224  438295 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:55.703292  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:55.753166  438295 cri.go:89] found id: ""
	I0819 19:16:55.753198  438295 logs.go:276] 0 containers: []
	W0819 19:16:55.753210  438295 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:55.753218  438295 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 19:16:55.753286  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 19:16:55.803518  438295 cri.go:89] found id: "902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8"
	I0819 19:16:55.803551  438295 cri.go:89] found id: "44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6"
	I0819 19:16:55.803558  438295 cri.go:89] found id: ""
	I0819 19:16:55.803568  438295 logs.go:276] 2 containers: [902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8 44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6]
	I0819 19:16:55.803637  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:16:55.808063  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:16:55.812708  438295 logs.go:123] Gathering logs for container status ...
	I0819 19:16:55.812737  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:55.861697  438295 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:55.861736  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 19:16:55.911203  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.671901     936 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:16:55.911420  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672098     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	W0819 19:16:55.911603  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.672624     936 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:16:55.911834  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672667     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	I0819 19:16:55.949585  438295 logs.go:123] Gathering logs for kube-scheduler [c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22] ...
	I0819 19:16:55.949663  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22"
	I0819 19:16:55.995063  438295 logs.go:123] Gathering logs for kube-controller-manager [6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071] ...
	I0819 19:16:55.995100  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071"
	I0819 19:16:56.062320  438295 logs.go:123] Gathering logs for storage-provisioner [902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8] ...
	I0819 19:16:56.062376  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8"
	I0819 19:16:56.100112  438295 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:56.100152  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:56.589439  438295 logs.go:123] Gathering logs for kube-proxy [3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338] ...
	I0819 19:16:56.589486  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338"
	I0819 19:16:56.632096  438295 logs.go:123] Gathering logs for storage-provisioner [44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6] ...
	I0819 19:16:56.632132  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6"
	I0819 19:16:56.670952  438295 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:56.670984  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:56.685246  438295 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:56.685279  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 19:16:56.826418  438295 logs.go:123] Gathering logs for kube-apiserver [d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a] ...
	I0819 19:16:56.826456  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a"
	I0819 19:16:56.876901  438295 logs.go:123] Gathering logs for etcd [a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672] ...
	I0819 19:16:56.876944  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672"
	I0819 19:16:56.920390  438295 logs.go:123] Gathering logs for coredns [a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f] ...
	I0819 19:16:56.920423  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f"
	I0819 19:16:56.961691  438295 out.go:358] Setting ErrFile to fd 2...
	I0819 19:16:56.961718  438295 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 19:16:56.961793  438295 out.go:270] X Problems detected in kubelet:
	W0819 19:16:56.961805  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.671901     936 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:16:56.961824  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672098     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	W0819 19:16:56.961839  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.672624     936 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:16:56.961853  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672667     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	I0819 19:16:56.961884  438295 out.go:358] Setting ErrFile to fd 2...
	I0819 19:16:56.961893  438295 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:16:55.716096  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:55.734732  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:55.734817  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:55.780484  438716 cri.go:89] found id: ""
	I0819 19:16:55.780514  438716 logs.go:276] 0 containers: []
	W0819 19:16:55.780525  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:55.780534  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:55.780607  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:55.821755  438716 cri.go:89] found id: ""
	I0819 19:16:55.821778  438716 logs.go:276] 0 containers: []
	W0819 19:16:55.821786  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:55.821792  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:55.821855  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:55.861032  438716 cri.go:89] found id: ""
	I0819 19:16:55.861066  438716 logs.go:276] 0 containers: []
	W0819 19:16:55.861077  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:55.861086  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:55.861159  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:55.909978  438716 cri.go:89] found id: ""
	I0819 19:16:55.910004  438716 logs.go:276] 0 containers: []
	W0819 19:16:55.910015  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:55.910024  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:55.910087  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:55.956603  438716 cri.go:89] found id: ""
	I0819 19:16:55.956634  438716 logs.go:276] 0 containers: []
	W0819 19:16:55.956645  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:55.956653  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:55.956722  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:55.999176  438716 cri.go:89] found id: ""
	I0819 19:16:55.999203  438716 logs.go:276] 0 containers: []
	W0819 19:16:55.999216  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:55.999225  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:55.999286  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:56.035141  438716 cri.go:89] found id: ""
	I0819 19:16:56.035172  438716 logs.go:276] 0 containers: []
	W0819 19:16:56.035183  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:56.035192  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:56.035255  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:56.076152  438716 cri.go:89] found id: ""
	I0819 19:16:56.076185  438716 logs.go:276] 0 containers: []
	W0819 19:16:56.076197  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:56.076209  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:56.076226  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:56.136624  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:56.136671  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:56.151867  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:56.151902  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:56.231650  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:56.231696  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:56.231713  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:56.307203  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:56.307247  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:58.848295  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:16:58.861984  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:16:58.862172  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:16:58.900089  438716 cri.go:89] found id: ""
	I0819 19:16:58.900114  438716 logs.go:276] 0 containers: []
	W0819 19:16:58.900124  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:16:58.900132  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:16:58.900203  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:16:58.932528  438716 cri.go:89] found id: ""
	I0819 19:16:58.932551  438716 logs.go:276] 0 containers: []
	W0819 19:16:58.932559  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:16:58.932565  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:16:58.932618  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:16:58.967255  438716 cri.go:89] found id: ""
	I0819 19:16:58.967283  438716 logs.go:276] 0 containers: []
	W0819 19:16:58.967291  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:16:58.967298  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:16:58.967349  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:16:59.000887  438716 cri.go:89] found id: ""
	I0819 19:16:59.000923  438716 logs.go:276] 0 containers: []
	W0819 19:16:59.000934  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:16:59.000942  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:16:59.001009  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:16:59.041386  438716 cri.go:89] found id: ""
	I0819 19:16:59.041417  438716 logs.go:276] 0 containers: []
	W0819 19:16:59.041428  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:16:59.041436  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:16:59.041499  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:16:59.080036  438716 cri.go:89] found id: ""
	I0819 19:16:59.080078  438716 logs.go:276] 0 containers: []
	W0819 19:16:59.080090  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:16:59.080099  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:16:59.080168  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:16:59.113946  438716 cri.go:89] found id: ""
	I0819 19:16:59.113982  438716 logs.go:276] 0 containers: []
	W0819 19:16:59.113995  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:16:59.114004  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:16:59.114066  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:16:59.155413  438716 cri.go:89] found id: ""
	I0819 19:16:59.155437  438716 logs.go:276] 0 containers: []
	W0819 19:16:59.155446  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:16:59.155456  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:16:59.155477  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:16:59.223795  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:16:59.223815  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:16:59.223828  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:16:59.304516  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:16:59.304554  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:16:59.344975  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:16:59.345005  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:16:59.397751  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:16:59.397789  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:16:58.402453  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:00.901494  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:02.043611  438245 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.355651212s)
	I0819 19:17:02.043735  438245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:17:02.066981  438245 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 19:17:02.083179  438245 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:17:02.100807  438245 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:17:02.100829  438245 kubeadm.go:157] found existing configuration files:
	
	I0819 19:17:02.100877  438245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0819 19:17:02.116462  438245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:17:02.116534  438245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:17:02.127313  438245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0819 19:17:02.147096  438245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:17:02.147170  438245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:17:02.159262  438245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0819 19:17:02.168825  438245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:17:02.168918  438245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:17:02.179354  438245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0819 19:17:02.188982  438245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:17:02.189051  438245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 19:17:02.199291  438245 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 19:17:01.914433  438716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:17:01.927468  438716 kubeadm.go:597] duration metric: took 4m3.453401239s to restartPrimaryControlPlane
	W0819 19:17:01.927564  438716 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 19:17:01.927600  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 19:17:02.647971  438716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:17:02.665946  438716 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 19:17:02.676665  438716 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:17:02.686818  438716 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:17:02.686840  438716 kubeadm.go:157] found existing configuration files:
	
	I0819 19:17:02.686885  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 19:17:02.697160  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:17:02.697228  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:17:02.707774  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 19:17:02.717251  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:17:02.717310  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:17:02.727481  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 19:17:02.738085  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:17:02.738141  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:17:02.749286  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 19:17:02.759965  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:17:02.760025  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 19:17:02.770753  438716 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 19:17:02.835857  438716 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 19:17:02.835940  438716 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 19:17:02.983775  438716 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 19:17:02.983974  438716 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 19:17:02.984149  438716 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 19:17:03.173404  438716 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 19:17:03.175412  438716 out.go:235]   - Generating certificates and keys ...
	I0819 19:17:03.175520  438716 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 19:17:03.175659  438716 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 19:17:03.175805  438716 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 19:17:03.175913  438716 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 19:17:03.176021  438716 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 19:17:03.176125  438716 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 19:17:03.176626  438716 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 19:17:03.177624  438716 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 19:17:03.178399  438716 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 19:17:03.179325  438716 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 19:17:03.179599  438716 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 19:17:03.179702  438716 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 19:17:03.416467  438716 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 19:17:03.505378  438716 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 19:17:03.588959  438716 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 19:17:03.680602  438716 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 19:17:03.697717  438716 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 19:17:03.700436  438716 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 19:17:03.700579  438716 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 19:17:03.858804  438716 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 19:17:03.861395  438716 out.go:235]   - Booting up control plane ...
	I0819 19:17:03.861520  438716 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 19:17:03.877387  438716 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 19:17:03.878611  438716 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 19:17:03.882842  438716 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 19:17:03.887436  438716 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 19:17:02.902839  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:05.402376  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:02.248409  438245 kubeadm.go:310] W0819 19:17:02.217617    2563 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 19:17:02.250447  438245 kubeadm.go:310] W0819 19:17:02.219827    2563 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 19:17:02.377127  438245 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 19:17:06.962848  438295 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:17:06.984774  438295 api_server.go:72] duration metric: took 4m23.117653428s to wait for apiserver process to appear ...
	I0819 19:17:06.984811  438295 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:17:06.984865  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:17:06.984939  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:17:07.025158  438295 cri.go:89] found id: "d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a"
	I0819 19:17:07.025201  438295 cri.go:89] found id: ""
	I0819 19:17:07.025213  438295 logs.go:276] 1 containers: [d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a]
	I0819 19:17:07.025287  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:07.032365  438295 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:17:07.032446  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:17:07.073368  438295 cri.go:89] found id: "a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672"
	I0819 19:17:07.073394  438295 cri.go:89] found id: ""
	I0819 19:17:07.073403  438295 logs.go:276] 1 containers: [a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672]
	I0819 19:17:07.073463  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:07.078781  438295 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:17:07.078891  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:17:07.123263  438295 cri.go:89] found id: "a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f"
	I0819 19:17:07.123293  438295 cri.go:89] found id: ""
	I0819 19:17:07.123303  438295 logs.go:276] 1 containers: [a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f]
	I0819 19:17:07.123365  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:07.128485  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:17:07.128579  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:17:07.167105  438295 cri.go:89] found id: "c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22"
	I0819 19:17:07.167137  438295 cri.go:89] found id: ""
	I0819 19:17:07.167148  438295 logs.go:276] 1 containers: [c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22]
	I0819 19:17:07.167215  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:07.171571  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:17:07.171641  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:17:07.215524  438295 cri.go:89] found id: "3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338"
	I0819 19:17:07.215547  438295 cri.go:89] found id: ""
	I0819 19:17:07.215555  438295 logs.go:276] 1 containers: [3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338]
	I0819 19:17:07.215621  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:07.221604  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:17:07.221676  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:17:07.263106  438295 cri.go:89] found id: "6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071"
	I0819 19:17:07.263140  438295 cri.go:89] found id: ""
	I0819 19:17:07.263149  438295 logs.go:276] 1 containers: [6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071]
	I0819 19:17:07.263209  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:07.267703  438295 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:17:07.267770  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:17:07.316006  438295 cri.go:89] found id: ""
	I0819 19:17:07.316042  438295 logs.go:276] 0 containers: []
	W0819 19:17:07.316054  438295 logs.go:278] No container was found matching "kindnet"
	I0819 19:17:07.316062  438295 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 19:17:07.316132  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 19:17:07.361100  438295 cri.go:89] found id: "902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8"
	I0819 19:17:07.361123  438295 cri.go:89] found id: "44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6"
	I0819 19:17:07.361126  438295 cri.go:89] found id: ""
	I0819 19:17:07.361133  438295 logs.go:276] 2 containers: [902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8 44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6]
	I0819 19:17:07.361190  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:07.366949  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:07.372724  438295 logs.go:123] Gathering logs for kubelet ...
	I0819 19:17:07.372748  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 19:17:07.413540  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.671901     936 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:17:07.413722  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672098     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	W0819 19:17:07.413858  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.672624     936 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:17:07.414017  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672667     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	I0819 19:17:07.452061  438295 logs.go:123] Gathering logs for coredns [a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f] ...
	I0819 19:17:07.452104  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f"
	I0819 19:17:07.490598  438295 logs.go:123] Gathering logs for kube-scheduler [c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22] ...
	I0819 19:17:07.490636  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22"
	I0819 19:17:07.530454  438295 logs.go:123] Gathering logs for kube-proxy [3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338] ...
	I0819 19:17:07.530486  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338"
	I0819 19:17:07.581488  438295 logs.go:123] Gathering logs for storage-provisioner [902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8] ...
	I0819 19:17:07.581528  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8"
	I0819 19:17:07.621752  438295 logs.go:123] Gathering logs for storage-provisioner [44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6] ...
	I0819 19:17:07.621787  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6"
	I0819 19:17:07.661330  438295 logs.go:123] Gathering logs for container status ...
	I0819 19:17:07.661365  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:17:07.709227  438295 logs.go:123] Gathering logs for dmesg ...
	I0819 19:17:07.709261  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:17:07.724634  438295 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:17:07.724670  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 19:17:07.850212  438295 logs.go:123] Gathering logs for kube-apiserver [d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a] ...
	I0819 19:17:07.850247  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a"
	I0819 19:17:07.894464  438295 logs.go:123] Gathering logs for etcd [a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672] ...
	I0819 19:17:07.894507  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672"
	I0819 19:17:07.943807  438295 logs.go:123] Gathering logs for kube-controller-manager [6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071] ...
	I0819 19:17:07.943841  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071"
	I0819 19:17:08.007428  438295 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:17:08.007463  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:17:08.487397  438295 out.go:358] Setting ErrFile to fd 2...
	I0819 19:17:08.487435  438295 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 19:17:08.487518  438295 out.go:270] X Problems detected in kubelet:
	W0819 19:17:08.487534  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.671901     936 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:17:08.487546  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672098     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	W0819 19:17:08.487560  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.672624     936 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:17:08.487574  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672667     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	I0819 19:17:08.487584  438295 out.go:358] Setting ErrFile to fd 2...
	I0819 19:17:08.487598  438295 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:17:10.237580  438245 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 19:17:10.237675  438245 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 19:17:10.237792  438245 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 19:17:10.237934  438245 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 19:17:10.238088  438245 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 19:17:10.238194  438245 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 19:17:10.239873  438245 out.go:235]   - Generating certificates and keys ...
	I0819 19:17:10.239957  438245 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 19:17:10.240051  438245 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 19:17:10.240187  438245 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 19:17:10.240294  438245 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 19:17:10.240410  438245 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 19:17:10.240495  438245 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 19:17:10.240598  438245 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 19:17:10.240680  438245 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 19:17:10.240747  438245 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 19:17:10.240843  438245 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 19:17:10.240886  438245 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 19:17:10.240958  438245 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 19:17:10.241024  438245 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 19:17:10.241094  438245 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 19:17:10.241159  438245 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 19:17:10.241248  438245 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 19:17:10.241328  438245 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 19:17:10.241431  438245 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 19:17:10.241535  438245 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 19:17:10.243764  438245 out.go:235]   - Booting up control plane ...
	I0819 19:17:10.243859  438245 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 19:17:10.243934  438245 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 19:17:10.243994  438245 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 19:17:10.244131  438245 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 19:17:10.244263  438245 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 19:17:10.244301  438245 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 19:17:10.244458  438245 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 19:17:10.244611  438245 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 19:17:10.244685  438245 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.412341ms
	I0819 19:17:10.244770  438245 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 19:17:10.244850  438245 kubeadm.go:310] [api-check] The API server is healthy after 5.002047877s
	I0819 19:17:10.244953  438245 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 19:17:10.245093  438245 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 19:17:10.245199  438245 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 19:17:10.245400  438245 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-982795 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 19:17:10.245465  438245 kubeadm.go:310] [bootstrap-token] Using token: trsfx5.kx2phd1605yhia2w
	I0819 19:17:10.247722  438245 out.go:235]   - Configuring RBAC rules ...
	I0819 19:17:10.247861  438245 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 19:17:10.247955  438245 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 19:17:10.248144  438245 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 19:17:10.248264  438245 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 19:17:10.248379  438245 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 19:17:10.248468  438245 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 19:17:10.248567  438245 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 19:17:10.248612  438245 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 19:17:10.248654  438245 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 19:17:10.248660  438245 kubeadm.go:310] 
	I0819 19:17:10.248708  438245 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 19:17:10.248713  438245 kubeadm.go:310] 
	I0819 19:17:10.248779  438245 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 19:17:10.248786  438245 kubeadm.go:310] 
	I0819 19:17:10.248806  438245 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 19:17:10.248866  438245 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 19:17:10.248910  438245 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 19:17:10.248916  438245 kubeadm.go:310] 
	I0819 19:17:10.248966  438245 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 19:17:10.248972  438245 kubeadm.go:310] 
	I0819 19:17:10.249014  438245 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 19:17:10.249024  438245 kubeadm.go:310] 
	I0819 19:17:10.249069  438245 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 19:17:10.249136  438245 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 19:17:10.249209  438245 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 19:17:10.249221  438245 kubeadm.go:310] 
	I0819 19:17:10.249319  438245 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 19:17:10.249386  438245 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 19:17:10.249392  438245 kubeadm.go:310] 
	I0819 19:17:10.249464  438245 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token trsfx5.kx2phd1605yhia2w \
	I0819 19:17:10.249553  438245 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3fcbd90565c5acbc36a47b2db682cb22dce9b172c9bf3af21e506ebb67608039 \
	I0819 19:17:10.249575  438245 kubeadm.go:310] 	--control-plane 
	I0819 19:17:10.249581  438245 kubeadm.go:310] 
	I0819 19:17:10.249658  438245 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 19:17:10.249664  438245 kubeadm.go:310] 
	I0819 19:17:10.249734  438245 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token trsfx5.kx2phd1605yhia2w \
	I0819 19:17:10.249833  438245 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3fcbd90565c5acbc36a47b2db682cb22dce9b172c9bf3af21e506ebb67608039 
	I0819 19:17:10.249849  438245 cni.go:84] Creating CNI manager for ""
	I0819 19:17:10.249857  438245 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 19:17:10.252133  438245 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 19:17:07.403590  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:09.901861  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:10.253419  438245 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 19:17:10.264266  438245 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 19:17:10.289509  438245 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 19:17:10.289661  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-982795 minikube.k8s.io/updated_at=2024_08_19T19_17_10_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411 minikube.k8s.io/name=default-k8s-diff-port-982795 minikube.k8s.io/primary=true
	I0819 19:17:10.289663  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:10.322738  438245 ops.go:34] apiserver oom_adj: -16
	I0819 19:17:10.519946  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:11.020736  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:11.520925  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:12.020276  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:12.520277  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:13.020787  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:13.520048  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:14.020893  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:14.520869  438245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 19:17:14.642214  438245 kubeadm.go:1113] duration metric: took 4.352638211s to wait for elevateKubeSystemPrivileges
	I0819 19:17:14.642251  438245 kubeadm.go:394] duration metric: took 4m59.943476935s to StartCluster
	I0819 19:17:14.642295  438245 settings.go:142] acquiring lock: {Name:mk396fcf49a1d0e69583cf37ff3c819e37118163 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:17:14.642382  438245 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 19:17:14.644103  438245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/kubeconfig: {Name:mk8e7b4e1bb7da665111d2acd83eb48882c66853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 19:17:14.644408  438245 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.48 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 19:17:14.644550  438245 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 19:17:14.644641  438245 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-982795"
	I0819 19:17:14.644665  438245 config.go:182] Loaded profile config "default-k8s-diff-port-982795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 19:17:14.644687  438245 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-982795"
	W0819 19:17:14.644701  438245 addons.go:243] addon storage-provisioner should already be in state true
	I0819 19:17:14.644712  438245 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-982795"
	I0819 19:17:14.644735  438245 host.go:66] Checking if "default-k8s-diff-port-982795" exists ...
	I0819 19:17:14.644757  438245 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-982795"
	W0819 19:17:14.644770  438245 addons.go:243] addon metrics-server should already be in state true
	I0819 19:17:14.644678  438245 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-982795"
	I0819 19:17:14.644852  438245 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-982795"
	I0819 19:17:14.644797  438245 host.go:66] Checking if "default-k8s-diff-port-982795" exists ...
	I0819 19:17:14.645125  438245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:17:14.645176  438245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:17:14.645272  438245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:17:14.645291  438245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:17:14.645355  438245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:17:14.645401  438245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:17:14.646083  438245 out.go:177] * Verifying Kubernetes components...
	I0819 19:17:14.647579  438245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 19:17:14.662756  438245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42581
	I0819 19:17:14.663407  438245 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:17:14.664088  438245 main.go:141] libmachine: Using API Version  1
	I0819 19:17:14.664117  438245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:17:14.664528  438245 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:17:14.665189  438245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:17:14.665222  438245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:17:14.665665  438245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43637
	I0819 19:17:14.665842  438245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44021
	I0819 19:17:14.666204  438245 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:17:14.666321  438245 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:17:14.666761  438245 main.go:141] libmachine: Using API Version  1
	I0819 19:17:14.666783  438245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:17:14.666955  438245 main.go:141] libmachine: Using API Version  1
	I0819 19:17:14.666979  438245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:17:14.667173  438245 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:17:14.667363  438245 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:17:14.667592  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetState
	I0819 19:17:14.667786  438245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:17:14.667818  438245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:17:14.671231  438245 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-982795"
	W0819 19:17:14.671249  438245 addons.go:243] addon default-storageclass should already be in state true
	I0819 19:17:14.671273  438245 host.go:66] Checking if "default-k8s-diff-port-982795" exists ...
	I0819 19:17:14.671507  438245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:17:14.671533  438245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:17:14.682996  438245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36593
	I0819 19:17:14.683560  438245 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:17:14.684268  438245 main.go:141] libmachine: Using API Version  1
	I0819 19:17:14.684292  438245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:17:14.684686  438245 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:17:14.684899  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetState
	I0819 19:17:14.686943  438245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44459
	I0819 19:17:14.687384  438245 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:17:14.687309  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:17:14.687874  438245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46587
	I0819 19:17:14.687965  438245 main.go:141] libmachine: Using API Version  1
	I0819 19:17:14.687980  438245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:17:14.688367  438245 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:17:14.688420  438245 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:17:14.688623  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetState
	I0819 19:17:14.689039  438245 main.go:141] libmachine: Using API Version  1
	I0819 19:17:14.689362  438245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:17:14.689690  438245 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 19:17:14.690179  438245 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:17:14.690626  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:17:14.690789  438245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 19:17:14.690823  438245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 19:17:14.690938  438245 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:17:14.690958  438245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 19:17:14.690979  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:17:14.692114  438245 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 19:17:11.902284  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:13.903205  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:16.402298  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:14.693147  438245 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 19:17:14.693163  438245 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 19:17:14.693182  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:17:14.694601  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:17:14.695302  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:17:14.695333  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:17:14.695541  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:17:14.695760  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:17:14.696133  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:17:14.696303  438245 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa Username:docker}
	I0819 19:17:14.696554  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:17:14.696979  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:17:14.697003  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:17:14.697110  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:17:14.697274  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:17:14.697445  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:17:14.697578  438245 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa Username:docker}
	I0819 19:17:14.708592  438245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38807
	I0819 19:17:14.709140  438245 main.go:141] libmachine: () Calling .GetVersion
	I0819 19:17:14.709716  438245 main.go:141] libmachine: Using API Version  1
	I0819 19:17:14.709737  438245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 19:17:14.710049  438245 main.go:141] libmachine: () Calling .GetMachineName
	I0819 19:17:14.710269  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetState
	I0819 19:17:14.711887  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .DriverName
	I0819 19:17:14.712147  438245 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 19:17:14.712162  438245 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 19:17:14.712179  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHHostname
	I0819 19:17:14.715593  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:17:14.716040  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:19:cd", ip: ""} in network mk-default-k8s-diff-port-982795: {Iface:virbr3 ExpiryTime:2024-08-19 20:12:00 +0000 UTC Type:0 Mac:52:54:00:d4:19:cd Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:default-k8s-diff-port-982795 Clientid:01:52:54:00:d4:19:cd}
	I0819 19:17:14.716062  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | domain default-k8s-diff-port-982795 has defined IP address 192.168.61.48 and MAC address 52:54:00:d4:19:cd in network mk-default-k8s-diff-port-982795
	I0819 19:17:14.716384  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHPort
	I0819 19:17:14.716561  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHKeyPath
	I0819 19:17:14.716710  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .GetSSHUsername
	I0819 19:17:14.716938  438245 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/default-k8s-diff-port-982795/id_rsa Username:docker}
	I0819 19:17:14.874857  438245 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 19:17:14.903798  438245 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-982795" to be "Ready" ...
	I0819 19:17:14.919842  438245 node_ready.go:49] node "default-k8s-diff-port-982795" has status "Ready":"True"
	I0819 19:17:14.919866  438245 node_ready.go:38] duration metric: took 16.039402ms for node "default-k8s-diff-port-982795" to be "Ready" ...
	I0819 19:17:14.919877  438245 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:17:14.932785  438245 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-845gx" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:15.019664  438245 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 19:17:15.019718  438245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 19:17:15.030317  438245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 19:17:15.056177  438245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 19:17:15.074202  438245 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 19:17:15.074235  438245 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 19:17:15.127037  438245 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 19:17:15.127071  438245 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 19:17:15.217951  438245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 19:17:15.351034  438245 main.go:141] libmachine: Making call to close driver server
	I0819 19:17:15.351067  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .Close
	I0819 19:17:15.351398  438245 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:17:15.351417  438245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:17:15.351429  438245 main.go:141] libmachine: Making call to close driver server
	I0819 19:17:15.351441  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .Close
	I0819 19:17:15.351678  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | Closing plugin on server side
	I0819 19:17:15.351728  438245 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:17:15.351750  438245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:17:15.357999  438245 main.go:141] libmachine: Making call to close driver server
	I0819 19:17:15.358023  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .Close
	I0819 19:17:15.358291  438245 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:17:15.358316  438245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:17:16.196638  438245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.140417152s)
	I0819 19:17:16.196694  438245 main.go:141] libmachine: Making call to close driver server
	I0819 19:17:16.196707  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .Close
	I0819 19:17:16.197022  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | Closing plugin on server side
	I0819 19:17:16.197112  438245 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:17:16.197137  438245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:17:16.197157  438245 main.go:141] libmachine: Making call to close driver server
	I0819 19:17:16.197167  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .Close
	I0819 19:17:16.197449  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | Closing plugin on server side
	I0819 19:17:16.197493  438245 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:17:16.197505  438245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:17:16.638069  438245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.42006496s)
	I0819 19:17:16.638141  438245 main.go:141] libmachine: Making call to close driver server
	I0819 19:17:16.638159  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .Close
	I0819 19:17:16.638488  438245 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:17:16.638518  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | Closing plugin on server side
	I0819 19:17:16.638529  438245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:17:16.638564  438245 main.go:141] libmachine: Making call to close driver server
	I0819 19:17:16.638574  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) Calling .Close
	I0819 19:17:16.638861  438245 main.go:141] libmachine: (default-k8s-diff-port-982795) DBG | Closing plugin on server side
	I0819 19:17:16.638896  438245 main.go:141] libmachine: Successfully made call to close driver server
	I0819 19:17:16.638904  438245 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 19:17:16.638915  438245 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-982795"
	I0819 19:17:16.641476  438245 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0819 19:17:16.642733  438245 addons.go:510] duration metric: took 1.998196502s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0819 19:17:16.954631  438245 pod_ready.go:103] pod "coredns-6f6b679f8f-845gx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:18.489333  438295 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0819 19:17:18.494609  438295 api_server.go:279] https://192.168.72.96:8443/healthz returned 200:
	ok
	I0819 19:17:18.495587  438295 api_server.go:141] control plane version: v1.31.0
	I0819 19:17:18.495613  438295 api_server.go:131] duration metric: took 11.510793296s to wait for apiserver health ...
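The healthz wait recorded above is just repeated HTTPS GETs against the control-plane endpoint until it answers 200 "ok". Purely as an illustration (not minikube's own api_server.go implementation), a minimal Go sketch of an equivalent probe, assuming the 192.168.72.96:8443 endpoint shown in the log and skipping TLS verification only to keep the sketch self-contained:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Endpoint taken from the log above; a real check would trust the
	// cluster CA from the kubeconfig instead of skipping verification.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.72.96:8443/healthz")
	if err != nil {
		fmt.Println("healthz request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // the log above shows "200 ok"
}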
	I0819 19:17:18.495624  438295 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:17:18.495656  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:17:18.495735  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:17:18.540446  438295 cri.go:89] found id: "d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a"
	I0819 19:17:18.540477  438295 cri.go:89] found id: ""
	I0819 19:17:18.540487  438295 logs.go:276] 1 containers: [d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a]
	I0819 19:17:18.540555  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:18.551443  438295 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:17:18.551527  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:17:18.592388  438295 cri.go:89] found id: "a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672"
	I0819 19:17:18.592416  438295 cri.go:89] found id: ""
	I0819 19:17:18.592427  438295 logs.go:276] 1 containers: [a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672]
	I0819 19:17:18.592495  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:18.597534  438295 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:17:18.597615  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:17:18.637782  438295 cri.go:89] found id: "a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f"
	I0819 19:17:18.637804  438295 cri.go:89] found id: ""
	I0819 19:17:18.637812  438295 logs.go:276] 1 containers: [a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f]
	I0819 19:17:18.637861  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:18.642557  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:17:18.642618  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:17:18.679573  438295 cri.go:89] found id: "c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22"
	I0819 19:17:18.679597  438295 cri.go:89] found id: ""
	I0819 19:17:18.679605  438295 logs.go:276] 1 containers: [c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22]
	I0819 19:17:18.679657  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:18.684160  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:17:18.684230  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:17:18.726848  438295 cri.go:89] found id: "3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338"
	I0819 19:17:18.726881  438295 cri.go:89] found id: ""
	I0819 19:17:18.726889  438295 logs.go:276] 1 containers: [3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338]
	I0819 19:17:18.726943  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:18.731422  438295 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:17:18.731484  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:17:18.773623  438295 cri.go:89] found id: "6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071"
	I0819 19:17:18.773649  438295 cri.go:89] found id: ""
	I0819 19:17:18.773658  438295 logs.go:276] 1 containers: [6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071]
	I0819 19:17:18.773709  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:18.779609  438295 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:17:18.779687  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:17:18.822876  438295 cri.go:89] found id: ""
	I0819 19:17:18.822911  438295 logs.go:276] 0 containers: []
	W0819 19:17:18.822922  438295 logs.go:278] No container was found matching "kindnet"
	I0819 19:17:18.822931  438295 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 19:17:18.822998  438295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 19:17:18.868653  438295 cri.go:89] found id: "902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8"
	I0819 19:17:18.868685  438295 cri.go:89] found id: "44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6"
	I0819 19:17:18.868691  438295 cri.go:89] found id: ""
	I0819 19:17:18.868701  438295 logs.go:276] 2 containers: [902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8 44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6]
	I0819 19:17:18.868776  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:18.873136  438295 ssh_runner.go:195] Run: which crictl
	I0819 19:17:18.877397  438295 logs.go:123] Gathering logs for kube-proxy [3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338] ...
	I0819 19:17:18.877425  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e23a8501fe9333693618c26b918ed665ca9f2ea955dfc771ddbd90f4af91338"
	I0819 19:17:18.918085  438295 logs.go:123] Gathering logs for kube-controller-manager [6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071] ...
	I0819 19:17:18.918118  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e6dab43bac16fb6a2155177fd2cb01da57c882a322ae89145bc332c50c87071"
	I0819 19:17:18.973344  438295 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:17:18.973378  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:17:18.901539  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:20.902550  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:19.440295  438245 pod_ready.go:103] pod "coredns-6f6b679f8f-845gx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:21.939652  438245 pod_ready.go:103] pod "coredns-6f6b679f8f-845gx" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:19.443625  438295 logs.go:123] Gathering logs for container status ...
	I0819 19:17:19.443689  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:17:19.492650  438295 logs.go:123] Gathering logs for dmesg ...
	I0819 19:17:19.492696  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:17:19.507957  438295 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:17:19.507996  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 19:17:19.617295  438295 logs.go:123] Gathering logs for coredns [a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f] ...
	I0819 19:17:19.617341  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6bc5b24f616e32fdffb80b6ed0201250b02f143c8217d56ef90dc55551d709f"
	I0819 19:17:19.669869  438295 logs.go:123] Gathering logs for kube-scheduler [c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22] ...
	I0819 19:17:19.669930  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c09c2a3840c6b84c4d187a5b4938f1e79c515609ad3ff7077a163e94acd5fc22"
	I0819 19:17:19.706649  438295 logs.go:123] Gathering logs for storage-provisioner [44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6] ...
	I0819 19:17:19.706681  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44a4290db8405288dc877d1dbfa8f1a4976cb6221431aef419db3cdff822d3b6"
	I0819 19:17:19.746742  438295 logs.go:123] Gathering logs for kubelet ...
	I0819 19:17:19.746780  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 19:17:19.796224  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.671901     936 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:17:19.796442  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672098     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	W0819 19:17:19.796622  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.672624     936 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:17:19.796845  438295 logs.go:138] Found kubelet problem: Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672667     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	I0819 19:17:19.836283  438295 logs.go:123] Gathering logs for kube-apiserver [d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a] ...
	I0819 19:17:19.836328  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d66ad075c652a3b446078444a32327c07459f74199be8f89197067dbad566d5a"
	I0819 19:17:19.889829  438295 logs.go:123] Gathering logs for etcd [a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672] ...
	I0819 19:17:19.889875  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3cb2c04e3eb3398fa324b660ca1864f22175cbf41fd84eae34a24ce7928b672"
	I0819 19:17:19.938361  438295 logs.go:123] Gathering logs for storage-provisioner [902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8] ...
	I0819 19:17:19.938397  438295 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 902796698c02b97c3f50f231cba5dfbc00bc7e8344f104fe7a36109e1d10a4f8"
	I0819 19:17:19.978525  438295 out.go:358] Setting ErrFile to fd 2...
	I0819 19:17:19.978557  438295 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 19:17:19.978628  438295 out.go:270] X Problems detected in kubelet:
	W0819 19:17:19.978642  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.671901     936 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:17:19.978656  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672098     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	W0819 19:17:19.978669  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: W0819 19:12:40.672624     936 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-024748" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-024748' and this object
	W0819 19:17:19.978680  438295 out.go:270]   Aug 19 19:12:40 embed-certs-024748 kubelet[936]: E0819 19:12:40.672667     936 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-024748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-024748' and this object" logger="UnhandledError"
	I0819 19:17:19.978690  438295 out.go:358] Setting ErrFile to fd 2...
	I0819 19:17:19.978699  438295 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 19:17:23.941399  438245 pod_ready.go:93] pod "coredns-6f6b679f8f-845gx" in "kube-system" namespace has status "Ready":"True"
	I0819 19:17:23.941426  438245 pod_ready.go:82] duration metric: took 9.00859927s for pod "coredns-6f6b679f8f-845gx" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.941438  438245 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-tlxtt" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.946827  438245 pod_ready.go:93] pod "coredns-6f6b679f8f-tlxtt" in "kube-system" namespace has status "Ready":"True"
	I0819 19:17:23.946848  438245 pod_ready.go:82] duration metric: took 5.40058ms for pod "coredns-6f6b679f8f-tlxtt" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.946859  438245 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.956158  438245 pod_ready.go:93] pod "etcd-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"True"
	I0819 19:17:23.956181  438245 pod_ready.go:82] duration metric: took 9.312871ms for pod "etcd-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.956193  438245 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.962573  438245 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"True"
	I0819 19:17:23.962595  438245 pod_ready.go:82] duration metric: took 6.3934ms for pod "kube-apiserver-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.962607  438245 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.968186  438245 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"True"
	I0819 19:17:23.968206  438245 pod_ready.go:82] duration metric: took 5.591464ms for pod "kube-controller-manager-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:23.968214  438245 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2v4hk" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:24.337409  438245 pod_ready.go:93] pod "kube-proxy-2v4hk" in "kube-system" namespace has status "Ready":"True"
	I0819 19:17:24.337443  438245 pod_ready.go:82] duration metric: took 369.220318ms for pod "kube-proxy-2v4hk" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:24.337460  438245 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:24.737326  438245 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-982795" in "kube-system" namespace has status "Ready":"True"
	I0819 19:17:24.737362  438245 pod_ready.go:82] duration metric: took 399.891804ms for pod "kube-scheduler-default-k8s-diff-port-982795" in "kube-system" namespace to be "Ready" ...
	I0819 19:17:24.737375  438245 pod_ready.go:39] duration metric: took 9.817484404s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:17:24.737396  438245 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:17:24.737467  438245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:17:24.753681  438245 api_server.go:72] duration metric: took 10.109231411s to wait for apiserver process to appear ...
	I0819 19:17:24.753711  438245 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:17:24.753734  438245 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8444/healthz ...
	I0819 19:17:24.757976  438245 api_server.go:279] https://192.168.61.48:8444/healthz returned 200:
	ok
	I0819 19:17:24.758875  438245 api_server.go:141] control plane version: v1.31.0
	I0819 19:17:24.758899  438245 api_server.go:131] duration metric: took 5.179486ms to wait for apiserver health ...
	I0819 19:17:24.758908  438245 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:17:24.944008  438245 system_pods.go:59] 9 kube-system pods found
	I0819 19:17:24.944053  438245 system_pods.go:61] "coredns-6f6b679f8f-845gx" [95155dd2-d46c-4445-b735-26eae16aaff9] Running
	I0819 19:17:24.944058  438245 system_pods.go:61] "coredns-6f6b679f8f-tlxtt" [150ac4be-bef1-4f0a-ab16-f085284686cb] Running
	I0819 19:17:24.944062  438245 system_pods.go:61] "etcd-default-k8s-diff-port-982795" [eb29f445-6242-4b60-a8d5-7c684df17926] Running
	I0819 19:17:24.944066  438245 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-982795" [2add6270-bf14-43e7-834b-3e629f46efa3] Running
	I0819 19:17:24.944070  438245 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-982795" [6b636d4b-0efa-4cef-b0d4-d4539ddc5c90] Running
	I0819 19:17:24.944073  438245 system_pods.go:61] "kube-proxy-2v4hk" [042d5d54-6557-4d8e-8f4e-2d56e95882ce] Running
	I0819 19:17:24.944076  438245 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-982795" [6eff3815-26b3-4e95-a754-2dc65fd29126] Running
	I0819 19:17:24.944082  438245 system_pods.go:61] "metrics-server-6867b74b74-2dp5r" [04e0ce68-d9a2-426a-a0e9-47f6f7867efd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:17:24.944086  438245 system_pods.go:61] "storage-provisioner" [23fcea86-977e-4eb1-9e5a-23d6bdfb09c0] Running
	I0819 19:17:24.944094  438245 system_pods.go:74] duration metric: took 185.180015ms to wait for pod list to return data ...
	I0819 19:17:24.944104  438245 default_sa.go:34] waiting for default service account to be created ...
	I0819 19:17:25.137108  438245 default_sa.go:45] found service account: "default"
	I0819 19:17:25.137147  438245 default_sa.go:55] duration metric: took 193.033434ms for default service account to be created ...
	I0819 19:17:25.137160  438245 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 19:17:25.340115  438245 system_pods.go:86] 9 kube-system pods found
	I0819 19:17:25.340146  438245 system_pods.go:89] "coredns-6f6b679f8f-845gx" [95155dd2-d46c-4445-b735-26eae16aaff9] Running
	I0819 19:17:25.340155  438245 system_pods.go:89] "coredns-6f6b679f8f-tlxtt" [150ac4be-bef1-4f0a-ab16-f085284686cb] Running
	I0819 19:17:25.340161  438245 system_pods.go:89] "etcd-default-k8s-diff-port-982795" [eb29f445-6242-4b60-a8d5-7c684df17926] Running
	I0819 19:17:25.340167  438245 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-982795" [2add6270-bf14-43e7-834b-3e629f46efa3] Running
	I0819 19:17:25.340173  438245 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-982795" [6b636d4b-0efa-4cef-b0d4-d4539ddc5c90] Running
	I0819 19:17:25.340177  438245 system_pods.go:89] "kube-proxy-2v4hk" [042d5d54-6557-4d8e-8f4e-2d56e95882ce] Running
	I0819 19:17:25.340182  438245 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-982795" [6eff3815-26b3-4e95-a754-2dc65fd29126] Running
	I0819 19:17:25.340192  438245 system_pods.go:89] "metrics-server-6867b74b74-2dp5r" [04e0ce68-d9a2-426a-a0e9-47f6f7867efd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:17:25.340198  438245 system_pods.go:89] "storage-provisioner" [23fcea86-977e-4eb1-9e5a-23d6bdfb09c0] Running
	I0819 19:17:25.340211  438245 system_pods.go:126] duration metric: took 203.044324ms to wait for k8s-apps to be running ...
	I0819 19:17:25.340224  438245 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 19:17:25.340278  438245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:17:25.355190  438245 system_svc.go:56] duration metric: took 14.954269ms WaitForService to wait for kubelet
	I0819 19:17:25.355223  438245 kubeadm.go:582] duration metric: took 10.710777567s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 19:17:25.355252  438245 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:17:25.537425  438245 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:17:25.537459  438245 node_conditions.go:123] node cpu capacity is 2
	I0819 19:17:25.537472  438245 node_conditions.go:105] duration metric: took 182.213218ms to run NodePressure ...
	I0819 19:17:25.537491  438245 start.go:241] waiting for startup goroutines ...
	I0819 19:17:25.537501  438245 start.go:246] waiting for cluster config update ...
	I0819 19:17:25.537516  438245 start.go:255] writing updated cluster config ...
	I0819 19:17:25.537851  438245 ssh_runner.go:195] Run: rm -f paused
	I0819 19:17:25.589212  438245 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 19:17:25.591352  438245 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-982795" cluster and "default" namespace by default
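The readiness sequence above can be replayed by hand against the same cluster. A minimal sketch, assuming the healthz endpoint, the kubelet check, and the "default-k8s-diff-port-982795" context name shown in the log (plain "kubectl get pods" stands in for the pod enumeration the log performs; it is not a command the test itself runs):

# apiserver health, same endpoint the log polls; -k because minikube's apiserver cert is not in the host trust store
curl -k https://192.168.61.48:8444/healthz
# kubelet service check, the systemd equivalent of the ssh_runner call in the log
sudo systemctl is-active kubelet
# the kube-system pods the log enumerates while waiting for k8s-apps to be running
kubectl --context default-k8s-diff-port-982795 -n kube-system get pods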
	I0819 19:17:22.902846  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:25.401911  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:29.988042  438295 system_pods.go:59] 8 kube-system pods found
	I0819 19:17:29.988074  438295 system_pods.go:61] "coredns-6f6b679f8f-7ww4z" [bbde00d4-6027-4d8d-b51e-bd68915da166] Running
	I0819 19:17:29.988080  438295 system_pods.go:61] "etcd-embed-certs-024748" [846ff0f0-5399-43fd-8e7b-1f64997cd291] Running
	I0819 19:17:29.988084  438295 system_pods.go:61] "kube-apiserver-embed-certs-024748" [3ff558d6-e82e-47a0-bb81-15244bee6470] Running
	I0819 19:17:29.988088  438295 system_pods.go:61] "kube-controller-manager-embed-certs-024748" [993b82ba-e8e7-4896-a06b-87c4f08d5985] Running
	I0819 19:17:29.988092  438295 system_pods.go:61] "kube-proxy-bmmbh" [1f77f152-f5f4-40f6-9632-1eaa36b9ea31] Running
	I0819 19:17:29.988095  438295 system_pods.go:61] "kube-scheduler-embed-certs-024748" [34684d4c-2479-45c5-883b-158cf9f974f5] Running
	I0819 19:17:29.988100  438295 system_pods.go:61] "metrics-server-6867b74b74-kxcwh" [15f86629-d916-4fdc-9ecf-9cb1b6c83f85] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:17:29.988104  438295 system_pods.go:61] "storage-provisioner" [7acb6ce1-21b6-4cdd-a5cb-76d694fc0a38] Running
	I0819 19:17:29.988113  438295 system_pods.go:74] duration metric: took 11.492481541s to wait for pod list to return data ...
	I0819 19:17:29.988120  438295 default_sa.go:34] waiting for default service account to be created ...
	I0819 19:17:29.991728  438295 default_sa.go:45] found service account: "default"
	I0819 19:17:29.991755  438295 default_sa.go:55] duration metric: took 3.62838ms for default service account to be created ...
	I0819 19:17:29.991764  438295 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 19:17:29.997212  438295 system_pods.go:86] 8 kube-system pods found
	I0819 19:17:29.997237  438295 system_pods.go:89] "coredns-6f6b679f8f-7ww4z" [bbde00d4-6027-4d8d-b51e-bd68915da166] Running
	I0819 19:17:29.997243  438295 system_pods.go:89] "etcd-embed-certs-024748" [846ff0f0-5399-43fd-8e7b-1f64997cd291] Running
	I0819 19:17:29.997247  438295 system_pods.go:89] "kube-apiserver-embed-certs-024748" [3ff558d6-e82e-47a0-bb81-15244bee6470] Running
	I0819 19:17:29.997252  438295 system_pods.go:89] "kube-controller-manager-embed-certs-024748" [993b82ba-e8e7-4896-a06b-87c4f08d5985] Running
	I0819 19:17:29.997256  438295 system_pods.go:89] "kube-proxy-bmmbh" [1f77f152-f5f4-40f6-9632-1eaa36b9ea31] Running
	I0819 19:17:29.997260  438295 system_pods.go:89] "kube-scheduler-embed-certs-024748" [34684d4c-2479-45c5-883b-158cf9f974f5] Running
	I0819 19:17:29.997267  438295 system_pods.go:89] "metrics-server-6867b74b74-kxcwh" [15f86629-d916-4fdc-9ecf-9cb1b6c83f85] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:17:29.997270  438295 system_pods.go:89] "storage-provisioner" [7acb6ce1-21b6-4cdd-a5cb-76d694fc0a38] Running
	I0819 19:17:29.997277  438295 system_pods.go:126] duration metric: took 5.507363ms to wait for k8s-apps to be running ...
	I0819 19:17:29.997283  438295 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 19:17:29.997329  438295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:17:30.015349  438295 system_svc.go:56] duration metric: took 18.05422ms WaitForService to wait for kubelet
	I0819 19:17:30.015385  438295 kubeadm.go:582] duration metric: took 4m46.148274918s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 19:17:30.015408  438295 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:17:30.019744  438295 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:17:30.019767  438295 node_conditions.go:123] node cpu capacity is 2
	I0819 19:17:30.019779  438295 node_conditions.go:105] duration metric: took 4.364435ms to run NodePressure ...
	I0819 19:17:30.019791  438295 start.go:241] waiting for startup goroutines ...
	I0819 19:17:30.019798  438295 start.go:246] waiting for cluster config update ...
	I0819 19:17:30.019809  438295 start.go:255] writing updated cluster config ...
	I0819 19:17:30.020080  438295 ssh_runner.go:195] Run: rm -f paused
	I0819 19:17:30.071945  438295 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 19:17:30.073912  438295 out.go:177] * Done! kubectl is now configured to use "embed-certs-024748" cluster and "default" namespace by default
	I0819 19:17:27.901471  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:29.901560  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:32.401214  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:34.402184  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:36.901979  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:38.902132  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:41.401103  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:43.889122  438716 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 19:17:43.889226  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:17:43.889441  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:17:43.402531  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:45.402739  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:48.889647  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:17:48.889896  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:17:47.902033  438001 pod_ready.go:103] pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace has status "Ready":"False"
	I0819 19:17:48.402784  438001 pod_ready.go:82] duration metric: took 4m0.007573449s for pod "metrics-server-6867b74b74-vxwrs" in "kube-system" namespace to be "Ready" ...
	E0819 19:17:48.402807  438001 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0819 19:17:48.402814  438001 pod_ready.go:39] duration metric: took 4m5.043625176s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 19:17:48.402837  438001 api_server.go:52] waiting for apiserver process to appear ...
	I0819 19:17:48.402866  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:17:48.402916  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:17:48.465049  438001 cri.go:89] found id: "cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094"
	I0819 19:17:48.465072  438001 cri.go:89] found id: ""
	I0819 19:17:48.465081  438001 logs.go:276] 1 containers: [cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094]
	I0819 19:17:48.465157  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:48.469640  438001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:17:48.469708  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:17:48.506800  438001 cri.go:89] found id: "27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a"
	I0819 19:17:48.506825  438001 cri.go:89] found id: ""
	I0819 19:17:48.506836  438001 logs.go:276] 1 containers: [27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a]
	I0819 19:17:48.506900  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:48.511810  438001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:17:48.511899  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:17:48.558215  438001 cri.go:89] found id: "6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa"
	I0819 19:17:48.558240  438001 cri.go:89] found id: ""
	I0819 19:17:48.558250  438001 logs.go:276] 1 containers: [6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa]
	I0819 19:17:48.558308  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:48.562785  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:17:48.562844  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:17:48.602715  438001 cri.go:89] found id: "123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f"
	I0819 19:17:48.602738  438001 cri.go:89] found id: ""
	I0819 19:17:48.602748  438001 logs.go:276] 1 containers: [123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f]
	I0819 19:17:48.602815  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:48.607456  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:17:48.607512  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:17:48.648285  438001 cri.go:89] found id: "236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a"
	I0819 19:17:48.648314  438001 cri.go:89] found id: ""
	I0819 19:17:48.648324  438001 logs.go:276] 1 containers: [236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a]
	I0819 19:17:48.648374  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:48.653772  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:17:48.653830  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:17:48.697336  438001 cri.go:89] found id: "390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346"
	I0819 19:17:48.697365  438001 cri.go:89] found id: ""
	I0819 19:17:48.697376  438001 logs.go:276] 1 containers: [390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346]
	I0819 19:17:48.697438  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:48.701661  438001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:17:48.701726  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:17:48.737952  438001 cri.go:89] found id: ""
	I0819 19:17:48.737990  438001 logs.go:276] 0 containers: []
	W0819 19:17:48.738002  438001 logs.go:278] No container was found matching "kindnet"
	I0819 19:17:48.738010  438001 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 19:17:48.738076  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 19:17:48.780047  438001 cri.go:89] found id: "fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd"
	I0819 19:17:48.780076  438001 cri.go:89] found id: "482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6"
	I0819 19:17:48.780082  438001 cri.go:89] found id: ""
	I0819 19:17:48.780092  438001 logs.go:276] 2 containers: [fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd 482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6]
	I0819 19:17:48.780168  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:48.784558  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:48.788803  438001 logs.go:123] Gathering logs for kube-apiserver [cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094] ...
	I0819 19:17:48.788826  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094"
	I0819 19:17:48.843469  438001 logs.go:123] Gathering logs for kube-scheduler [123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f] ...
	I0819 19:17:48.843501  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f"
	I0819 19:17:48.884461  438001 logs.go:123] Gathering logs for kube-proxy [236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a] ...
	I0819 19:17:48.884495  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a"
	I0819 19:17:48.927064  438001 logs.go:123] Gathering logs for storage-provisioner [fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd] ...
	I0819 19:17:48.927093  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd"
	I0819 19:17:48.963812  438001 logs.go:123] Gathering logs for container status ...
	I0819 19:17:48.963845  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:17:49.017381  438001 logs.go:123] Gathering logs for kubelet ...
	I0819 19:17:49.017420  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:17:49.093572  438001 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:17:49.093614  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 19:17:49.236680  438001 logs.go:123] Gathering logs for coredns [6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa] ...
	I0819 19:17:49.236721  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa"
	I0819 19:17:49.274636  438001 logs.go:123] Gathering logs for kube-controller-manager [390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346] ...
	I0819 19:17:49.274677  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346"
	I0819 19:17:49.326208  438001 logs.go:123] Gathering logs for storage-provisioner [482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6] ...
	I0819 19:17:49.326242  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6"
	I0819 19:17:49.363589  438001 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:17:49.363628  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:17:49.841705  438001 logs.go:123] Gathering logs for dmesg ...
	I0819 19:17:49.841757  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:17:49.858466  438001 logs.go:123] Gathering logs for etcd [27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a] ...
	I0819 19:17:49.858504  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a"
	I0819 19:17:52.406197  438001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 19:17:52.422951  438001 api_server.go:72] duration metric: took 4m16.822246565s to wait for apiserver process to appear ...
	I0819 19:17:52.422981  438001 api_server.go:88] waiting for apiserver healthz status ...
	I0819 19:17:52.423019  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:17:52.423075  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:17:52.464305  438001 cri.go:89] found id: "cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094"
	I0819 19:17:52.464327  438001 cri.go:89] found id: ""
	I0819 19:17:52.464335  438001 logs.go:276] 1 containers: [cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094]
	I0819 19:17:52.464387  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:52.468824  438001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:17:52.468904  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:17:52.508907  438001 cri.go:89] found id: "27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a"
	I0819 19:17:52.508929  438001 cri.go:89] found id: ""
	I0819 19:17:52.508937  438001 logs.go:276] 1 containers: [27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a]
	I0819 19:17:52.508998  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:52.513206  438001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:17:52.513281  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:17:52.553908  438001 cri.go:89] found id: "6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa"
	I0819 19:17:52.553940  438001 cri.go:89] found id: ""
	I0819 19:17:52.553948  438001 logs.go:276] 1 containers: [6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa]
	I0819 19:17:52.554007  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:52.558420  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:17:52.558487  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:17:52.598450  438001 cri.go:89] found id: "123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f"
	I0819 19:17:52.598480  438001 cri.go:89] found id: ""
	I0819 19:17:52.598491  438001 logs.go:276] 1 containers: [123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f]
	I0819 19:17:52.598564  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:52.603421  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:17:52.603485  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:17:52.639017  438001 cri.go:89] found id: "236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a"
	I0819 19:17:52.639049  438001 cri.go:89] found id: ""
	I0819 19:17:52.639060  438001 logs.go:276] 1 containers: [236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a]
	I0819 19:17:52.639129  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:52.645313  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:17:52.645392  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:17:52.687266  438001 cri.go:89] found id: "390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346"
	I0819 19:17:52.687296  438001 cri.go:89] found id: ""
	I0819 19:17:52.687305  438001 logs.go:276] 1 containers: [390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346]
	I0819 19:17:52.687369  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:52.691770  438001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:17:52.691830  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:17:52.734067  438001 cri.go:89] found id: ""
	I0819 19:17:52.734098  438001 logs.go:276] 0 containers: []
	W0819 19:17:52.734107  438001 logs.go:278] No container was found matching "kindnet"
	I0819 19:17:52.734113  438001 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 19:17:52.734171  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 19:17:52.781039  438001 cri.go:89] found id: "fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd"
	I0819 19:17:52.781062  438001 cri.go:89] found id: "482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6"
	I0819 19:17:52.781066  438001 cri.go:89] found id: ""
	I0819 19:17:52.781074  438001 logs.go:276] 2 containers: [fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd 482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6]
	I0819 19:17:52.781135  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:52.785730  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:52.789946  438001 logs.go:123] Gathering logs for kube-scheduler [123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f] ...
	I0819 19:17:52.789978  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f"
	I0819 19:17:52.830509  438001 logs.go:123] Gathering logs for kube-controller-manager [390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346] ...
	I0819 19:17:52.830541  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346"
	I0819 19:17:52.892964  438001 logs.go:123] Gathering logs for container status ...
	I0819 19:17:52.893017  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:17:52.947999  438001 logs.go:123] Gathering logs for kubelet ...
	I0819 19:17:52.948028  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:17:53.019377  438001 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:17:53.019423  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 19:17:53.134032  438001 logs.go:123] Gathering logs for kube-apiserver [cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094] ...
	I0819 19:17:53.134069  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094"
	I0819 19:17:53.186159  438001 logs.go:123] Gathering logs for etcd [27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a] ...
	I0819 19:17:53.186193  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a"
	I0819 19:17:53.236918  438001 logs.go:123] Gathering logs for storage-provisioner [482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6] ...
	I0819 19:17:53.236949  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6"
	I0819 19:17:53.275211  438001 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:17:53.275242  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:17:53.710352  438001 logs.go:123] Gathering logs for dmesg ...
	I0819 19:17:53.710396  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:17:53.726691  438001 logs.go:123] Gathering logs for coredns [6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa] ...
	I0819 19:17:53.726731  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa"
	I0819 19:17:53.768322  438001 logs.go:123] Gathering logs for kube-proxy [236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a] ...
	I0819 19:17:53.768361  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a"
	I0819 19:17:53.808546  438001 logs.go:123] Gathering logs for storage-provisioner [fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd] ...
	I0819 19:17:53.808577  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd"
	I0819 19:17:56.362339  438001 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I0819 19:17:56.366636  438001 api_server.go:279] https://192.168.39.106:8443/healthz returned 200:
	ok
	I0819 19:17:56.367838  438001 api_server.go:141] control plane version: v1.31.0
	I0819 19:17:56.367867  438001 api_server.go:131] duration metric: took 3.944877317s to wait for apiserver health ...
	I0819 19:17:56.367891  438001 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 19:17:56.367925  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:17:56.367991  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:17:56.412151  438001 cri.go:89] found id: "cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094"
	I0819 19:17:56.412179  438001 cri.go:89] found id: ""
	I0819 19:17:56.412187  438001 logs.go:276] 1 containers: [cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094]
	I0819 19:17:56.412247  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:56.416620  438001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:17:56.416795  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:17:56.456888  438001 cri.go:89] found id: "27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a"
	I0819 19:17:56.456918  438001 cri.go:89] found id: ""
	I0819 19:17:56.456927  438001 logs.go:276] 1 containers: [27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a]
	I0819 19:17:56.456984  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:56.461563  438001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:17:56.461667  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:17:56.506990  438001 cri.go:89] found id: "6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa"
	I0819 19:17:56.507018  438001 cri.go:89] found id: ""
	I0819 19:17:56.507028  438001 logs.go:276] 1 containers: [6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa]
	I0819 19:17:56.507099  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:56.511547  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:17:56.511616  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:17:56.551734  438001 cri.go:89] found id: "123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f"
	I0819 19:17:56.551761  438001 cri.go:89] found id: ""
	I0819 19:17:56.551772  438001 logs.go:276] 1 containers: [123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f]
	I0819 19:17:56.551837  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:56.556963  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:17:56.557039  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:17:56.601862  438001 cri.go:89] found id: "236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a"
	I0819 19:17:56.601892  438001 cri.go:89] found id: ""
	I0819 19:17:56.601902  438001 logs.go:276] 1 containers: [236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a]
	I0819 19:17:56.601971  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:56.606618  438001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:17:56.606706  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:17:56.649476  438001 cri.go:89] found id: "390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346"
	I0819 19:17:56.649501  438001 cri.go:89] found id: ""
	I0819 19:17:56.649510  438001 logs.go:276] 1 containers: [390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346]
	I0819 19:17:56.649561  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:56.654009  438001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:17:56.654071  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:17:56.707479  438001 cri.go:89] found id: ""
	I0819 19:17:56.707506  438001 logs.go:276] 0 containers: []
	W0819 19:17:56.707518  438001 logs.go:278] No container was found matching "kindnet"
	I0819 19:17:56.707527  438001 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 19:17:56.707585  438001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 19:17:56.749937  438001 cri.go:89] found id: "fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd"
	I0819 19:17:56.749961  438001 cri.go:89] found id: "482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6"
	I0819 19:17:56.749966  438001 cri.go:89] found id: ""
	I0819 19:17:56.749973  438001 logs.go:276] 2 containers: [fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd 482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6]
	I0819 19:17:56.750026  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:56.754791  438001 ssh_runner.go:195] Run: which crictl
	I0819 19:17:56.758672  438001 logs.go:123] Gathering logs for etcd [27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a] ...
	I0819 19:17:56.758700  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27d104597d0ca1b418bd0cab630536ff2d859717c314b48ea994680b21a5bd9a"
	I0819 19:17:56.811420  438001 logs.go:123] Gathering logs for kube-controller-manager [390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346] ...
	I0819 19:17:56.811461  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 390aeac356048873634022bb4093a927ddaf293b994b7316b79cfc2c4c329346"
	I0819 19:17:56.871550  438001 logs.go:123] Gathering logs for storage-provisioner [482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6] ...
	I0819 19:17:56.871588  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 482a17643a2dedc658bdc88ca54e2ffb40166833acfc42adf452364226e51dc6"
	I0819 19:17:56.918183  438001 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:17:56.918224  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 19:17:57.297614  438001 logs.go:123] Gathering logs for container status ...
	I0819 19:17:57.297653  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:17:57.339092  438001 logs.go:123] Gathering logs for dmesg ...
	I0819 19:17:57.339127  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:17:57.355787  438001 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:17:57.355820  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 19:17:57.486287  438001 logs.go:123] Gathering logs for kube-apiserver [cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094] ...
	I0819 19:17:57.486328  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdac290df2d44c9b30a9c4378f98137a73e603fccd18bc228cca5d017f0a7094"
	I0819 19:17:57.535864  438001 logs.go:123] Gathering logs for coredns [6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa] ...
	I0819 19:17:57.535903  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ad390cacd3d89ad9a5e7af71dab26d472a67971ffda086057b7cf0e0a9560aa"
	I0819 19:17:57.577211  438001 logs.go:123] Gathering logs for kube-scheduler [123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f] ...
	I0819 19:17:57.577248  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 123f84ccdc9cf1aa830891307b79d42c9166f018bff19b498a5107e428feb92f"
	I0819 19:17:57.615928  438001 logs.go:123] Gathering logs for kube-proxy [236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a] ...
	I0819 19:17:57.615962  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 236b4296ad713b251ca958489ebfc4ce41bd2cb64d538cf0cf5f72cc9243e94a"
	I0819 19:17:57.655413  438001 logs.go:123] Gathering logs for storage-provisioner [fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd] ...
	I0819 19:17:57.655445  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd16c88623359ff9e44155c82c7e33b07dc040678d1d6f1915a25d80a5db0bbd"
	I0819 19:17:57.704470  438001 logs.go:123] Gathering logs for kubelet ...
	I0819 19:17:57.704502  438001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:18:00.281191  438001 system_pods.go:59] 8 kube-system pods found
	I0819 19:18:00.281223  438001 system_pods.go:61] "coredns-6f6b679f8f-22lbt" [c8a5cabd-41d4-41cb-91c1-2db1f3471db3] Running
	I0819 19:18:00.281228  438001 system_pods.go:61] "etcd-no-preload-278232" [36d555a1-33e4-4c6c-b24e-2fee4fd84f2b] Running
	I0819 19:18:00.281232  438001 system_pods.go:61] "kube-apiserver-no-preload-278232" [af7173e5-c4ac-4ece-b8b9-bb81cb6b9bfd] Running
	I0819 19:18:00.281235  438001 system_pods.go:61] "kube-controller-manager-no-preload-278232" [2463d97a-5221-40ce-8fd7-08151165d6f7] Running
	I0819 19:18:00.281238  438001 system_pods.go:61] "kube-proxy-rcf49" [85d5814a-1ba9-46be-ab11-17bf40c0f029] Running
	I0819 19:18:00.281241  438001 system_pods.go:61] "kube-scheduler-no-preload-278232" [3b327704-f70c-4d6f-a774-15427a305472] Running
	I0819 19:18:00.281247  438001 system_pods.go:61] "metrics-server-6867b74b74-vxwrs" [e8b74128-b393-4f0f-90fe-e05f20d54acd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:18:00.281252  438001 system_pods.go:61] "storage-provisioner" [24766475-1a5b-4f1a-9350-3e891b5272cc] Running
	I0819 19:18:00.281260  438001 system_pods.go:74] duration metric: took 3.913361626s to wait for pod list to return data ...
	I0819 19:18:00.281267  438001 default_sa.go:34] waiting for default service account to be created ...
	I0819 19:18:00.283873  438001 default_sa.go:45] found service account: "default"
	I0819 19:18:00.283898  438001 default_sa.go:55] duration metric: took 2.625775ms for default service account to be created ...
	I0819 19:18:00.283907  438001 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 19:18:00.288985  438001 system_pods.go:86] 8 kube-system pods found
	I0819 19:18:00.289012  438001 system_pods.go:89] "coredns-6f6b679f8f-22lbt" [c8a5cabd-41d4-41cb-91c1-2db1f3471db3] Running
	I0819 19:18:00.289018  438001 system_pods.go:89] "etcd-no-preload-278232" [36d555a1-33e4-4c6c-b24e-2fee4fd84f2b] Running
	I0819 19:18:00.289022  438001 system_pods.go:89] "kube-apiserver-no-preload-278232" [af7173e5-c4ac-4ece-b8b9-bb81cb6b9bfd] Running
	I0819 19:18:00.289028  438001 system_pods.go:89] "kube-controller-manager-no-preload-278232" [2463d97a-5221-40ce-8fd7-08151165d6f7] Running
	I0819 19:18:00.289033  438001 system_pods.go:89] "kube-proxy-rcf49" [85d5814a-1ba9-46be-ab11-17bf40c0f029] Running
	I0819 19:18:00.289038  438001 system_pods.go:89] "kube-scheduler-no-preload-278232" [3b327704-f70c-4d6f-a774-15427a305472] Running
	I0819 19:18:00.289047  438001 system_pods.go:89] "metrics-server-6867b74b74-vxwrs" [e8b74128-b393-4f0f-90fe-e05f20d54acd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 19:18:00.289056  438001 system_pods.go:89] "storage-provisioner" [24766475-1a5b-4f1a-9350-3e891b5272cc] Running
	I0819 19:18:00.289067  438001 system_pods.go:126] duration metric: took 5.154385ms to wait for k8s-apps to be running ...
	I0819 19:18:00.289081  438001 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 19:18:00.289132  438001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:18:00.307128  438001 system_svc.go:56] duration metric: took 18.036826ms WaitForService to wait for kubelet
	I0819 19:18:00.307160  438001 kubeadm.go:582] duration metric: took 4m24.706461383s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 19:18:00.307183  438001 node_conditions.go:102] verifying NodePressure condition ...
	I0819 19:18:00.309818  438001 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 19:18:00.309866  438001 node_conditions.go:123] node cpu capacity is 2
	I0819 19:18:00.309879  438001 node_conditions.go:105] duration metric: took 2.691554ms to run NodePressure ...
	I0819 19:18:00.309892  438001 start.go:241] waiting for startup goroutines ...
	I0819 19:18:00.309901  438001 start.go:246] waiting for cluster config update ...
	I0819 19:18:00.309918  438001 start.go:255] writing updated cluster config ...
	I0819 19:18:00.310268  438001 ssh_runner.go:195] Run: rm -f paused
	I0819 19:18:00.366211  438001 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 19:18:00.368280  438001 out.go:177] * Done! kubectl is now configured to use "no-preload-278232" cluster and "default" namespace by default
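Throughout the run above, the one kube-system pod that never reports Ready is metrics-server-6867b74b74-vxwrs (Pending / ContainersNotReady), which is what finally trips the 4m0s WaitExtra deadline. A hedged way to inspect it from the freshly configured context (pod name, namespace, and context name are taken from the log; get/describe are standard kubectl commands, not something the test itself runs):

kubectl --context no-preload-278232 -n kube-system get pod metrics-server-6867b74b74-vxwrs -o wide
kubectl --context no-preload-278232 -n kube-system describe pod metrics-server-6867b74b74-vxwrs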
	I0819 19:17:58.890611  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:17:58.890832  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:18:18.891960  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:18:18.892243  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:18:58.894609  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:18:58.894854  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:18:58.894869  438716 kubeadm.go:310] 
	I0819 19:18:58.894912  438716 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 19:18:58.894967  438716 kubeadm.go:310] 		timed out waiting for the condition
	I0819 19:18:58.894981  438716 kubeadm.go:310] 
	I0819 19:18:58.895024  438716 kubeadm.go:310] 	This error is likely caused by:
	I0819 19:18:58.895072  438716 kubeadm.go:310] 		- The kubelet is not running
	I0819 19:18:58.895344  438716 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 19:18:58.895388  438716 kubeadm.go:310] 
	I0819 19:18:58.895518  438716 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 19:18:58.895613  438716 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 19:18:58.895668  438716 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 19:18:58.895695  438716 kubeadm.go:310] 
	I0819 19:18:58.895839  438716 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 19:18:58.895959  438716 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 19:18:58.895972  438716 kubeadm.go:310] 
	I0819 19:18:58.896072  438716 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 19:18:58.896154  438716 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 19:18:58.896220  438716 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 19:18:58.896284  438716 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 19:18:58.896314  438716 kubeadm.go:310] 
	I0819 19:18:58.896819  438716 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 19:18:58.896946  438716 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 19:18:58.897028  438716 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0819 19:18:58.897193  438716 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0819 19:18:58.897249  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 19:18:59.361073  438716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 19:18:59.375791  438716 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 19:18:59.387650  438716 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 19:18:59.387697  438716 kubeadm.go:157] found existing configuration files:
	
	I0819 19:18:59.387756  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 19:18:59.397345  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 19:18:59.397409  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 19:18:59.408060  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 19:18:59.417658  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 19:18:59.417731  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 19:18:59.427765  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 19:18:59.437636  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 19:18:59.437712  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 19:18:59.447506  438716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 19:18:59.457100  438716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 19:18:59.457165  438716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 19:18:59.467185  438716 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 19:18:59.540706  438716 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 19:18:59.541005  438716 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 19:18:59.694109  438716 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 19:18:59.694238  438716 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 19:18:59.694350  438716 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 19:18:59.874268  438716 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 19:18:59.876259  438716 out.go:235]   - Generating certificates and keys ...
	I0819 19:18:59.876362  438716 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 19:18:59.876441  438716 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 19:18:59.876569  438716 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 19:18:59.876654  438716 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 19:18:59.876751  438716 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 19:18:59.876824  438716 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 19:18:59.876900  438716 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 19:18:59.877076  438716 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 19:18:59.877571  438716 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 19:18:59.877997  438716 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 19:18:59.878139  438716 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 19:18:59.878241  438716 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 19:19:00.153380  438716 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 19:19:00.359863  438716 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 19:19:00.470797  438716 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 19:19:00.590041  438716 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 19:19:00.614332  438716 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 19:19:00.615415  438716 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 19:19:00.615473  438716 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 19:19:00.756167  438716 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 19:19:00.757737  438716 out.go:235]   - Booting up control plane ...
	I0819 19:19:00.757873  438716 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 19:19:00.761484  438716 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 19:19:00.762431  438716 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 19:19:00.763241  438716 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 19:19:00.766155  438716 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 19:19:40.770166  438716 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 19:19:40.770378  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:19:40.770543  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:19:45.771352  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:19:45.771587  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:19:55.772027  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:19:55.772243  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:20:15.773008  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:20:15.773238  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:20:55.771311  438716 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 19:20:55.771517  438716 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 19:20:55.771530  438716 kubeadm.go:310] 
	I0819 19:20:55.771578  438716 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 19:20:55.771750  438716 kubeadm.go:310] 		timed out waiting for the condition
	I0819 19:20:55.771784  438716 kubeadm.go:310] 
	I0819 19:20:55.771845  438716 kubeadm.go:310] 	This error is likely caused by:
	I0819 19:20:55.771891  438716 kubeadm.go:310] 		- The kubelet is not running
	I0819 19:20:55.772014  438716 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 19:20:55.772027  438716 kubeadm.go:310] 
	I0819 19:20:55.772125  438716 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 19:20:55.772162  438716 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 19:20:55.772188  438716 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 19:20:55.772196  438716 kubeadm.go:310] 
	I0819 19:20:55.772272  438716 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 19:20:55.772336  438716 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 19:20:55.772343  438716 kubeadm.go:310] 
	I0819 19:20:55.772439  438716 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 19:20:55.772520  438716 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 19:20:55.772581  438716 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 19:20:55.772637  438716 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 19:20:55.772645  438716 kubeadm.go:310] 
	I0819 19:20:55.773758  438716 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 19:20:55.773880  438716 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 19:20:55.773971  438716 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0819 19:20:55.774067  438716 kubeadm.go:394] duration metric: took 7m57.361589371s to StartCluster
	I0819 19:20:55.774157  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 19:20:55.774243  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 19:20:55.818428  438716 cri.go:89] found id: ""
	I0819 19:20:55.818460  438716 logs.go:276] 0 containers: []
	W0819 19:20:55.818468  438716 logs.go:278] No container was found matching "kube-apiserver"
	I0819 19:20:55.818475  438716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 19:20:55.818535  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 19:20:55.857714  438716 cri.go:89] found id: ""
	I0819 19:20:55.857747  438716 logs.go:276] 0 containers: []
	W0819 19:20:55.857758  438716 logs.go:278] No container was found matching "etcd"
	I0819 19:20:55.857766  438716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 19:20:55.857841  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 19:20:55.891917  438716 cri.go:89] found id: ""
	I0819 19:20:55.891948  438716 logs.go:276] 0 containers: []
	W0819 19:20:55.891967  438716 logs.go:278] No container was found matching "coredns"
	I0819 19:20:55.891976  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 19:20:55.892046  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 19:20:55.930608  438716 cri.go:89] found id: ""
	I0819 19:20:55.930643  438716 logs.go:276] 0 containers: []
	W0819 19:20:55.930656  438716 logs.go:278] No container was found matching "kube-scheduler"
	I0819 19:20:55.930665  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 19:20:55.930734  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 19:20:55.966563  438716 cri.go:89] found id: ""
	I0819 19:20:55.966591  438716 logs.go:276] 0 containers: []
	W0819 19:20:55.966600  438716 logs.go:278] No container was found matching "kube-proxy"
	I0819 19:20:55.966607  438716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 19:20:55.966670  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 19:20:56.010392  438716 cri.go:89] found id: ""
	I0819 19:20:56.010421  438716 logs.go:276] 0 containers: []
	W0819 19:20:56.010430  438716 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 19:20:56.010436  438716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 19:20:56.010491  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 19:20:56.066940  438716 cri.go:89] found id: ""
	I0819 19:20:56.066973  438716 logs.go:276] 0 containers: []
	W0819 19:20:56.066985  438716 logs.go:278] No container was found matching "kindnet"
	I0819 19:20:56.066994  438716 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 19:20:56.067062  438716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 19:20:56.118852  438716 cri.go:89] found id: ""
	I0819 19:20:56.118881  438716 logs.go:276] 0 containers: []
	W0819 19:20:56.118894  438716 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 19:20:56.118909  438716 logs.go:123] Gathering logs for container status ...
	I0819 19:20:56.118925  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 19:20:56.158224  438716 logs.go:123] Gathering logs for kubelet ...
	I0819 19:20:56.158263  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 19:20:56.211882  438716 logs.go:123] Gathering logs for dmesg ...
	I0819 19:20:56.211925  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 19:20:56.228082  438716 logs.go:123] Gathering logs for describe nodes ...
	I0819 19:20:56.228124  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 19:20:56.307857  438716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 19:20:56.307880  438716 logs.go:123] Gathering logs for CRI-O ...
	I0819 19:20:56.307893  438716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0819 19:20:56.414797  438716 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0819 19:20:56.414885  438716 out.go:270] * 
	W0819 19:20:56.415020  438716 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 19:20:56.415039  438716 out.go:270] * 
	W0819 19:20:56.416031  438716 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 19:20:56.419869  438716 out.go:201] 
	W0819 19:20:56.421262  438716 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 19:20:56.421319  438716 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0819 19:20:56.421351  438716 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0819 19:20:56.422942  438716 out.go:201] 
	
	
	==> CRI-O <==
	Aug 19 19:31:59 old-k8s-version-104669 crio[655]: time="2024-08-19 19:31:59.468507336Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095919468478274,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c59894ea-a6f6-4a82-bcad-4454372248a0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:31:59 old-k8s-version-104669 crio[655]: time="2024-08-19 19:31:59.469178591Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aa26f426-f0ad-4349-8e51-9c47ed0f57b8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:31:59 old-k8s-version-104669 crio[655]: time="2024-08-19 19:31:59.469250857Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aa26f426-f0ad-4349-8e51-9c47ed0f57b8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:31:59 old-k8s-version-104669 crio[655]: time="2024-08-19 19:31:59.469289518Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=aa26f426-f0ad-4349-8e51-9c47ed0f57b8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:31:59 old-k8s-version-104669 crio[655]: time="2024-08-19 19:31:59.499420703Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=44a6a823-41ae-4976-b0e8-c7c9c70b08c0 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:31:59 old-k8s-version-104669 crio[655]: time="2024-08-19 19:31:59.499513206Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=44a6a823-41ae-4976-b0e8-c7c9c70b08c0 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:31:59 old-k8s-version-104669 crio[655]: time="2024-08-19 19:31:59.500696509Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d762a479-fac2-497e-b75d-f9cfbe7bfbd2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:31:59 old-k8s-version-104669 crio[655]: time="2024-08-19 19:31:59.501120427Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095919501056399,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d762a479-fac2-497e-b75d-f9cfbe7bfbd2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:31:59 old-k8s-version-104669 crio[655]: time="2024-08-19 19:31:59.501653138Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ce905d2c-994e-479f-8ca8-16be04f91990 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:31:59 old-k8s-version-104669 crio[655]: time="2024-08-19 19:31:59.501706222Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ce905d2c-994e-479f-8ca8-16be04f91990 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:31:59 old-k8s-version-104669 crio[655]: time="2024-08-19 19:31:59.501734556Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ce905d2c-994e-479f-8ca8-16be04f91990 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:31:59 old-k8s-version-104669 crio[655]: time="2024-08-19 19:31:59.534658910Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=72b4af5c-e1c2-431d-9f43-1a1755fea715 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:31:59 old-k8s-version-104669 crio[655]: time="2024-08-19 19:31:59.534728460Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=72b4af5c-e1c2-431d-9f43-1a1755fea715 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:31:59 old-k8s-version-104669 crio[655]: time="2024-08-19 19:31:59.536033822Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2afef1c2-0b9c-4e78-b2ea-a6c5fa8ae0f1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:31:59 old-k8s-version-104669 crio[655]: time="2024-08-19 19:31:59.536488377Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095919536466458,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2afef1c2-0b9c-4e78-b2ea-a6c5fa8ae0f1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:31:59 old-k8s-version-104669 crio[655]: time="2024-08-19 19:31:59.537146672Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4b02225a-458a-4738-9187-4ebb99404710 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:31:59 old-k8s-version-104669 crio[655]: time="2024-08-19 19:31:59.537227832Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4b02225a-458a-4738-9187-4ebb99404710 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:31:59 old-k8s-version-104669 crio[655]: time="2024-08-19 19:31:59.537269016Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=4b02225a-458a-4738-9187-4ebb99404710 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:31:59 old-k8s-version-104669 crio[655]: time="2024-08-19 19:31:59.575454375Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3c69f6df-706b-4b73-9e9c-2df9abe4a318 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:31:59 old-k8s-version-104669 crio[655]: time="2024-08-19 19:31:59.575564897Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3c69f6df-706b-4b73-9e9c-2df9abe4a318 name=/runtime.v1.RuntimeService/Version
	Aug 19 19:31:59 old-k8s-version-104669 crio[655]: time="2024-08-19 19:31:59.577036990Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c7734ec3-a7ba-4d81-b38e-d6581e97fed3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:31:59 old-k8s-version-104669 crio[655]: time="2024-08-19 19:31:59.577486474Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724095919577462768,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c7734ec3-a7ba-4d81-b38e-d6581e97fed3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 19:31:59 old-k8s-version-104669 crio[655]: time="2024-08-19 19:31:59.578261257Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a1cc4cce-17c4-42d8-a901-b9139e00947c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:31:59 old-k8s-version-104669 crio[655]: time="2024-08-19 19:31:59.578313427Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a1cc4cce-17c4-42d8-a901-b9139e00947c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 19:31:59 old-k8s-version-104669 crio[655]: time="2024-08-19 19:31:59.578346355Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a1cc4cce-17c4-42d8-a901-b9139e00947c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug19 19:12] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050789] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041369] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.978049] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.658614] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.655874] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.305057] systemd-fstab-generator[577]: Ignoring "noauto" option for root device
	[  +0.056862] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065398] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.183560] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.167037] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.268786] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +6.546314] systemd-fstab-generator[904]: Ignoring "noauto" option for root device
	[  +0.062802] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.070433] systemd-fstab-generator[1029]: Ignoring "noauto" option for root device
	[Aug19 19:13] kauditd_printk_skb: 46 callbacks suppressed
	[Aug19 19:17] systemd-fstab-generator[5079]: Ignoring "noauto" option for root device
	[Aug19 19:18] systemd-fstab-generator[5368]: Ignoring "noauto" option for root device
	[  +0.069874] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:31:59 up 19 min,  0 users,  load average: 0.05, 0.04, 0.04
	Linux old-k8s-version-104669 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 19 19:31:57 old-k8s-version-104669 kubelet[6835]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Aug 19 19:31:57 old-k8s-version-104669 kubelet[6835]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Aug 19 19:31:57 old-k8s-version-104669 kubelet[6835]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Aug 19 19:31:57 old-k8s-version-104669 kubelet[6835]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000dce6f0)
	Aug 19 19:31:57 old-k8s-version-104669 kubelet[6835]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Aug 19 19:31:57 old-k8s-version-104669 kubelet[6835]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000be1ef0, 0x4f0ac20, 0xc000c7f8b0, 0x1, 0xc00009c0c0)
	Aug 19 19:31:57 old-k8s-version-104669 kubelet[6835]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Aug 19 19:31:57 old-k8s-version-104669 kubelet[6835]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000c542a0, 0xc00009c0c0)
	Aug 19 19:31:57 old-k8s-version-104669 kubelet[6835]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Aug 19 19:31:57 old-k8s-version-104669 kubelet[6835]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Aug 19 19:31:57 old-k8s-version-104669 kubelet[6835]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Aug 19 19:31:57 old-k8s-version-104669 kubelet[6835]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000c8c7e0, 0xc000ca91a0)
	Aug 19 19:31:57 old-k8s-version-104669 kubelet[6835]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Aug 19 19:31:57 old-k8s-version-104669 kubelet[6835]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Aug 19 19:31:57 old-k8s-version-104669 kubelet[6835]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Aug 19 19:31:57 old-k8s-version-104669 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 19 19:31:57 old-k8s-version-104669 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Aug 19 19:31:58 old-k8s-version-104669 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 135.
	Aug 19 19:31:58 old-k8s-version-104669 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 19 19:31:58 old-k8s-version-104669 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 19 19:31:58 old-k8s-version-104669 kubelet[6861]: I0819 19:31:58.368461    6861 server.go:416] Version: v1.20.0
	Aug 19 19:31:58 old-k8s-version-104669 kubelet[6861]: I0819 19:31:58.368979    6861 server.go:837] Client rotation is on, will bootstrap in background
	Aug 19 19:31:58 old-k8s-version-104669 kubelet[6861]: I0819 19:31:58.372397    6861 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 19 19:31:58 old-k8s-version-104669 kubelet[6861]: W0819 19:31:58.374311    6861 manager.go:159] Cannot detect current cgroup on cgroup v2
	Aug 19 19:31:58 old-k8s-version-104669 kubelet[6861]: I0819 19:31:58.374818    6861 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-104669 -n old-k8s-version-104669
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-104669 -n old-k8s-version-104669: exit status 2 (235.675455ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-104669" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (117.54s)
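
The failure above comes down to the kubelet never becoming healthy: kubeadm polls http://localhost:10248/healthz while the v1.20.0 kubelet keeps crash-looping on this node (systemd reports "restart counter is at 135" and the kubelet warns "Cannot detect current cgroup on cgroup v2"), so no control-plane containers are ever created and the apiserver stays Stopped. The log's own suggestion is to retry with an explicit kubelet cgroup driver. A minimal sketch of that retry, assuming the same out/minikube-linux-amd64 binary and old-k8s-version-104669 profile; the command shape is illustrative, and only the --extra-config value, the journalctl check, and the Kubernetes version are taken from the log above:

	# confirm the kubelet crash loop on the node (the check suggested in the log)
	out/minikube-linux-amd64 -p old-k8s-version-104669 ssh "sudo journalctl -xeu kubelet | tail -n 50"

	# retry the start with the suggested kubelet cgroup driver override
	out/minikube-linux-amd64 start -p old-k8s-version-104669 \
		--kubernetes-version=v1.20.0 \
		--extra-config=kubelet.cgroup-driver=systemd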


Test pass (251/318)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 25.34
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.0/json-events 13.29
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.06
18 TestDownloadOnly/v1.31.0/DeleteAll 0.13
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.6
22 TestOffline 87.9
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 136.71
31 TestAddons/serial/GCPAuth/Namespaces 0.16
33 TestAddons/parallel/Registry 16.85
35 TestAddons/parallel/InspektorGadget 11.04
37 TestAddons/parallel/HelmTiller 11.04
39 TestAddons/parallel/CSI 80.72
40 TestAddons/parallel/Headlamp 19.03
41 TestAddons/parallel/CloudSpanner 6.59
42 TestAddons/parallel/LocalPath 12.27
43 TestAddons/parallel/NvidiaDevicePlugin 5.55
44 TestAddons/parallel/Yakd 12.11
46 TestCertOptions 56.27
47 TestCertExpiration 319.52
49 TestForceSystemdFlag 77.35
50 TestForceSystemdEnv 72.43
52 TestKVMDriverInstallOrUpdate 5.11
56 TestErrorSpam/setup 45.19
57 TestErrorSpam/start 0.35
58 TestErrorSpam/status 0.74
59 TestErrorSpam/pause 1.57
60 TestErrorSpam/unpause 1.78
61 TestErrorSpam/stop 5.71
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 85.76
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 33.28
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.07
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.28
73 TestFunctional/serial/CacheCmd/cache/add_local 2.27
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.7
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.11
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 85.48
83 TestFunctional/serial/LogsCmd 1.36
84 TestFunctional/serial/LogsFileCmd 1.39
85 TestFunctional/serial/InvalidService 4.31
87 TestFunctional/parallel/ConfigCmd 0.34
88 TestFunctional/parallel/DashboardCmd 14.54
89 TestFunctional/parallel/DryRun 0.27
90 TestFunctional/parallel/InternationalLanguage 0.14
91 TestFunctional/parallel/StatusCmd 0.99
95 TestFunctional/parallel/ServiceCmdConnect 23.53
96 TestFunctional/parallel/AddonsCmd 0.12
97 TestFunctional/parallel/PersistentVolumeClaim 47.35
99 TestFunctional/parallel/SSHCmd 0.42
100 TestFunctional/parallel/CpCmd 1.31
101 TestFunctional/parallel/MySQL 24.85
102 TestFunctional/parallel/FileSync 0.2
103 TestFunctional/parallel/CertSync 1.29
107 TestFunctional/parallel/NodeLabels 0.07
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.44
111 TestFunctional/parallel/License 0.63
112 TestFunctional/parallel/Version/short 0.05
113 TestFunctional/parallel/Version/components 0.61
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
118 TestFunctional/parallel/ImageCommands/ImageBuild 3.56
119 TestFunctional/parallel/ImageCommands/Setup 1.94
120 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
121 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
122 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
123 TestFunctional/parallel/ProfileCmd/profile_not_create 0.3
129 TestFunctional/parallel/ProfileCmd/profile_list 0.28
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.55
135 TestFunctional/parallel/ProfileCmd/profile_json_output 0.28
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.17
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.22
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 4.07
139 TestFunctional/parallel/ImageCommands/ImageRemove 1.09
140 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 3.4
141 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.56
142 TestFunctional/parallel/ServiceCmd/DeployApp 7.31
143 TestFunctional/parallel/MountCmd/any-port 8.74
144 TestFunctional/parallel/ServiceCmd/List 0.48
145 TestFunctional/parallel/ServiceCmd/JSONOutput 0.46
146 TestFunctional/parallel/ServiceCmd/HTTPS 0.35
147 TestFunctional/parallel/ServiceCmd/Format 0.31
148 TestFunctional/parallel/ServiceCmd/URL 0.31
149 TestFunctional/parallel/MountCmd/specific-port 2.22
150 TestFunctional/parallel/MountCmd/VerifyCleanup 1.7
151 TestFunctional/delete_echo-server_images 0.03
152 TestFunctional/delete_my-image_image 0.02
153 TestFunctional/delete_minikube_cached_images 0.02
157 TestMultiControlPlane/serial/StartCluster 204.87
158 TestMultiControlPlane/serial/DeployApp 6.14
159 TestMultiControlPlane/serial/PingHostFromPods 1.26
160 TestMultiControlPlane/serial/AddWorkerNode 56.96
161 TestMultiControlPlane/serial/NodeLabels 0.07
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.53
163 TestMultiControlPlane/serial/CopyFile 12.99
165 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.47
167 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.39
169 TestMultiControlPlane/serial/DeleteSecondaryNode 16.92
170 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.37
172 TestMultiControlPlane/serial/RestartCluster 334.14
173 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.38
174 TestMultiControlPlane/serial/AddSecondaryNode 78.45
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.53
179 TestJSONOutput/start/Command 87.14
180 TestJSONOutput/start/Audit 0
182 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/pause/Command 0.75
186 TestJSONOutput/pause/Audit 0
188 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/unpause/Command 0.64
192 TestJSONOutput/unpause/Audit 0
194 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/stop/Command 7.36
198 TestJSONOutput/stop/Audit 0
200 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
202 TestErrorJSONOutput 0.2
207 TestMainNoArgs 0.05
208 TestMinikubeProfile 84.3
211 TestMountStart/serial/StartWithMountFirst 24.46
212 TestMountStart/serial/VerifyMountFirst 0.37
213 TestMountStart/serial/StartWithMountSecond 31.5
214 TestMountStart/serial/VerifyMountSecond 0.37
215 TestMountStart/serial/DeleteFirst 0.69
216 TestMountStart/serial/VerifyMountPostDelete 0.54
217 TestMountStart/serial/Stop 1.46
218 TestMountStart/serial/RestartStopped 24.03
219 TestMountStart/serial/VerifyMountPostStop 0.37
222 TestMultiNode/serial/FreshStart2Nodes 114.99
223 TestMultiNode/serial/DeployApp2Nodes 5.2
224 TestMultiNode/serial/PingHostFrom2Pods 0.79
225 TestMultiNode/serial/AddNode 51.26
226 TestMultiNode/serial/MultiNodeLabels 0.06
227 TestMultiNode/serial/ProfileList 0.22
228 TestMultiNode/serial/CopyFile 7.22
229 TestMultiNode/serial/StopNode 2.29
230 TestMultiNode/serial/StartAfterStop 39.66
232 TestMultiNode/serial/DeleteNode 2.34
234 TestMultiNode/serial/RestartMultiNode 180.64
235 TestMultiNode/serial/ValidateNameConflict 41.36
242 TestScheduledStopUnix 115.94
246 TestRunningBinaryUpgrade 159.49
254 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
255 TestNoKubernetes/serial/StartWithK8s 96.03
260 TestNetworkPlugins/group/false 2.95
264 TestNoKubernetes/serial/StartWithStopK8s 69.64
265 TestNoKubernetes/serial/Start 46.19
266 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
267 TestNoKubernetes/serial/ProfileList 0.96
268 TestNoKubernetes/serial/Stop 1.28
269 TestNoKubernetes/serial/StartNoArgs 21.92
270 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
271 TestStoppedBinaryUpgrade/Setup 2.63
272 TestStoppedBinaryUpgrade/Upgrade 120.16
281 TestPause/serial/Start 89.09
282 TestStoppedBinaryUpgrade/MinikubeLogs 0.91
283 TestNetworkPlugins/group/auto/Start 59.93
284 TestNetworkPlugins/group/kindnet/Start 87.85
285 TestNetworkPlugins/group/auto/KubeletFlags 0.23
286 TestNetworkPlugins/group/auto/NetCatPod 10.44
287 TestPause/serial/SecondStartNoReconfiguration 48.58
288 TestNetworkPlugins/group/auto/DNS 0.17
289 TestNetworkPlugins/group/auto/Localhost 0.13
290 TestNetworkPlugins/group/auto/HairPin 0.13
291 TestNetworkPlugins/group/calico/Start 86.41
292 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
293 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
294 TestNetworkPlugins/group/kindnet/NetCatPod 11.24
295 TestNetworkPlugins/group/kindnet/DNS 0.16
296 TestNetworkPlugins/group/kindnet/Localhost 0.13
297 TestNetworkPlugins/group/kindnet/HairPin 0.13
298 TestPause/serial/Pause 0.8
299 TestPause/serial/VerifyStatus 0.25
300 TestPause/serial/Unpause 0.74
301 TestPause/serial/PauseAgain 0.87
302 TestPause/serial/DeletePaused 0.86
303 TestPause/serial/VerifyDeletedResources 0.56
304 TestNetworkPlugins/group/custom-flannel/Start 81.72
305 TestNetworkPlugins/group/flannel/Start 97.38
306 TestNetworkPlugins/group/bridge/Start 72.47
307 TestNetworkPlugins/group/calico/ControllerPod 6.01
308 TestNetworkPlugins/group/calico/KubeletFlags 0.26
309 TestNetworkPlugins/group/calico/NetCatPod 14.32
310 TestNetworkPlugins/group/calico/DNS 0.2
311 TestNetworkPlugins/group/calico/Localhost 0.17
312 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
313 TestNetworkPlugins/group/calico/HairPin 0.14
314 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.3
315 TestNetworkPlugins/group/custom-flannel/DNS 0.24
316 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
317 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
318 TestNetworkPlugins/group/enable-default-cni/Start 93.57
319 TestNetworkPlugins/group/flannel/ControllerPod 6.01
322 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
323 TestNetworkPlugins/group/flannel/NetCatPod 11.22
324 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
325 TestNetworkPlugins/group/bridge/NetCatPod 9.22
326 TestNetworkPlugins/group/flannel/DNS 0.18
327 TestNetworkPlugins/group/flannel/Localhost 0.19
328 TestNetworkPlugins/group/flannel/HairPin 0.13
329 TestNetworkPlugins/group/bridge/DNS 26.38
331 TestStartStop/group/no-preload/serial/FirstStart 115.63
332 TestNetworkPlugins/group/bridge/Localhost 0.15
333 TestNetworkPlugins/group/bridge/HairPin 0.12
335 TestStartStop/group/embed-certs/serial/FirstStart 106.4
336 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
337 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.26
338 TestNetworkPlugins/group/enable-default-cni/DNS 0.23
339 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
340 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
342 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 54.02
343 TestStartStop/group/no-preload/serial/DeployApp 10.27
344 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.99
346 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.28
347 TestStartStop/group/embed-certs/serial/DeployApp 9.29
348 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.01
350 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.98
355 TestStartStop/group/no-preload/serial/SecondStart 649.05
358 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 598.65
359 TestStartStop/group/embed-certs/serial/SecondStart 601.35
360 TestStartStop/group/old-k8s-version/serial/Stop 1.36
361 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
372 TestStartStop/group/newest-cni/serial/FirstStart 51.39
373 TestStartStop/group/newest-cni/serial/DeployApp 0
374 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.11
375 TestStartStop/group/newest-cni/serial/Stop 11.32
376 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
377 TestStartStop/group/newest-cni/serial/SecondStart 36.61
378 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
379 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
380 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
381 TestStartStop/group/newest-cni/serial/Pause 2.37
TestDownloadOnly/v1.20.0/json-events (25.34s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-817469 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-817469 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (25.335729127s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (25.34s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-817469
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-817469: exit status 85 (61.108397ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-817469 | jenkins | v1.33.1 | 19 Aug 24 17:44 UTC |          |
	|         | -p download-only-817469        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 17:44:12
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 17:44:12.877947  380021 out.go:345] Setting OutFile to fd 1 ...
	I0819 17:44:12.878214  380021 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:44:12.878224  380021 out.go:358] Setting ErrFile to fd 2...
	I0819 17:44:12.878229  380021 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:44:12.878456  380021 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-372744/.minikube/bin
	W0819 17:44:12.878677  380021 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19468-372744/.minikube/config/config.json: open /home/jenkins/minikube-integration/19468-372744/.minikube/config/config.json: no such file or directory
	I0819 17:44:12.879311  380021 out.go:352] Setting JSON to true
	I0819 17:44:12.880326  380021 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":5196,"bootTime":1724084257,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 17:44:12.880389  380021 start.go:139] virtualization: kvm guest
	I0819 17:44:12.882792  380021 out.go:97] [download-only-817469] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 17:44:12.882940  380021 notify.go:220] Checking for updates...
	W0819 17:44:12.883017  380021 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball: no such file or directory
	I0819 17:44:12.884156  380021 out.go:169] MINIKUBE_LOCATION=19468
	I0819 17:44:12.885640  380021 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 17:44:12.886906  380021 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 17:44:12.888115  380021 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 17:44:12.889567  380021 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0819 17:44:12.891729  380021 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0819 17:44:12.892022  380021 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 17:44:12.924018  380021 out.go:97] Using the kvm2 driver based on user configuration
	I0819 17:44:12.924047  380021 start.go:297] selected driver: kvm2
	I0819 17:44:12.924062  380021 start.go:901] validating driver "kvm2" against <nil>
	I0819 17:44:12.924388  380021 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 17:44:12.924462  380021 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19468-372744/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 17:44:12.939973  380021 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 17:44:12.940035  380021 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 17:44:12.940553  380021 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0819 17:44:12.940732  380021 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 17:44:12.940772  380021 cni.go:84] Creating CNI manager for ""
	I0819 17:44:12.940784  380021 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 17:44:12.940798  380021 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 17:44:12.940864  380021 start.go:340] cluster config:
	{Name:download-only-817469 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-817469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 17:44:12.941074  380021 iso.go:125] acquiring lock: {Name:mk4c0ac1c3202b1a296739df622960e7a0bd8566 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 17:44:12.942845  380021 out.go:97] Downloading VM boot image ...
	I0819 17:44:12.942903  380021 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19468-372744/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 17:44:23.390766  380021 out.go:97] Starting "download-only-817469" primary control-plane node in "download-only-817469" cluster
	I0819 17:44:23.390813  380021 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 17:44:23.500960  380021 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0819 17:44:23.500998  380021 cache.go:56] Caching tarball of preloaded images
	I0819 17:44:23.501163  380021 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 17:44:23.502842  380021 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0819 17:44:23.502858  380021 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0819 17:44:23.613864  380021 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0819 17:44:36.444585  380021 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0819 17:44:36.444710  380021 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0819 17:44:37.354274  380021 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0819 17:44:37.354668  380021 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/download-only-817469/config.json ...
	I0819 17:44:37.354728  380021 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/download-only-817469/config.json: {Name:mk518df7d48ad7213ee55682fd58501beb0ec913 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:44:37.354940  380021 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 17:44:37.355135  380021 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19468-372744/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-817469 host does not exist
	  To start a cluster, run: "minikube start -p download-only-817469"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-817469
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.0/json-events (13.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-891667 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-891667 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (13.290379182s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (13.29s)

                                                
                                    
TestDownloadOnly/v1.31.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-891667
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-891667: exit status 85 (59.811787ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-817469 | jenkins | v1.33.1 | 19 Aug 24 17:44 UTC |                     |
	|         | -p download-only-817469        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 19 Aug 24 17:44 UTC | 19 Aug 24 17:44 UTC |
	| delete  | -p download-only-817469        | download-only-817469 | jenkins | v1.33.1 | 19 Aug 24 17:44 UTC | 19 Aug 24 17:44 UTC |
	| start   | -o=json --download-only        | download-only-891667 | jenkins | v1.33.1 | 19 Aug 24 17:44 UTC |                     |
	|         | -p download-only-891667        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 17:44:38
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 17:44:38.539032  380277 out.go:345] Setting OutFile to fd 1 ...
	I0819 17:44:38.539279  380277 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:44:38.539287  380277 out.go:358] Setting ErrFile to fd 2...
	I0819 17:44:38.539291  380277 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:44:38.539457  380277 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-372744/.minikube/bin
	I0819 17:44:38.540051  380277 out.go:352] Setting JSON to true
	I0819 17:44:38.540971  380277 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":5222,"bootTime":1724084257,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 17:44:38.541046  380277 start.go:139] virtualization: kvm guest
	I0819 17:44:38.543210  380277 out.go:97] [download-only-891667] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 17:44:38.543406  380277 notify.go:220] Checking for updates...
	I0819 17:44:38.544678  380277 out.go:169] MINIKUBE_LOCATION=19468
	I0819 17:44:38.546182  380277 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 17:44:38.547694  380277 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 17:44:38.548891  380277 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 17:44:38.550179  380277 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0819 17:44:38.552519  380277 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0819 17:44:38.552704  380277 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 17:44:38.584697  380277 out.go:97] Using the kvm2 driver based on user configuration
	I0819 17:44:38.584735  380277 start.go:297] selected driver: kvm2
	I0819 17:44:38.584747  380277 start.go:901] validating driver "kvm2" against <nil>
	I0819 17:44:38.585149  380277 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 17:44:38.585240  380277 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19468-372744/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 17:44:38.600143  380277 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 17:44:38.600207  380277 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 17:44:38.600688  380277 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0819 17:44:38.600858  380277 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 17:44:38.600941  380277 cni.go:84] Creating CNI manager for ""
	I0819 17:44:38.600955  380277 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 17:44:38.600962  380277 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 17:44:38.601036  380277 start.go:340] cluster config:
	{Name:download-only-891667 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-891667 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 17:44:38.601132  380277 iso.go:125] acquiring lock: {Name:mk4c0ac1c3202b1a296739df622960e7a0bd8566 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 17:44:38.602759  380277 out.go:97] Starting "download-only-891667" primary control-plane node in "download-only-891667" cluster
	I0819 17:44:38.602773  380277 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 17:44:39.197380  380277 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 17:44:39.197426  380277 cache.go:56] Caching tarball of preloaded images
	I0819 17:44:39.197623  380277 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 17:44:39.199434  380277 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0819 17:44:39.199464  380277 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 ...
	I0819 17:44:39.313183  380277 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:4a2ae163f7665ceaa95dee8ffc8efdba -> /home/jenkins/minikube-integration/19468-372744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-891667 host does not exist
	  To start a cluster, run: "minikube start -p download-only-891667"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.31.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-891667
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestBinaryMirror (0.6s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-807766 --alsologtostderr --binary-mirror http://127.0.0.1:38687 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-807766" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-807766
--- PASS: TestBinaryMirror (0.60s)

                                                
                                    
TestOffline (87.9s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-235067 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-235067 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m26.644626827s)
helpers_test.go:175: Cleaning up "offline-crio-235067" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-235067
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-235067: (1.254967654s)
--- PASS: TestOffline (87.90s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-347256
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-347256: exit status 85 (51.998465ms)

                                                
                                                
-- stdout --
	* Profile "addons-347256" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-347256"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-347256
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-347256: exit status 85 (51.40278ms)

                                                
                                                
-- stdout --
	* Profile "addons-347256" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-347256"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (136.71s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-347256 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-347256 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m16.706564107s)
--- PASS: TestAddons/Setup (136.71s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.16s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-347256 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-347256 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.16s)

                                                
                                    
TestAddons/parallel/Registry (16.85s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 3.351423ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-szv4z" [9388e4e2-9cbc-4408-8be6-ec9be4b5737f] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004409486s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-9q2l4" [73b6c461-1963-4b13-bb12-e75024c4c5d7] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003699566s
addons_test.go:342: (dbg) Run:  kubectl --context addons-347256 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-347256 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-347256 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.055346401s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-347256 ip
2024/08/19 17:47:45 [DEBUG] GET http://192.168.39.18:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-347256 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.85s)

                                                
                                    
TestAddons/parallel/InspektorGadget (11.04s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-2flk5" [721c3159-4a85-4b14-b2cf-8ed0d7f4de74] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005312693s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-347256
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-347256: (6.03143141s)
--- PASS: TestAddons/parallel/InspektorGadget (11.04s)

                                                
                                    
TestAddons/parallel/HelmTiller (11.04s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.415519ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-bqbr9" [801ad1ee-bac9-4f5e-9d38-655f7fbf1779] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.004426999s
addons_test.go:475: (dbg) Run:  kubectl --context addons-347256 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-347256 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.437762892s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-347256 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.04s)

                                                
                                    
TestAddons/parallel/CSI (80.72s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 7.874515ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-347256 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-347256 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [7aab78c6-8450-4c0f-a4f9-a6a70d3f2628] Pending
helpers_test.go:344: "task-pv-pod" [7aab78c6-8450-4c0f-a4f9-a6a70d3f2628] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [7aab78c6-8450-4c0f-a4f9-a6a70d3f2628] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 17.004143118s
addons_test.go:590: (dbg) Run:  kubectl --context addons-347256 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-347256 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-347256 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-347256 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-347256 delete pod task-pv-pod: (1.125111064s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-347256 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-347256 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-347256 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [feee7776-e2f8-42e5-a538-51383409c5f8] Pending
helpers_test.go:344: "task-pv-pod-restore" [feee7776-e2f8-42e5-a538-51383409c5f8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [feee7776-e2f8-42e5-a538-51383409c5f8] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.004235763s
addons_test.go:632: (dbg) Run:  kubectl --context addons-347256 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-347256 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-347256 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-347256 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-347256 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.709953412s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-347256 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (80.72s)

                                                
                                    
TestAddons/parallel/Headlamp (19.03s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-347256 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-pr7qq" [701e3431-68b8-41e5-81bd-75ec9b7d64b3] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-pr7qq" [701e3431-68b8-41e5-81bd-75ec9b7d64b3] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-pr7qq" [701e3431-68b8-41e5-81bd-75ec9b7d64b3] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.205349715s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-347256 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-347256 addons disable headlamp --alsologtostderr -v=1: (5.86203954s)
--- PASS: TestAddons/parallel/Headlamp (19.03s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.59s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-c4bc9b5f8-sphfv" [47f895ef-f849-4028-afa5-5ed765629ba6] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004667822s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-347256
--- PASS: TestAddons/parallel/CloudSpanner (6.59s)

TestAddons/parallel/LocalPath (12.27s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-347256 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-347256 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-347256 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [4fdb4db0-54aa-4dec-bbbe-1117983164f1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [4fdb4db0-54aa-4dec-bbbe-1117983164f1] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [4fdb4db0-54aa-4dec-bbbe-1117983164f1] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003960682s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-347256 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-347256 ssh "cat /opt/local-path-provisioner/pvc-94a0ff27-15d3-467a-86db-027973dec176_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-347256 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-347256 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-347256 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (12.27s)

TestAddons/parallel/NvidiaDevicePlugin (5.55s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-x924x" [b28534d9-e3b6-474a-90ca-04048cd59d85] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.013980244s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-347256
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.55s)

TestAddons/parallel/Yakd (12.11s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-xcd7g" [625eb5c1-573a-4727-baf0-311c050adb55] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.005220371s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-347256 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-347256 addons disable yakd --alsologtostderr -v=1: (6.107250487s)
--- PASS: TestAddons/parallel/Yakd (12.11s)

TestCertOptions (56.27s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-297834 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-297834 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (54.837383179s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-297834 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-297834 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-297834 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-297834" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-297834
--- PASS: TestCertOptions (56.27s)
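A manual reproduction of this check is straightforward. The sketch below assumes the binary under test (out/minikube-linux-amd64 in the log) is on PATH as minikube and that a kvm2/CRI-O environment is available; the profile name is arbitrary:

  # start a cluster whose apiserver certificate must include extra SANs and a non-default port
  minikube start -p cert-options --memory=2048 \
    --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
    --apiserver-names=localhost --apiserver-names=www.google.com \
    --apiserver-port=8555 --driver=kvm2 --container-runtime=crio
  # confirm the generated certificate carries the requested IPs and names
  minikube -p cert-options ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
  # confirm kubeconfig and the in-VM admin.conf both point at port 8555
  kubectl --context cert-options config view
  minikube ssh -p cert-options -- "sudo cat /etc/kubernetes/admin.conf"
  minikube delete -p cert-options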

                                                
                                    
TestCertExpiration (319.52s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-005082 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-005082 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m46.783356094s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-005082 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-005082 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (31.725826085s)
helpers_test.go:175: Cleaning up "cert-expiration-005082" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-005082
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-005082: (1.013208447s)
--- PASS: TestCertExpiration (319.52s)
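The flow above generates cluster certificates with a deliberately short 3-minute lifetime, lets them lapse, and then verifies that a second start with a longer --cert-expiration still brings the cluster up, regenerating the certificates. A rough manual equivalent, with an arbitrary profile name:

  minikube start -p cert-expiration --memory=2048 --cert-expiration=3m --driver=kvm2 --container-runtime=crio
  # wait a few minutes for the 3m certificates to expire
  minikube start -p cert-expiration --memory=2048 --cert-expiration=8760h --driver=kvm2 --container-runtime=crio
  minikube delete -p cert-expiration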

                                                
                                    
TestForceSystemdFlag (77.35s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-448594 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-448594 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m16.150879044s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-448594 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-448594" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-448594
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-448594: (1.001478156s)
--- PASS: TestForceSystemdFlag (77.35s)
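The point of this test is that --force-systemd reaches the container runtime's configuration; with CRI-O the test inspects the drop-in file written by minikube. A minimal sketch (profile name arbitrary; the expectation that the drop-in selects the systemd cgroup manager is an inference from the flag's purpose, not shown in the log):

  minikube start -p force-systemd --memory=2048 --force-systemd --driver=kvm2 --container-runtime=crio
  minikube -p force-systemd ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
  minikube delete -p force-systemd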

                                                
                                    
TestForceSystemdEnv (72.43s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-376529 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-376529 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m11.435941734s)
helpers_test.go:175: Cleaning up "force-systemd-env-376529" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-376529
--- PASS: TestForceSystemdEnv (72.43s)

TestKVMDriverInstallOrUpdate (5.11s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (5.11s)

TestErrorSpam/setup (45.19s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-744800 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-744800 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-744800 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-744800 --driver=kvm2  --container-runtime=crio: (45.185322334s)
--- PASS: TestErrorSpam/setup (45.19s)

TestErrorSpam/start (0.35s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-744800 --log_dir /tmp/nospam-744800 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-744800 --log_dir /tmp/nospam-744800 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-744800 --log_dir /tmp/nospam-744800 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

TestErrorSpam/status (0.74s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-744800 --log_dir /tmp/nospam-744800 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-744800 --log_dir /tmp/nospam-744800 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-744800 --log_dir /tmp/nospam-744800 status
--- PASS: TestErrorSpam/status (0.74s)

TestErrorSpam/pause (1.57s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-744800 --log_dir /tmp/nospam-744800 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-744800 --log_dir /tmp/nospam-744800 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-744800 --log_dir /tmp/nospam-744800 pause
--- PASS: TestErrorSpam/pause (1.57s)

TestErrorSpam/unpause (1.78s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-744800 --log_dir /tmp/nospam-744800 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-744800 --log_dir /tmp/nospam-744800 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-744800 --log_dir /tmp/nospam-744800 unpause
--- PASS: TestErrorSpam/unpause (1.78s)

TestErrorSpam/stop (5.71s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-744800 --log_dir /tmp/nospam-744800 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-744800 --log_dir /tmp/nospam-744800 stop: (2.284235975s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-744800 --log_dir /tmp/nospam-744800 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-744800 --log_dir /tmp/nospam-744800 stop: (1.981238492s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-744800 --log_dir /tmp/nospam-744800 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-744800 --log_dir /tmp/nospam-744800 stop: (1.441797804s)
--- PASS: TestErrorSpam/stop (5.71s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19468-372744/.minikube/files/etc/test/nested/copy/380009/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (85.76s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-499773 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0819 17:57:10.115147  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.crt: no such file or directory" logger="UnhandledError"
E0819 17:57:10.122502  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.crt: no such file or directory" logger="UnhandledError"
E0819 17:57:10.133935  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.crt: no such file or directory" logger="UnhandledError"
E0819 17:57:10.155441  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.crt: no such file or directory" logger="UnhandledError"
E0819 17:57:10.196882  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.crt: no such file or directory" logger="UnhandledError"
E0819 17:57:10.278315  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.crt: no such file or directory" logger="UnhandledError"
E0819 17:57:10.439838  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.crt: no such file or directory" logger="UnhandledError"
E0819 17:57:10.761562  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.crt: no such file or directory" logger="UnhandledError"
E0819 17:57:11.403732  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.crt: no such file or directory" logger="UnhandledError"
E0819 17:57:12.685253  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.crt: no such file or directory" logger="UnhandledError"
E0819 17:57:15.248545  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.crt: no such file or directory" logger="UnhandledError"
E0819 17:57:20.370287  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.crt: no such file or directory" logger="UnhandledError"
E0819 17:57:30.612614  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.crt: no such file or directory" logger="UnhandledError"
E0819 17:57:51.094323  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-499773 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m25.762826894s)
--- PASS: TestFunctional/serial/StartWithProxy (85.76s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (33.28s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-499773 --alsologtostderr -v=8
E0819 17:58:32.056042  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-499773 --alsologtostderr -v=8: (33.280111615s)
functional_test.go:663: soft start took 33.280719237s for "functional-499773" cluster.
--- PASS: TestFunctional/serial/SoftStart (33.28s)
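A soft start is simply rerunning minikube start against a profile whose VM is already up, so nothing is re-provisioned and the ~33 s above is mostly waiting for components to report ready. Sketch, assuming the profile from StartWithProxy is still running (profile name arbitrary):

  # initial provisioning (from the earlier test)
  minikube start -p functional --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 --container-runtime=crio
  # soft start: same profile, already running, so the existing VM is reused
  minikube start -p functional --alsologtostderr -v=8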

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-499773 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.28s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-499773 cache add registry.k8s.io/pause:3.1: (1.029087745s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-499773 cache add registry.k8s.io/pause:3.3: (1.179071879s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-499773 cache add registry.k8s.io/pause:latest: (1.07505427s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.28s)

TestFunctional/serial/CacheCmd/cache/add_local (2.27s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-499773 /tmp/TestFunctionalserialCacheCmdcacheadd_local4266868663/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 cache add minikube-local-cache-test:functional-499773
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-499773 cache add minikube-local-cache-test:functional-499773: (1.945956449s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 cache delete minikube-local-cache-test:functional-499773
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-499773
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.27s)
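The local-cache flow builds a throwaway image with the host's Docker, adds it to minikube's image cache, and then removes it again from both places. A rough sketch (image tag, profile name, and build context are arbitrary; the Dockerfile used by the test is not shown in the log):

  docker build -t local-cache-test:demo ./build-context
  minikube -p functional cache add local-cache-test:demo
  minikube -p functional cache delete local-cache-test:demo
  docker rmi local-cache-test:demo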

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.7s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-499773 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (211.32481ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-499773 cache reload: (1.027741448s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.70s)
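The reload check removes a cached image from inside the node, confirms the runtime no longer knows it, then uses cache reload to push the cached images back in. The same flow by hand (profile name arbitrary):

  minikube -p functional ssh sudo crictl rmi registry.k8s.io/pause:latest
  minikube -p functional ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: image gone
  minikube -p functional cache reload
  minikube -p functional ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again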

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 kubectl -- --context functional-499773 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-499773 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (85.48s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-499773 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0819 17:59:53.977477  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-499773 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m25.481743296s)
functional_test.go:761: restart took 1m25.481913924s for "functional-499773" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (85.48s)
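ExtraConfig restarts the existing profile with a per-component override passed via --extra-config, then waits for the whole cluster to come back (--wait=all), which is why it takes about as long as the initial start. The invocation, taken from the log (profile name arbitrary):

  minikube start -p functional --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all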

                                                
                                    
TestFunctional/serial/LogsCmd (1.36s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-499773 logs: (1.357664028s)
--- PASS: TestFunctional/serial/LogsCmd (1.36s)

TestFunctional/serial/LogsFileCmd (1.39s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 logs --file /tmp/TestFunctionalserialLogsFileCmd2662709583/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-499773 logs --file /tmp/TestFunctionalserialLogsFileCmd2662709583/001/logs.txt: (1.385877217s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.39s)

TestFunctional/serial/InvalidService (4.31s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-499773 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-499773
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-499773: exit status 115 (279.777733ms)
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.36:30754 |
	|-----------|-------------|-------------|----------------------------|
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-499773 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.31s)

TestFunctional/parallel/ConfigCmd (0.34s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-499773 config get cpus: exit status 14 (51.379126ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-499773 config get cpus: exit status 14 (58.42591ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.34s)
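The config round-trip above is: unset a key, confirm that get exits with status 14 when the key is absent, set it, read it back, then unset and confirm the error again. Sketch (profile name arbitrary):

  minikube -p functional config unset cpus
  minikube -p functional config get cpus    # exit 14: "specified key could not be found in config"
  minikube -p functional config set cpus 2
  minikube -p functional config get cpus    # prints 2
  minikube -p functional config unset cpus
  minikube -p functional config get cpus    # exit 14 again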

                                                
                                    
TestFunctional/parallel/DashboardCmd (14.54s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-499773 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-499773 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 389937: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.54s)

TestFunctional/parallel/DryRun (0.27s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-499773 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-499773 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (140.237806ms)
-- stdout --
	* [functional-499773] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19468
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19468-372744/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-372744/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
-- /stdout --
** stderr ** 
	I0819 18:00:50.790385  389655 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:00:50.790527  389655 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:00:50.790538  389655 out.go:358] Setting ErrFile to fd 2...
	I0819 18:00:50.790544  389655 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:00:50.790757  389655 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-372744/.minikube/bin
	I0819 18:00:50.791324  389655 out.go:352] Setting JSON to false
	I0819 18:00:50.792323  389655 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":6194,"bootTime":1724084257,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 18:00:50.792386  389655 start.go:139] virtualization: kvm guest
	I0819 18:00:50.794532  389655 out.go:177] * [functional-499773] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 18:00:50.796269  389655 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 18:00:50.796303  389655 notify.go:220] Checking for updates...
	I0819 18:00:50.799250  389655 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 18:00:50.800689  389655 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 18:00:50.802204  389655 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 18:00:50.803698  389655 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 18:00:50.805009  389655 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 18:00:50.806725  389655 config.go:182] Loaded profile config "functional-499773": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:00:50.807228  389655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:00:50.807292  389655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:00:50.823522  389655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43009
	I0819 18:00:50.824031  389655 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:00:50.824649  389655 main.go:141] libmachine: Using API Version  1
	I0819 18:00:50.824667  389655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:00:50.825043  389655 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:00:50.825251  389655 main.go:141] libmachine: (functional-499773) Calling .DriverName
	I0819 18:00:50.825534  389655 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 18:00:50.825825  389655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:00:50.825871  389655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:00:50.841346  389655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44903
	I0819 18:00:50.841843  389655 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:00:50.842387  389655 main.go:141] libmachine: Using API Version  1
	I0819 18:00:50.842409  389655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:00:50.842738  389655 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:00:50.842963  389655 main.go:141] libmachine: (functional-499773) Calling .DriverName
	I0819 18:00:50.876609  389655 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 18:00:50.877891  389655 start.go:297] selected driver: kvm2
	I0819 18:00:50.877913  389655 start.go:901] validating driver "kvm2" against &{Name:functional-499773 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:functional-499773 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.36 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:00:50.878068  389655 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 18:00:50.880458  389655 out.go:201] 
	W0819 18:00:50.881603  389655 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0819 18:00:50.882846  389655 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-499773 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.27s)
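DryRun validates start arguments without modifying the running cluster: an undersized memory request is rejected with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23, minimum 1800MB per the output above), while a dry run without the bad override exits cleanly. Both commands come from the log (profile name arbitrary):

  minikube start -p functional --dry-run --memory 250MB --alsologtostderr --driver=kvm2 --container-runtime=crio   # rejected
  minikube start -p functional --dry-run --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio             # accepted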

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.14s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-499773 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-499773 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (141.394705ms)
-- stdout --
	* [functional-499773] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19468
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19468-372744/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-372744/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
-- /stdout --
** stderr ** 
	I0819 18:00:49.054794  389302 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:00:49.055249  389302 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:00:49.055342  389302 out.go:358] Setting ErrFile to fd 2...
	I0819 18:00:49.055363  389302 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:00:49.055704  389302 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-372744/.minikube/bin
	I0819 18:00:49.056267  389302 out.go:352] Setting JSON to false
	I0819 18:00:49.057217  389302 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":6192,"bootTime":1724084257,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 18:00:49.057282  389302 start.go:139] virtualization: kvm guest
	I0819 18:00:49.059630  389302 out.go:177] * [functional-499773] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0819 18:00:49.061205  389302 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 18:00:49.061214  389302 notify.go:220] Checking for updates...
	I0819 18:00:49.062642  389302 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 18:00:49.064050  389302 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 18:00:49.065436  389302 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 18:00:49.066840  389302 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 18:00:49.068283  389302 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 18:00:49.070036  389302 config.go:182] Loaded profile config "functional-499773": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:00:49.070488  389302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:00:49.070554  389302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:00:49.089320  389302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37799
	I0819 18:00:49.089728  389302 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:00:49.090280  389302 main.go:141] libmachine: Using API Version  1
	I0819 18:00:49.090303  389302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:00:49.090690  389302 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:00:49.090894  389302 main.go:141] libmachine: (functional-499773) Calling .DriverName
	I0819 18:00:49.091199  389302 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 18:00:49.091621  389302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:00:49.091689  389302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:00:49.107010  389302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41337
	I0819 18:00:49.107501  389302 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:00:49.108061  389302 main.go:141] libmachine: Using API Version  1
	I0819 18:00:49.108082  389302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:00:49.108421  389302 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:00:49.108633  389302 main.go:141] libmachine: (functional-499773) Calling .DriverName
	I0819 18:00:49.141731  389302 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0819 18:00:49.143208  389302 start.go:297] selected driver: kvm2
	I0819 18:00:49.143231  389302 start.go:901] validating driver "kvm2" against &{Name:functional-499773 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:functional-499773 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.36 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:00:49.143342  389302 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 18:00:49.145613  389302 out.go:201] 
	W0819 18:00:49.147119  389302 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0819 18:00:49.148527  389302 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

TestFunctional/parallel/StatusCmd (0.99s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.99s)

TestFunctional/parallel/ServiceCmdConnect (23.53s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-499773 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-499773 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-g6z57" [9259afe9-8dc6-4df0-99cf-326e086f3223] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-g6z57" [9259afe9-8dc6-4df0-99cf-326e086f3223] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 23.004416185s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.36:31848
functional_test.go:1675: http://192.168.39.36:31848: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-g6z57

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.36:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.36:31848
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (23.53s)
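
The flow above (create a deployment, expose it as a NodePort, resolve the node URL, curl it) can be reproduced by hand. A minimal sketch using the image and names from the log; the kubectl wait step is an added convenience, not part of the test:

# deploy the echo server and expose it on a NodePort
kubectl --context functional-499773 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
kubectl --context functional-499773 expose deployment hello-node-connect --type=NodePort --port=8080

# wait for the pod, then ask minikube for the reachable URL
kubectl --context functional-499773 wait --for=condition=ready pod -l app=hello-node-connect --timeout=120s
URL=$(minikube -p functional-499773 service hello-node-connect --url)

# echoserver answers with the request metadata captured in the log above
curl -s "$URL"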

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (47.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [223744f1-1b75-4b9d-9955-b089f7da38e1] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004114244s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-499773 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-499773 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-499773 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-499773 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-499773 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-499773 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-499773 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [7cfdbc32-4fb4-4f04-8a6a-bde8439cd12a] Pending
helpers_test.go:344: "sp-pod" [7cfdbc32-4fb4-4f04-8a6a-bde8439cd12a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [7cfdbc32-4fb4-4f04-8a6a-bde8439cd12a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.004199777s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-499773 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-499773 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-499773 delete -f testdata/storage-provisioner/pod.yaml: (1.786605962s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-499773 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a7e63c48-da0c-4676-9dc8-8fcbf74364df] Pending
helpers_test.go:344: "sp-pod" [a7e63c48-da0c-4676-9dc8-8fcbf74364df] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a7e63c48-da0c-4676-9dc8-8fcbf74364df] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.00453339s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-499773 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (47.35s)
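
The persistence check writes a file through one pod, deletes that pod, and reads the file back from a fresh pod bound to the same claim. A minimal sketch of that flow with inline manifests; the claim size, image, and names are illustrative rather than the exact testdata contents:

# claim against the default storage class
kubectl --context functional-499773 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
EOF

# pod that mounts the claim at /tmp/mount
cat > sp-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: sp-pod
spec:
  containers:
  - name: myfrontend
    image: docker.io/library/nginx
    volumeMounts:
    - name: mypd
      mountPath: /tmp/mount
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim
EOF

kubectl --context functional-499773 apply -f sp-pod.yaml
kubectl --context functional-499773 wait --for=condition=ready pod/sp-pod --timeout=180s
kubectl --context functional-499773 exec sp-pod -- touch /tmp/mount/foo

# recreate the pod; the file must still be on the claim
kubectl --context functional-499773 delete -f sp-pod.yaml
kubectl --context functional-499773 apply -f sp-pod.yaml
kubectl --context functional-499773 wait --for=condition=ready pod/sp-pod --timeout=180s
kubectl --context functional-499773 exec sp-pod -- ls /tmp/mount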

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 ssh -n functional-499773 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 cp functional-499773:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd194973677/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 ssh -n functional-499773 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 ssh -n functional-499773 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.31s)
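
minikube cp copies in both directions and creates missing target directories on the node, which is what the three invocations above exercise. The same steps by hand, with any local file standing in for testdata/cp-test.txt:

# host -> node
minikube -p functional-499773 cp testdata/cp-test.txt /home/docker/cp-test.txt
minikube -p functional-499773 ssh "sudo cat /home/docker/cp-test.txt"

# node -> host; the source is prefixed with the node (profile) name
minikube -p functional-499773 cp functional-499773:/home/docker/cp-test.txt /tmp/cp-test.txt

# host -> node into a directory that does not exist yet
minikube -p functional-499773 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt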

                                                
                                    
x
+
TestFunctional/parallel/MySQL (24.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-499773 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-8vdd5" [9667ac96-aadf-4611-92b6-fd7fac858a28] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-8vdd5" [9667ac96-aadf-4611-92b6-fd7fac858a28] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.015915675s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-499773 exec mysql-6cdb49bbb-8vdd5 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-499773 exec mysql-6cdb49bbb-8vdd5 -- mysql -ppassword -e "show databases;": exit status 1 (202.492695ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-499773 exec mysql-6cdb49bbb-8vdd5 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-499773 exec mysql-6cdb49bbb-8vdd5 -- mysql -ppassword -e "show databases;": exit status 1 (148.661575ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-499773 exec mysql-6cdb49bbb-8vdd5 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-499773 exec mysql-6cdb49bbb-8vdd5 -- mysql -ppassword -e "show databases;": exit status 1 (141.394967ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-499773 exec mysql-6cdb49bbb-8vdd5 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.85s)
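
The failed exec attempts above are expected: the pod reports Running before mysqld finishes the mysql image's first-boot initialization (temporary server, then a restart), so early queries fail with 1045/2002 errors and the test simply retries. A minimal retry sketch, assuming the deployment label and root password from testdata/mysql.yaml:

# poll until mysqld inside the pod accepts the query, up to ~60s
POD=$(kubectl --context functional-499773 get pods -l app=mysql -o jsonpath='{.items[0].metadata.name}')
for i in $(seq 1 30); do
  kubectl --context functional-499773 exec "$POD" -- mysql -ppassword -e "show databases;" && break
  sleep 2
done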

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/380009/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 ssh "sudo cat /etc/test/nested/copy/380009/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.20s)
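
File sync copies anything placed under the minikube home's files/ tree into the node at the same absolute path, which is what the check above verifies. A minimal sketch, assuming the default MINIKUBE_HOME of ~/.minikube; the nested path below is illustrative (the test derives its own from the process PID):

# anything under $MINIKUBE_HOME/files/ is mirrored into the node
mkdir -p ~/.minikube/files/etc/test/nested/copy/example
echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/example/hosts

# after (re)starting the profile, the file appears at the same path in the VM
minikube -p functional-499773 ssh "sudo cat /etc/test/nested/copy/example/hosts"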

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/380009.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 ssh "sudo cat /etc/ssl/certs/380009.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/380009.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 ssh "sudo cat /usr/share/ca-certificates/380009.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3800092.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 ssh "sudo cat /etc/ssl/certs/3800092.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/3800092.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 ssh "sudo cat /usr/share/ca-certificates/3800092.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.29s)
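
Cert sync publishes user certificates (normally dropped under ~/.minikube/certs) into the node at both /etc/ssl/certs and /usr/share/ca-certificates, plus an OpenSSL subject-hash entry; those are the three locations probed above. A minimal check over this run's filenames (380009 is the test's PID-derived name):

# each location should hold the synced certificate
for f in /etc/ssl/certs/380009.pem /usr/share/ca-certificates/380009.pem /etc/ssl/certs/51391683.0; do
  minikube -p functional-499773 ssh "sudo cat $f" > /dev/null && echo "present: $f"
done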

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-499773 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
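
The label check prints every label key on the first node via a Go template. The same query as a standalone one-liner; expect standard keys such as kubernetes.io/hostname in the output:

kubectl --context functional-499773 get nodes --output=go-template \
  --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'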

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-499773 ssh "sudo systemctl is-active docker": exit status 1 (228.402482ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-499773 ssh "sudo systemctl is-active containerd": exit status 1 (211.74607ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)
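
The non-zero exits above are the point of the test: with crio as the configured runtime, docker and containerd must be inactive, and systemctl is-active exits 3 for an inactive unit (minikube ssh then surfaces a non-zero exit of its own). A minimal sketch of the same probes:

# only the configured runtime should report active
minikube -p functional-499773 ssh "sudo systemctl is-active crio"        # expected: active
minikube -p functional-499773 ssh "sudo systemctl is-active docker"      # expected: inactive, exit 3 inside the node
minikube -p functional-499773 ssh "sudo systemctl is-active containerd"  # expected: inactive, exit 3 inside the node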

                                                
                                    
x
+
TestFunctional/parallel/License (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.63s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.61s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-499773 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-499773
localhost/kicbase/echo-server:functional-499773
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240730-75a5af0c
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-499773 image ls --format short --alsologtostderr:
I0819 18:00:53.075338  389993 out.go:345] Setting OutFile to fd 1 ...
I0819 18:00:53.075613  389993 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 18:00:53.075623  389993 out.go:358] Setting ErrFile to fd 2...
I0819 18:00:53.075627  389993 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 18:00:53.075820  389993 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-372744/.minikube/bin
I0819 18:00:53.076376  389993 config.go:182] Loaded profile config "functional-499773": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 18:00:53.076469  389993 config.go:182] Loaded profile config "functional-499773": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 18:00:53.076866  389993 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0819 18:00:53.076912  389993 main.go:141] libmachine: Launching plugin server for driver kvm2
I0819 18:00:53.092076  389993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41141
I0819 18:00:53.092594  389993 main.go:141] libmachine: () Calling .GetVersion
I0819 18:00:53.093159  389993 main.go:141] libmachine: Using API Version  1
I0819 18:00:53.093190  389993 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 18:00:53.093520  389993 main.go:141] libmachine: () Calling .GetMachineName
I0819 18:00:53.093740  389993 main.go:141] libmachine: (functional-499773) Calling .GetState
I0819 18:00:53.095736  389993 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0819 18:00:53.095772  389993 main.go:141] libmachine: Launching plugin server for driver kvm2
I0819 18:00:53.110713  389993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35145
I0819 18:00:53.111102  389993 main.go:141] libmachine: () Calling .GetVersion
I0819 18:00:53.111688  389993 main.go:141] libmachine: Using API Version  1
I0819 18:00:53.111730  389993 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 18:00:53.112074  389993 main.go:141] libmachine: () Calling .GetMachineName
I0819 18:00:53.112297  389993 main.go:141] libmachine: (functional-499773) Calling .DriverName
I0819 18:00:53.112528  389993 ssh_runner.go:195] Run: systemctl --version
I0819 18:00:53.112559  389993 main.go:141] libmachine: (functional-499773) Calling .GetSSHHostname
I0819 18:00:53.115449  389993 main.go:141] libmachine: (functional-499773) DBG | domain functional-499773 has defined MAC address 52:54:00:16:1d:03 in network mk-functional-499773
I0819 18:00:53.115888  389993 main.go:141] libmachine: (functional-499773) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:1d:03", ip: ""} in network mk-functional-499773: {Iface:virbr1 ExpiryTime:2024-08-19 18:56:56 +0000 UTC Type:0 Mac:52:54:00:16:1d:03 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:functional-499773 Clientid:01:52:54:00:16:1d:03}
I0819 18:00:53.115918  389993 main.go:141] libmachine: (functional-499773) DBG | domain functional-499773 has defined IP address 192.168.39.36 and MAC address 52:54:00:16:1d:03 in network mk-functional-499773
I0819 18:00:53.116066  389993 main.go:141] libmachine: (functional-499773) Calling .GetSSHPort
I0819 18:00:53.116242  389993 main.go:141] libmachine: (functional-499773) Calling .GetSSHKeyPath
I0819 18:00:53.116382  389993 main.go:141] libmachine: (functional-499773) Calling .GetSSHUsername
I0819 18:00:53.116543  389993 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/functional-499773/id_rsa Username:docker}
I0819 18:00:53.221667  389993 ssh_runner.go:195] Run: sudo crictl images --output json
I0819 18:00:53.266677  389993 main.go:141] libmachine: Making call to close driver server
I0819 18:00:53.266694  389993 main.go:141] libmachine: (functional-499773) Calling .Close
I0819 18:00:53.267045  389993 main.go:141] libmachine: (functional-499773) DBG | Closing plugin on server side
I0819 18:00:53.267106  389993 main.go:141] libmachine: Successfully made call to close driver server
I0819 18:00:53.267131  389993 main.go:141] libmachine: Making call to close connection to plugin binary
I0819 18:00:53.267150  389993 main.go:141] libmachine: Making call to close driver server
I0819 18:00:53.267163  389993 main.go:141] libmachine: (functional-499773) Calling .Close
I0819 18:00:53.267519  389993 main.go:141] libmachine: (functional-499773) DBG | Closing plugin on server side
I0819 18:00:53.267542  389993 main.go:141] libmachine: Successfully made call to close driver server
I0819 18:00:53.267557  389993 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)
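
This and the following three tests list the node's images in each supported rendering. The four invocations side by side, against the same running profile:

# identical listing, four output formats
minikube -p functional-499773 image ls --format short
minikube -p functional-499773 image ls --format table
minikube -p functional-499773 image ls --format json
minikube -p functional-499773 image ls --format yaml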

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-499773 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/my-image                      | functional-499773  | 66d6cc801f3cb | 1.47MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| docker.io/kindest/kindnetd              | v20240730-75a5af0c | 917d7814b9b5b | 87.2MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| registry.k8s.io/kube-scheduler          | v1.31.0            | 1766f54c897f0 | 68.4MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/kicbase/echo-server           | functional-499773  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/kube-apiserver          | v1.31.0            | 604f5db92eaa8 | 95.2MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/nginx                 | latest             | 5ef79149e0ec8 | 192MB  |
| registry.k8s.io/kube-controller-manager | v1.31.0            | 045733566833c | 89.4MB |
| registry.k8s.io/kube-proxy              | v1.31.0            | ad83b2ca7b09e | 92.7MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| localhost/minikube-local-cache-test     | functional-499773  | 7b09d4757f3d7 | 3.33kB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-499773 image ls --format table --alsologtostderr:
I0819 18:00:57.408519  390213 out.go:345] Setting OutFile to fd 1 ...
I0819 18:00:57.408869  390213 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 18:00:57.408897  390213 out.go:358] Setting ErrFile to fd 2...
I0819 18:00:57.408913  390213 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 18:00:57.409212  390213 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-372744/.minikube/bin
I0819 18:00:57.410098  390213 config.go:182] Loaded profile config "functional-499773": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 18:00:57.410298  390213 config.go:182] Loaded profile config "functional-499773": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 18:00:57.410930  390213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0819 18:00:57.411022  390213 main.go:141] libmachine: Launching plugin server for driver kvm2
I0819 18:00:57.427761  390213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40981
I0819 18:00:57.428410  390213 main.go:141] libmachine: () Calling .GetVersion
I0819 18:00:57.429184  390213 main.go:141] libmachine: Using API Version  1
I0819 18:00:57.429244  390213 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 18:00:57.429678  390213 main.go:141] libmachine: () Calling .GetMachineName
I0819 18:00:57.429904  390213 main.go:141] libmachine: (functional-499773) Calling .GetState
I0819 18:00:57.432131  390213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0819 18:00:57.432214  390213 main.go:141] libmachine: Launching plugin server for driver kvm2
I0819 18:00:57.448493  390213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41545
I0819 18:00:57.449028  390213 main.go:141] libmachine: () Calling .GetVersion
I0819 18:00:57.449493  390213 main.go:141] libmachine: Using API Version  1
I0819 18:00:57.449512  390213 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 18:00:57.449807  390213 main.go:141] libmachine: () Calling .GetMachineName
I0819 18:00:57.450009  390213 main.go:141] libmachine: (functional-499773) Calling .DriverName
I0819 18:00:57.450230  390213 ssh_runner.go:195] Run: systemctl --version
I0819 18:00:57.450256  390213 main.go:141] libmachine: (functional-499773) Calling .GetSSHHostname
I0819 18:00:57.453529  390213 main.go:141] libmachine: (functional-499773) DBG | domain functional-499773 has defined MAC address 52:54:00:16:1d:03 in network mk-functional-499773
I0819 18:00:57.453932  390213 main.go:141] libmachine: (functional-499773) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:1d:03", ip: ""} in network mk-functional-499773: {Iface:virbr1 ExpiryTime:2024-08-19 18:56:56 +0000 UTC Type:0 Mac:52:54:00:16:1d:03 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:functional-499773 Clientid:01:52:54:00:16:1d:03}
I0819 18:00:57.453960  390213 main.go:141] libmachine: (functional-499773) DBG | domain functional-499773 has defined IP address 192.168.39.36 and MAC address 52:54:00:16:1d:03 in network mk-functional-499773
I0819 18:00:57.454120  390213 main.go:141] libmachine: (functional-499773) Calling .GetSSHPort
I0819 18:00:57.454268  390213 main.go:141] libmachine: (functional-499773) Calling .GetSSHKeyPath
I0819 18:00:57.454391  390213 main.go:141] libmachine: (functional-499773) Calling .GetSSHUsername
I0819 18:00:57.454517  390213 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/functional-499773/id_rsa Username:docker}
I0819 18:00:57.547245  390213 ssh_runner.go:195] Run: sudo crictl images --output json
I0819 18:00:57.640452  390213 main.go:141] libmachine: Making call to close driver server
I0819 18:00:57.640482  390213 main.go:141] libmachine: (functional-499773) Calling .Close
I0819 18:00:57.640784  390213 main.go:141] libmachine: Successfully made call to close driver server
I0819 18:00:57.640801  390213 main.go:141] libmachine: Making call to close connection to plugin binary
I0819 18:00:57.640812  390213 main.go:141] libmachine: Making call to close driver server
I0819 18:00:57.640821  390213 main.go:141] libmachine: (functional-499773) Calling .Close
I0819 18:00:57.641075  390213 main.go:141] libmachine: Successfully made call to close driver server
I0819 18:00:57.641091  390213 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-499773 image ls --format json --alsologtostderr:
[{"id":"7b09d4757f3d7ed4c77e38f909a5fbb3d7dae9fe3d5a89eb006daf86015b82a5","repoDigests":["localhost/minikube-local-cache-test@sha256:b8ccef15f737a73dc4e8f3c18a1fae6ca441fe7fd90fc71e7a3cd6cbf40ae9f8"],"repoTags":["localhost/minikube-local-cache-test:functional-499773"],"size":"3330"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3","repoDigests":["registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf","registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"95233506"},{"id":"ad83b2ca7
b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494","repoDigests":["registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf","registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"92728217"},{"id":"f9cf6115445c62f0840d0ab60b549de148925af9c44a290266fa31d8bc7c4b8a","repoDigests":["docker.io/library/8d77f3851516c28ce756e0b3cce621b64628f1129c05df67efac2b11041bfe37-tmp@sha256:e2a392306611e949c7f70c22385a26ed8c367b251cde917549166ea108d25576"],"repoTags":[],"size":"1466018"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"9056ab77afb8e18e04303f11000a9d
31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-499773"],"size":"4943877"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557","repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3","docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"],"repoTags":["docker.io/kindest/kindnetd:v20240730-75a5af0c"],"size":"87165492"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe
47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d","registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"89437512"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e73
05ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c","repoDigests":["docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add","docker.io/library/nginx@sha256:5f0574409b3add89581b96c68afe9e9c7b284651c3a974b6e8bac46bf95e6b7f"],"repoTags":["docker.io/library/nginx:latest"],"size":"191841612"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha
256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"66d6cc801f3cb5c54add6c20c2d9583190aaff3388d0374ab3bfb169d4d0bb90","repoDigests":["localhost/my-image@sha256:e82460da5360bd3606fbc5a45e032142745361bb9db7d92720b00a346462f214"],"repoTags":["localhost/my-image:functional-499773"],"size":"1468599"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f
0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94","repoDigests":["registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a","registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"68420936"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-499773 image ls --format json --alsologtostderr:
I0819 18:00:57.090492  390159 out.go:345] Setting OutFile to fd 1 ...
I0819 18:00:57.090747  390159 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 18:00:57.090756  390159 out.go:358] Setting ErrFile to fd 2...
I0819 18:00:57.090761  390159 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 18:00:57.090942  390159 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-372744/.minikube/bin
I0819 18:00:57.091574  390159 config.go:182] Loaded profile config "functional-499773": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 18:00:57.091696  390159 config.go:182] Loaded profile config "functional-499773": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 18:00:57.092114  390159 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0819 18:00:57.092156  390159 main.go:141] libmachine: Launching plugin server for driver kvm2
I0819 18:00:57.107701  390159 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42081
I0819 18:00:57.108326  390159 main.go:141] libmachine: () Calling .GetVersion
I0819 18:00:57.109088  390159 main.go:141] libmachine: Using API Version  1
I0819 18:00:57.109113  390159 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 18:00:57.109520  390159 main.go:141] libmachine: () Calling .GetMachineName
I0819 18:00:57.109781  390159 main.go:141] libmachine: (functional-499773) Calling .GetState
I0819 18:00:57.112043  390159 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0819 18:00:57.112091  390159 main.go:141] libmachine: Launching plugin server for driver kvm2
I0819 18:00:57.130546  390159 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38517
I0819 18:00:57.131089  390159 main.go:141] libmachine: () Calling .GetVersion
I0819 18:00:57.131630  390159 main.go:141] libmachine: Using API Version  1
I0819 18:00:57.131655  390159 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 18:00:57.132033  390159 main.go:141] libmachine: () Calling .GetMachineName
I0819 18:00:57.132224  390159 main.go:141] libmachine: (functional-499773) Calling .DriverName
I0819 18:00:57.132528  390159 ssh_runner.go:195] Run: systemctl --version
I0819 18:00:57.132554  390159 main.go:141] libmachine: (functional-499773) Calling .GetSSHHostname
I0819 18:00:57.135469  390159 main.go:141] libmachine: (functional-499773) DBG | domain functional-499773 has defined MAC address 52:54:00:16:1d:03 in network mk-functional-499773
I0819 18:00:57.135895  390159 main.go:141] libmachine: (functional-499773) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:1d:03", ip: ""} in network mk-functional-499773: {Iface:virbr1 ExpiryTime:2024-08-19 18:56:56 +0000 UTC Type:0 Mac:52:54:00:16:1d:03 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:functional-499773 Clientid:01:52:54:00:16:1d:03}
I0819 18:00:57.135925  390159 main.go:141] libmachine: (functional-499773) DBG | domain functional-499773 has defined IP address 192.168.39.36 and MAC address 52:54:00:16:1d:03 in network mk-functional-499773
I0819 18:00:57.136087  390159 main.go:141] libmachine: (functional-499773) Calling .GetSSHPort
I0819 18:00:57.136267  390159 main.go:141] libmachine: (functional-499773) Calling .GetSSHKeyPath
I0819 18:00:57.136428  390159 main.go:141] libmachine: (functional-499773) Calling .GetSSHUsername
I0819 18:00:57.136550  390159 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/functional-499773/id_rsa Username:docker}
I0819 18:00:57.256533  390159 ssh_runner.go:195] Run: sudo crictl images --output json
I0819 18:00:57.350328  390159 main.go:141] libmachine: Making call to close driver server
I0819 18:00:57.350346  390159 main.go:141] libmachine: (functional-499773) Calling .Close
I0819 18:00:57.350706  390159 main.go:141] libmachine: Successfully made call to close driver server
I0819 18:00:57.350730  390159 main.go:141] libmachine: Making call to close connection to plugin binary
I0819 18:00:57.350740  390159 main.go:141] libmachine: Making call to close driver server
I0819 18:00:57.350749  390159 main.go:141] libmachine: (functional-499773) Calling .Close
I0819 18:00:57.350775  390159 main.go:141] libmachine: (functional-499773) DBG | Closing plugin on server side
I0819 18:00:57.351111  390159 main.go:141] libmachine: (functional-499773) DBG | Closing plugin on server side
I0819 18:00:57.351133  390159 main.go:141] libmachine: Successfully made call to close driver server
I0819 18:00:57.351151  390159 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-499773 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a
- registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "68420936"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
- docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "87165492"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-499773
size: "4943877"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf
- registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "95233506"
- id: 045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d
- registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "89437512"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 7b09d4757f3d7ed4c77e38f909a5fbb3d7dae9fe3d5a89eb006daf86015b82a5
repoDigests:
- localhost/minikube-local-cache-test@sha256:b8ccef15f737a73dc4e8f3c18a1fae6ca441fe7fd90fc71e7a3cd6cbf40ae9f8
repoTags:
- localhost/minikube-local-cache-test:functional-499773
size: "3330"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf
- registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "92728217"
- id: 5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c
repoDigests:
- docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add
- docker.io/library/nginx@sha256:5f0574409b3add89581b96c68afe9e9c7b284651c3a974b6e8bac46bf95e6b7f
repoTags:
- docker.io/library/nginx:latest
size: "191841612"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-499773 image ls --format yaml --alsologtostderr:
I0819 18:00:53.315341  390017 out.go:345] Setting OutFile to fd 1 ...
I0819 18:00:53.315459  390017 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 18:00:53.315467  390017 out.go:358] Setting ErrFile to fd 2...
I0819 18:00:53.315472  390017 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 18:00:53.316096  390017 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-372744/.minikube/bin
I0819 18:00:53.317403  390017 config.go:182] Loaded profile config "functional-499773": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 18:00:53.317529  390017 config.go:182] Loaded profile config "functional-499773": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 18:00:53.317894  390017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0819 18:00:53.317938  390017 main.go:141] libmachine: Launching plugin server for driver kvm2
I0819 18:00:53.333322  390017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40201
I0819 18:00:53.333803  390017 main.go:141] libmachine: () Calling .GetVersion
I0819 18:00:53.334415  390017 main.go:141] libmachine: Using API Version  1
I0819 18:00:53.334439  390017 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 18:00:53.334838  390017 main.go:141] libmachine: () Calling .GetMachineName
I0819 18:00:53.335089  390017 main.go:141] libmachine: (functional-499773) Calling .GetState
I0819 18:00:53.336794  390017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0819 18:00:53.336838  390017 main.go:141] libmachine: Launching plugin server for driver kvm2
I0819 18:00:53.353248  390017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45299
I0819 18:00:53.353631  390017 main.go:141] libmachine: () Calling .GetVersion
I0819 18:00:53.354093  390017 main.go:141] libmachine: Using API Version  1
I0819 18:00:53.354117  390017 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 18:00:53.354445  390017 main.go:141] libmachine: () Calling .GetMachineName
I0819 18:00:53.354678  390017 main.go:141] libmachine: (functional-499773) Calling .DriverName
I0819 18:00:53.354902  390017 ssh_runner.go:195] Run: systemctl --version
I0819 18:00:53.354931  390017 main.go:141] libmachine: (functional-499773) Calling .GetSSHHostname
I0819 18:00:53.357896  390017 main.go:141] libmachine: (functional-499773) DBG | domain functional-499773 has defined MAC address 52:54:00:16:1d:03 in network mk-functional-499773
I0819 18:00:53.358309  390017 main.go:141] libmachine: (functional-499773) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:1d:03", ip: ""} in network mk-functional-499773: {Iface:virbr1 ExpiryTime:2024-08-19 18:56:56 +0000 UTC Type:0 Mac:52:54:00:16:1d:03 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:functional-499773 Clientid:01:52:54:00:16:1d:03}
I0819 18:00:53.358338  390017 main.go:141] libmachine: (functional-499773) DBG | domain functional-499773 has defined IP address 192.168.39.36 and MAC address 52:54:00:16:1d:03 in network mk-functional-499773
I0819 18:00:53.358538  390017 main.go:141] libmachine: (functional-499773) Calling .GetSSHPort
I0819 18:00:53.358721  390017 main.go:141] libmachine: (functional-499773) Calling .GetSSHKeyPath
I0819 18:00:53.358877  390017 main.go:141] libmachine: (functional-499773) Calling .GetSSHUsername
I0819 18:00:53.359075  390017 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/functional-499773/id_rsa Username:docker}
I0819 18:00:53.438735  390017 ssh_runner.go:195] Run: sudo crictl images --output json
I0819 18:00:53.485949  390017 main.go:141] libmachine: Making call to close driver server
I0819 18:00:53.485966  390017 main.go:141] libmachine: (functional-499773) Calling .Close
I0819 18:00:53.486304  390017 main.go:141] libmachine: (functional-499773) DBG | Closing plugin on server side
I0819 18:00:53.486294  390017 main.go:141] libmachine: Successfully made call to close driver server
I0819 18:00:53.486348  390017 main.go:141] libmachine: Making call to close connection to plugin binary
I0819 18:00:53.486359  390017 main.go:141] libmachine: Making call to close driver server
I0819 18:00:53.486365  390017 main.go:141] libmachine: (functional-499773) Calling .Close
I0819 18:00:53.486566  390017 main.go:141] libmachine: Successfully made call to close driver server
I0819 18:00:53.486583  390017 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.56s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-499773 ssh pgrep buildkitd: exit status 1 (187.274861ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 image build -t localhost/my-image:functional-499773 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-499773 image build -t localhost/my-image:functional-499773 testdata/build --alsologtostderr: (3.089068107s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-499773 image build -t localhost/my-image:functional-499773 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> f9cf6115445
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-499773
--> 66d6cc801f3
Successfully tagged localhost/my-image:functional-499773
66d6cc801f3cb5c54add6c20c2d9583190aaff3388d0374ab3bfb169d4d0bb90
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-499773 image build -t localhost/my-image:functional-499773 testdata/build --alsologtostderr:
I0819 18:00:53.722658  390070 out.go:345] Setting OutFile to fd 1 ...
I0819 18:00:53.722775  390070 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 18:00:53.722784  390070 out.go:358] Setting ErrFile to fd 2...
I0819 18:00:53.722788  390070 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 18:00:53.723000  390070 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-372744/.minikube/bin
I0819 18:00:53.723514  390070 config.go:182] Loaded profile config "functional-499773": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 18:00:53.724146  390070 config.go:182] Loaded profile config "functional-499773": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 18:00:53.724530  390070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0819 18:00:53.724586  390070 main.go:141] libmachine: Launching plugin server for driver kvm2
I0819 18:00:53.741177  390070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35737
I0819 18:00:53.741677  390070 main.go:141] libmachine: () Calling .GetVersion
I0819 18:00:53.742342  390070 main.go:141] libmachine: Using API Version  1
I0819 18:00:53.742370  390070 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 18:00:53.742774  390070 main.go:141] libmachine: () Calling .GetMachineName
I0819 18:00:53.742991  390070 main.go:141] libmachine: (functional-499773) Calling .GetState
I0819 18:00:53.744787  390070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0819 18:00:53.744828  390070 main.go:141] libmachine: Launching plugin server for driver kvm2
I0819 18:00:53.760795  390070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46829
I0819 18:00:53.761205  390070 main.go:141] libmachine: () Calling .GetVersion
I0819 18:00:53.761684  390070 main.go:141] libmachine: Using API Version  1
I0819 18:00:53.761706  390070 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 18:00:53.762014  390070 main.go:141] libmachine: () Calling .GetMachineName
I0819 18:00:53.762209  390070 main.go:141] libmachine: (functional-499773) Calling .DriverName
I0819 18:00:53.762416  390070 ssh_runner.go:195] Run: systemctl --version
I0819 18:00:53.762446  390070 main.go:141] libmachine: (functional-499773) Calling .GetSSHHostname
I0819 18:00:53.765386  390070 main.go:141] libmachine: (functional-499773) DBG | domain functional-499773 has defined MAC address 52:54:00:16:1d:03 in network mk-functional-499773
I0819 18:00:53.765815  390070 main.go:141] libmachine: (functional-499773) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:1d:03", ip: ""} in network mk-functional-499773: {Iface:virbr1 ExpiryTime:2024-08-19 18:56:56 +0000 UTC Type:0 Mac:52:54:00:16:1d:03 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:functional-499773 Clientid:01:52:54:00:16:1d:03}
I0819 18:00:53.765846  390070 main.go:141] libmachine: (functional-499773) DBG | domain functional-499773 has defined IP address 192.168.39.36 and MAC address 52:54:00:16:1d:03 in network mk-functional-499773
I0819 18:00:53.765985  390070 main.go:141] libmachine: (functional-499773) Calling .GetSSHPort
I0819 18:00:53.766157  390070 main.go:141] libmachine: (functional-499773) Calling .GetSSHKeyPath
I0819 18:00:53.766307  390070 main.go:141] libmachine: (functional-499773) Calling .GetSSHUsername
I0819 18:00:53.766477  390070 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/functional-499773/id_rsa Username:docker}
I0819 18:00:53.860369  390070 build_images.go:161] Building image from path: /tmp/build.2998525229.tar
I0819 18:00:53.860455  390070 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0819 18:00:53.881173  390070 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2998525229.tar
I0819 18:00:53.888775  390070 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2998525229.tar: stat -c "%s %y" /var/lib/minikube/build/build.2998525229.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2998525229.tar': No such file or directory
I0819 18:00:53.888811  390070 ssh_runner.go:362] scp /tmp/build.2998525229.tar --> /var/lib/minikube/build/build.2998525229.tar (3072 bytes)
I0819 18:00:53.926697  390070 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2998525229
I0819 18:00:53.951442  390070 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2998525229 -xf /var/lib/minikube/build/build.2998525229.tar
I0819 18:00:53.965933  390070 crio.go:315] Building image: /var/lib/minikube/build/build.2998525229
I0819 18:00:53.966020  390070 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-499773 /var/lib/minikube/build/build.2998525229 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0819 18:00:56.709515  390070 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-499773 /var/lib/minikube/build/build.2998525229 --cgroup-manager=cgroupfs: (2.743460576s)
I0819 18:00:56.709598  390070 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2998525229
I0819 18:00:56.736888  390070 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2998525229.tar
I0819 18:00:56.760807  390070 build_images.go:217] Built localhost/my-image:functional-499773 from /tmp/build.2998525229.tar
I0819 18:00:56.760866  390070 build_images.go:133] succeeded building to: functional-499773
I0819 18:00:56.760873  390070 build_images.go:134] failed building to: 
I0819 18:00:56.760904  390070 main.go:141] libmachine: Making call to close driver server
I0819 18:00:56.760918  390070 main.go:141] libmachine: (functional-499773) Calling .Close
I0819 18:00:56.761292  390070 main.go:141] libmachine: Successfully made call to close driver server
I0819 18:00:56.761302  390070 main.go:141] libmachine: (functional-499773) DBG | Closing plugin on server side
I0819 18:00:56.761314  390070 main.go:141] libmachine: Making call to close connection to plugin binary
I0819 18:00:56.761330  390070 main.go:141] libmachine: Making call to close driver server
I0819 18:00:56.761339  390070 main.go:141] libmachine: (functional-499773) Calling .Close
I0819 18:00:56.761595  390070 main.go:141] libmachine: Successfully made call to close driver server
I0819 18:00:56.761618  390070 main.go:141] libmachine: Making call to close connection to plugin binary
I0819 18:00:56.761618  390070 main.go:141] libmachine: (functional-499773) DBG | Closing plugin on server side
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.56s)

TestFunctional/parallel/ImageCommands/Setup (1.94s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.91712491s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-499773
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.94s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.3s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.30s)

TestFunctional/parallel/ProfileCmd/profile_list (0.28s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "226.446612ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "52.606532ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.28s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.55s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 image load --daemon kicbase/echo-server:functional-499773 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-499773 image load --daemon kicbase/echo-server:functional-499773 --alsologtostderr: (1.322676525s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.55s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "230.384637ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "50.662832ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.17s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 image load --daemon kicbase/echo-server:functional-499773 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.17s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-499773
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 image load --daemon kicbase/echo-server:functional-499773 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-amd64 -p functional-499773 image load --daemon kicbase/echo-server:functional-499773 --alsologtostderr: (4.040500466s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.22s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (4.07s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 image save kicbase/echo-server:functional-499773 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-499773 image save kicbase/echo-server:functional-499773 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (4.067825171s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (4.07s)

TestFunctional/parallel/ImageCommands/ImageRemove (1.09s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 image rm kicbase/echo-server:functional-499773 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.09s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.4s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-linux-amd64 -p functional-499773 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (3.158347437s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.40s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-499773
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 image save --daemon kicbase/echo-server:functional-499773 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-499773
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.31s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-499773 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-499773 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-jvmzk" [146bfe8c-478d-4123-af61-87b7041de69b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-jvmzk" [146bfe8c-478d-4123-af61-87b7041de69b] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004592942s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.31s)

TestFunctional/parallel/MountCmd/any-port (8.74s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-499773 /tmp/TestFunctionalparallelMountCmdany-port2206821791/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1724090449154623089" to /tmp/TestFunctionalparallelMountCmdany-port2206821791/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1724090449154623089" to /tmp/TestFunctionalparallelMountCmdany-port2206821791/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1724090449154623089" to /tmp/TestFunctionalparallelMountCmdany-port2206821791/001/test-1724090449154623089
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-499773 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (218.889685ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 19 18:00 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 19 18:00 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 19 18:00 test-1724090449154623089
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 ssh cat /mount-9p/test-1724090449154623089
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-499773 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [f1f410a0-e51a-401f-9319-cb204c33687b] Pending
helpers_test.go:344: "busybox-mount" [f1f410a0-e51a-401f-9319-cb204c33687b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [f1f410a0-e51a-401f-9319-cb204c33687b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [f1f410a0-e51a-401f-9319-cb204c33687b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.004507278s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-499773 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-499773 /tmp/TestFunctionalparallelMountCmdany-port2206821791/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.74s)

TestFunctional/parallel/ServiceCmd/List (0.48s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.48s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 service list -o json
functional_test.go:1494: Took "458.146024ms" to run "out/minikube-linux-amd64 -p functional-499773 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.36:30244
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

TestFunctional/parallel/ServiceCmd/Format (0.31s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.31s)

TestFunctional/parallel/ServiceCmd/URL (0.31s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.36:30244
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.31s)

TestFunctional/parallel/MountCmd/specific-port (2.22s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-499773 /tmp/TestFunctionalparallelMountCmdspecific-port579494889/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-499773 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (252.523365ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-499773 /tmp/TestFunctionalparallelMountCmdspecific-port579494889/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-499773 ssh "sudo umount -f /mount-9p": exit status 1 (215.838189ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-499773 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-499773 /tmp/TestFunctionalparallelMountCmdspecific-port579494889/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.22s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.7s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-499773 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1731420611/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-499773 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1731420611/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-499773 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1731420611/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-499773 ssh "findmnt -T" /mount1: exit status 1 (285.789943ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-499773 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-499773 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-499773 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1731420611/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-499773 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1731420611/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-499773 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1731420611/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
2024/08/19 18:01:05 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.70s)

TestFunctional/delete_echo-server_images (0.03s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-499773
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-499773
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-499773
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (204.87s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-086149 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0819 18:02:10.114127  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:02:37.819841  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-086149 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m24.217654103s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (204.87s)

TestMultiControlPlane/serial/DeployApp (6.14s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-086149 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-086149 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-086149 -- rollout status deployment/busybox: (3.984260832s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-086149 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-086149 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-086149 -- exec busybox-7dff88458-7t5wq -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-086149 -- exec busybox-7dff88458-fd2dw -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-086149 -- exec busybox-7dff88458-vgcdh -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-086149 -- exec busybox-7dff88458-7t5wq -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-086149 -- exec busybox-7dff88458-fd2dw -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-086149 -- exec busybox-7dff88458-vgcdh -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-086149 -- exec busybox-7dff88458-7t5wq -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-086149 -- exec busybox-7dff88458-fd2dw -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-086149 -- exec busybox-7dff88458-vgcdh -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.14s)

TestMultiControlPlane/serial/PingHostFromPods (1.26s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-086149 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-086149 -- exec busybox-7dff88458-7t5wq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-086149 -- exec busybox-7dff88458-7t5wq -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-086149 -- exec busybox-7dff88458-fd2dw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-086149 -- exec busybox-7dff88458-fd2dw -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-086149 -- exec busybox-7dff88458-vgcdh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-086149 -- exec busybox-7dff88458-vgcdh -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.26s)

TestMultiControlPlane/serial/AddWorkerNode (56.96s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-086149 -v=7 --alsologtostderr
E0819 18:05:24.365418  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/functional-499773/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:05:24.371902  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/functional-499773/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:05:24.383433  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/functional-499773/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:05:24.404929  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/functional-499773/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:05:24.446382  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/functional-499773/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:05:24.527842  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/functional-499773/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:05:24.689459  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/functional-499773/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:05:25.011703  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/functional-499773/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:05:25.654030  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/functional-499773/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:05:26.936195  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/functional-499773/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:05:29.497892  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/functional-499773/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:05:34.619240  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/functional-499773/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-086149 -v=7 --alsologtostderr: (56.121611635s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (56.96s)

TestMultiControlPlane/serial/NodeLabels (0.07s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-086149 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.53s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.53s)

TestMultiControlPlane/serial/CopyFile (12.99s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 status --output json -v=7 --alsologtostderr
E0819 18:05:44.861621  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/functional-499773/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 cp testdata/cp-test.txt ha-086149:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 ssh -n ha-086149 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 cp ha-086149:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3465103634/001/cp-test_ha-086149.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 ssh -n ha-086149 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 cp ha-086149:/home/docker/cp-test.txt ha-086149-m02:/home/docker/cp-test_ha-086149_ha-086149-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 ssh -n ha-086149 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 ssh -n ha-086149-m02 "sudo cat /home/docker/cp-test_ha-086149_ha-086149-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 cp ha-086149:/home/docker/cp-test.txt ha-086149-m03:/home/docker/cp-test_ha-086149_ha-086149-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 ssh -n ha-086149 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 ssh -n ha-086149-m03 "sudo cat /home/docker/cp-test_ha-086149_ha-086149-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 cp ha-086149:/home/docker/cp-test.txt ha-086149-m04:/home/docker/cp-test_ha-086149_ha-086149-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 ssh -n ha-086149 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 ssh -n ha-086149-m04 "sudo cat /home/docker/cp-test_ha-086149_ha-086149-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 cp testdata/cp-test.txt ha-086149-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 ssh -n ha-086149-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 cp ha-086149-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3465103634/001/cp-test_ha-086149-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 ssh -n ha-086149-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 cp ha-086149-m02:/home/docker/cp-test.txt ha-086149:/home/docker/cp-test_ha-086149-m02_ha-086149.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 ssh -n ha-086149-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 ssh -n ha-086149 "sudo cat /home/docker/cp-test_ha-086149-m02_ha-086149.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 cp ha-086149-m02:/home/docker/cp-test.txt ha-086149-m03:/home/docker/cp-test_ha-086149-m02_ha-086149-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 ssh -n ha-086149-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 ssh -n ha-086149-m03 "sudo cat /home/docker/cp-test_ha-086149-m02_ha-086149-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 cp ha-086149-m02:/home/docker/cp-test.txt ha-086149-m04:/home/docker/cp-test_ha-086149-m02_ha-086149-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 ssh -n ha-086149-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 ssh -n ha-086149-m04 "sudo cat /home/docker/cp-test_ha-086149-m02_ha-086149-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 cp testdata/cp-test.txt ha-086149-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 ssh -n ha-086149-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 cp ha-086149-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3465103634/001/cp-test_ha-086149-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 ssh -n ha-086149-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 cp ha-086149-m03:/home/docker/cp-test.txt ha-086149:/home/docker/cp-test_ha-086149-m03_ha-086149.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 ssh -n ha-086149-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 ssh -n ha-086149 "sudo cat /home/docker/cp-test_ha-086149-m03_ha-086149.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 cp ha-086149-m03:/home/docker/cp-test.txt ha-086149-m02:/home/docker/cp-test_ha-086149-m03_ha-086149-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 ssh -n ha-086149-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 ssh -n ha-086149-m02 "sudo cat /home/docker/cp-test_ha-086149-m03_ha-086149-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 cp ha-086149-m03:/home/docker/cp-test.txt ha-086149-m04:/home/docker/cp-test_ha-086149-m03_ha-086149-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 ssh -n ha-086149-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 ssh -n ha-086149-m04 "sudo cat /home/docker/cp-test_ha-086149-m03_ha-086149-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 cp testdata/cp-test.txt ha-086149-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 ssh -n ha-086149-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 cp ha-086149-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3465103634/001/cp-test_ha-086149-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 ssh -n ha-086149-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 cp ha-086149-m04:/home/docker/cp-test.txt ha-086149:/home/docker/cp-test_ha-086149-m04_ha-086149.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 ssh -n ha-086149-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 ssh -n ha-086149 "sudo cat /home/docker/cp-test_ha-086149-m04_ha-086149.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 cp ha-086149-m04:/home/docker/cp-test.txt ha-086149-m02:/home/docker/cp-test_ha-086149-m04_ha-086149-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 ssh -n ha-086149-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 ssh -n ha-086149-m02 "sudo cat /home/docker/cp-test_ha-086149-m04_ha-086149-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 cp ha-086149-m04:/home/docker/cp-test.txt ha-086149-m03:/home/docker/cp-test_ha-086149-m04_ha-086149-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 ssh -n ha-086149-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 ssh -n ha-086149-m03 "sudo cat /home/docker/cp-test_ha-086149-m04_ha-086149-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.99s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.47s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.471577331s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.47s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)

TestMultiControlPlane/serial/DeleteSecondaryNode (16.92s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-086149 node delete m03 -v=7 --alsologtostderr: (16.158770524s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.92s)
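The go-template used above prints one line per node with the status of its Ready condition, which is how the test confirms the remaining nodes are still Ready after the delete. Run directly (context name taken from this run), it looks like:

    kubectl --context ha-086149 get nodes \
      -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
    # expected output: one "True" per remaining node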

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (334.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-086149 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0819 18:20:24.365600  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/functional-499773/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:21:47.432534  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/functional-499773/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:22:10.115031  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-086149 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m33.382872993s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (334.14s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (78.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-086149 --control-plane -v=7 --alsologtostderr
E0819 18:25:24.365592  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/functional-499773/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-086149 --control-plane -v=7 --alsologtostderr: (1m17.609451512s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-086149 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (78.45s)
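Adding a further control-plane member to an existing HA profile only needs the --control-plane flag on "node add"; a minimal sketch using the profile name from this run:

    out/minikube-linux-amd64 node add -p ha-086149 --control-plane
    out/minikube-linux-amd64 -p ha-086149 status    # the new node should be listed as a control-plane member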

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

                                                
                                    
x
+
TestJSONOutput/start/Command (87.14s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-439260 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0819 18:27:10.115793  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-439260 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m27.140374179s)
--- PASS: TestJSONOutput/start/Command (87.14s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.75s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-439260 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.64s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-439260 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.36s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-439260 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-439260 --output=json --user=testUser: (7.35513726s)
--- PASS: TestJSONOutput/stop/Command (7.36s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-420485 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-420485 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (62.601694ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"fec0280a-b760-41e3-8825-fa7b88c9acff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-420485] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d19a1a05-16f1-4fb6-99ee-97b0a871daac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19468"}}
	{"specversion":"1.0","id":"76d3de63-bebf-4cd4-9c9c-fcf982240681","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f2a52c18-7e07-4889-b47e-4a17792ba9d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19468-372744/kubeconfig"}}
	{"specversion":"1.0","id":"6d452d39-36e5-42ab-916a-ce1b385488ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-372744/.minikube"}}
	{"specversion":"1.0","id":"673c2f50-bafb-4b6c-9128-2e8a361eeaf1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"0752921d-ef38-4a7c-b76c-522bf8f1a220","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f3093ec9-d73c-491c-97be-1b7936bc6f73","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-420485" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-420485
--- PASS: TestErrorJSONOutput (0.20s)
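The --output=json events above follow the CloudEvents layout visible in the captured stdout (specversion, type, data.message, data.currentstep, ...). A small sketch of filtering those events with jq, assuming that layout; the profile name is illustrative:

    out/minikube-linux-amd64 start -p json-demo --output=json --driver=kvm2 --container-runtime=crio \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | "\(.data.currentstep)/\(.data.totalsteps) \(.data.message)"'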

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (84.3s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-542057 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-542057 --driver=kvm2  --container-runtime=crio: (41.828403749s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-545061 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-545061 --driver=kvm2  --container-runtime=crio: (39.634677958s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-542057
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-545061
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-545061" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-545061
helpers_test.go:175: Cleaning up "first-542057" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-542057
--- PASS: TestMinikubeProfile (84.30s)
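The profile commands above switch which cluster minikube targets by default; a short sketch with the profile names from this run:

    out/minikube-linux-amd64 profile first-542057    # select first-542057 as the active profile
    out/minikube-linux-amd64 profile list -ojson     # the listing now reflects the selected profile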

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (24.46s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-898761 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-898761 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.457087556s)
--- PASS: TestMountStart/serial/StartWithMountFirst (24.46s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-898761 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-898761 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)
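The mount checks above confirm that a host directory is exposed inside the guest over 9p at /minikube-host. The same start-and-verify sequence, sketched with an illustrative profile name and the flags used in this run:

    out/minikube-linux-amd64 start -p mount-demo --memory=2048 --mount --mount-port 46464 --no-kubernetes --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p mount-demo ssh -- mount | grep 9p     # the 9p mount should be listed
    out/minikube-linux-amd64 -p mount-demo ssh -- ls /minikube-host   # contents of the mounted host directory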

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (31.5s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-936571 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-936571 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (30.503569964s)
--- PASS: TestMountStart/serial/StartWithMountSecond (31.50s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-936571 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-936571 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-898761 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.54s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-936571 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-936571 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.54s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.46s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-936571
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-936571: (1.460454313s)
--- PASS: TestMountStart/serial/Stop (1.46s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (24.03s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-936571
E0819 18:30:13.183198  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-936571: (23.02748968s)
--- PASS: TestMountStart/serial/RestartStopped (24.03s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-936571 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-936571 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (114.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-528433 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0819 18:30:24.365763  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/functional-499773/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:32:10.115104  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-528433 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m54.571727172s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-528433 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (114.99s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-528433 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-528433 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-528433 -- rollout status deployment/busybox: (3.682541285s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-528433 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-528433 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-528433 -- exec busybox-7dff88458-7rfnn -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-528433 -- exec busybox-7dff88458-psn5h -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-528433 -- exec busybox-7dff88458-7rfnn -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-528433 -- exec busybox-7dff88458-psn5h -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-528433 -- exec busybox-7dff88458-7rfnn -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-528433 -- exec busybox-7dff88458-psn5h -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.20s)
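The deployment check above schedules busybox replicas across both nodes and then verifies in-cluster DNS from each of them. The same flow with plain kubectl, as a sketch (the manifest path is the one from the test tree, and picking the first pod is only a placeholder for a proper selector):

    kubectl apply -f testdata/multinodes/multinode-pod-dns-test.yaml
    kubectl rollout status deployment/busybox
    POD=$(kubectl get pods -o jsonpath='{.items[0].metadata.name}')   # first busybox replica
    kubectl exec "$POD" -- nslookup kubernetes.default.svc.cluster.local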

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-528433 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-528433 -- exec busybox-7dff88458-7rfnn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-528433 -- exec busybox-7dff88458-7rfnn -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-528433 -- exec busybox-7dff88458-psn5h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-528433 -- exec busybox-7dff88458-psn5h -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.79s)
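Inside each pod, host.minikube.internal resolves to the host-side address of the minikube network (192.168.39.1 in this run); the awk/cut pipeline simply extracts that address from BusyBox's nslookup output (line 5, third field). A compact sketch, with $POD standing in for one of the busybox pod names:

    HOST_IP=$(kubectl exec "$POD" -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    kubectl exec "$POD" -- sh -c "ping -c 1 $HOST_IP"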

                                                
                                    
x
+
TestMultiNode/serial/AddNode (51.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-528433 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-528433 -v 3 --alsologtostderr: (50.694798243s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-528433 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (51.26s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-528433 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (7.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-528433 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-528433 cp testdata/cp-test.txt multinode-528433:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-528433 ssh -n multinode-528433 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-528433 cp multinode-528433:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1208495116/001/cp-test_multinode-528433.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-528433 ssh -n multinode-528433 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-528433 cp multinode-528433:/home/docker/cp-test.txt multinode-528433-m02:/home/docker/cp-test_multinode-528433_multinode-528433-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-528433 ssh -n multinode-528433 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-528433 ssh -n multinode-528433-m02 "sudo cat /home/docker/cp-test_multinode-528433_multinode-528433-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-528433 cp multinode-528433:/home/docker/cp-test.txt multinode-528433-m03:/home/docker/cp-test_multinode-528433_multinode-528433-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-528433 ssh -n multinode-528433 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-528433 ssh -n multinode-528433-m03 "sudo cat /home/docker/cp-test_multinode-528433_multinode-528433-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-528433 cp testdata/cp-test.txt multinode-528433-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-528433 ssh -n multinode-528433-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-528433 cp multinode-528433-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1208495116/001/cp-test_multinode-528433-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-528433 ssh -n multinode-528433-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-528433 cp multinode-528433-m02:/home/docker/cp-test.txt multinode-528433:/home/docker/cp-test_multinode-528433-m02_multinode-528433.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-528433 ssh -n multinode-528433-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-528433 ssh -n multinode-528433 "sudo cat /home/docker/cp-test_multinode-528433-m02_multinode-528433.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-528433 cp multinode-528433-m02:/home/docker/cp-test.txt multinode-528433-m03:/home/docker/cp-test_multinode-528433-m02_multinode-528433-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-528433 ssh -n multinode-528433-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-528433 ssh -n multinode-528433-m03 "sudo cat /home/docker/cp-test_multinode-528433-m02_multinode-528433-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-528433 cp testdata/cp-test.txt multinode-528433-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-528433 ssh -n multinode-528433-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-528433 cp multinode-528433-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1208495116/001/cp-test_multinode-528433-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-528433 ssh -n multinode-528433-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-528433 cp multinode-528433-m03:/home/docker/cp-test.txt multinode-528433:/home/docker/cp-test_multinode-528433-m03_multinode-528433.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-528433 ssh -n multinode-528433-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-528433 ssh -n multinode-528433 "sudo cat /home/docker/cp-test_multinode-528433-m03_multinode-528433.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-528433 cp multinode-528433-m03:/home/docker/cp-test.txt multinode-528433-m02:/home/docker/cp-test_multinode-528433-m03_multinode-528433-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-528433 ssh -n multinode-528433-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-528433 ssh -n multinode-528433-m02 "sudo cat /home/docker/cp-test_multinode-528433-m03_multinode-528433-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.22s)
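"minikube cp" accepts plain local paths as well as <node>:<path> references on either side, so the one subcommand covers host-to-node, node-to-host and node-to-node copies; the "ssh -n" runs are just how the test verifies the file landed. A short sketch with the profile and node names from this run:

    out/minikube-linux-amd64 -p multinode-528433 cp testdata/cp-test.txt multinode-528433-m02:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p multinode-528433 cp multinode-528433-m02:/home/docker/cp-test.txt /tmp/cp-test-copy.txt
    out/minikube-linux-amd64 -p multinode-528433 ssh -n multinode-528433-m02 "sudo cat /home/docker/cp-test.txt"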

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-528433 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-528433 node stop m03: (1.450010374s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-528433 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-528433 status: exit status 7 (422.743938ms)

                                                
                                                
-- stdout --
	multinode-528433
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-528433-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-528433-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-528433 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-528433 status --alsologtostderr: exit status 7 (413.361511ms)

                                                
                                                
-- stdout --
	multinode-528433
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-528433-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-528433-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 18:33:23.239005  408437 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:33:23.239103  408437 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:33:23.239110  408437 out.go:358] Setting ErrFile to fd 2...
	I0819 18:33:23.239115  408437 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:33:23.239316  408437 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-372744/.minikube/bin
	I0819 18:33:23.239485  408437 out.go:352] Setting JSON to false
	I0819 18:33:23.239515  408437 mustload.go:65] Loading cluster: multinode-528433
	I0819 18:33:23.239625  408437 notify.go:220] Checking for updates...
	I0819 18:33:23.239926  408437 config.go:182] Loaded profile config "multinode-528433": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:33:23.239943  408437 status.go:255] checking status of multinode-528433 ...
	I0819 18:33:23.240314  408437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:33:23.240369  408437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:33:23.259576  408437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37581
	I0819 18:33:23.260052  408437 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:33:23.260670  408437 main.go:141] libmachine: Using API Version  1
	I0819 18:33:23.260704  408437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:33:23.261103  408437 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:33:23.261330  408437 main.go:141] libmachine: (multinode-528433) Calling .GetState
	I0819 18:33:23.262998  408437 status.go:330] multinode-528433 host status = "Running" (err=<nil>)
	I0819 18:33:23.263018  408437 host.go:66] Checking if "multinode-528433" exists ...
	I0819 18:33:23.263349  408437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:33:23.263393  408437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:33:23.278971  408437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37869
	I0819 18:33:23.279481  408437 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:33:23.280001  408437 main.go:141] libmachine: Using API Version  1
	I0819 18:33:23.280025  408437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:33:23.280322  408437 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:33:23.280478  408437 main.go:141] libmachine: (multinode-528433) Calling .GetIP
	I0819 18:33:23.283069  408437 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:33:23.283541  408437 main.go:141] libmachine: (multinode-528433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:95:69", ip: ""} in network mk-multinode-528433: {Iface:virbr1 ExpiryTime:2024-08-19 19:30:36 +0000 UTC Type:0 Mac:52:54:00:78:95:69 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-528433 Clientid:01:52:54:00:78:95:69}
	I0819 18:33:23.283572  408437 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined IP address 192.168.39.168 and MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:33:23.283745  408437 host.go:66] Checking if "multinode-528433" exists ...
	I0819 18:33:23.284096  408437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:33:23.284152  408437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:33:23.300049  408437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44591
	I0819 18:33:23.300517  408437 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:33:23.301035  408437 main.go:141] libmachine: Using API Version  1
	I0819 18:33:23.301056  408437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:33:23.301377  408437 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:33:23.301571  408437 main.go:141] libmachine: (multinode-528433) Calling .DriverName
	I0819 18:33:23.301744  408437 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:33:23.301786  408437 main.go:141] libmachine: (multinode-528433) Calling .GetSSHHostname
	I0819 18:33:23.304394  408437 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:33:23.304809  408437 main.go:141] libmachine: (multinode-528433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:95:69", ip: ""} in network mk-multinode-528433: {Iface:virbr1 ExpiryTime:2024-08-19 19:30:36 +0000 UTC Type:0 Mac:52:54:00:78:95:69 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-528433 Clientid:01:52:54:00:78:95:69}
	I0819 18:33:23.304848  408437 main.go:141] libmachine: (multinode-528433) DBG | domain multinode-528433 has defined IP address 192.168.39.168 and MAC address 52:54:00:78:95:69 in network mk-multinode-528433
	I0819 18:33:23.304965  408437 main.go:141] libmachine: (multinode-528433) Calling .GetSSHPort
	I0819 18:33:23.305124  408437 main.go:141] libmachine: (multinode-528433) Calling .GetSSHKeyPath
	I0819 18:33:23.305288  408437 main.go:141] libmachine: (multinode-528433) Calling .GetSSHUsername
	I0819 18:33:23.305407  408437 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/multinode-528433/id_rsa Username:docker}
	I0819 18:33:23.382763  408437 ssh_runner.go:195] Run: systemctl --version
	I0819 18:33:23.388919  408437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:33:23.404464  408437 kubeconfig.go:125] found "multinode-528433" server: "https://192.168.39.168:8443"
	I0819 18:33:23.404498  408437 api_server.go:166] Checking apiserver status ...
	I0819 18:33:23.404538  408437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:33:23.417608  408437 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1061/cgroup
	W0819 18:33:23.426818  408437 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1061/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 18:33:23.426874  408437 ssh_runner.go:195] Run: ls
	I0819 18:33:23.430917  408437 api_server.go:253] Checking apiserver healthz at https://192.168.39.168:8443/healthz ...
	I0819 18:33:23.434969  408437 api_server.go:279] https://192.168.39.168:8443/healthz returned 200:
	ok
	I0819 18:33:23.434995  408437 status.go:422] multinode-528433 apiserver status = Running (err=<nil>)
	I0819 18:33:23.435006  408437 status.go:257] multinode-528433 status: &{Name:multinode-528433 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 18:33:23.435026  408437 status.go:255] checking status of multinode-528433-m02 ...
	I0819 18:33:23.435358  408437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:33:23.435399  408437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:33:23.451170  408437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33087
	I0819 18:33:23.451651  408437 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:33:23.452350  408437 main.go:141] libmachine: Using API Version  1
	I0819 18:33:23.452378  408437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:33:23.452719  408437 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:33:23.452921  408437 main.go:141] libmachine: (multinode-528433-m02) Calling .GetState
	I0819 18:33:23.454576  408437 status.go:330] multinode-528433-m02 host status = "Running" (err=<nil>)
	I0819 18:33:23.454593  408437 host.go:66] Checking if "multinode-528433-m02" exists ...
	I0819 18:33:23.454904  408437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:33:23.454947  408437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:33:23.470412  408437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38907
	I0819 18:33:23.470898  408437 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:33:23.471397  408437 main.go:141] libmachine: Using API Version  1
	I0819 18:33:23.471417  408437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:33:23.471728  408437 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:33:23.471925  408437 main.go:141] libmachine: (multinode-528433-m02) Calling .GetIP
	I0819 18:33:23.474682  408437 main.go:141] libmachine: (multinode-528433-m02) DBG | domain multinode-528433-m02 has defined MAC address 52:54:00:d0:5e:69 in network mk-multinode-528433
	I0819 18:33:23.475067  408437 main.go:141] libmachine: (multinode-528433-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5e:69", ip: ""} in network mk-multinode-528433: {Iface:virbr1 ExpiryTime:2024-08-19 19:31:43 +0000 UTC Type:0 Mac:52:54:00:d0:5e:69 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-528433-m02 Clientid:01:52:54:00:d0:5e:69}
	I0819 18:33:23.475113  408437 main.go:141] libmachine: (multinode-528433-m02) DBG | domain multinode-528433-m02 has defined IP address 192.168.39.107 and MAC address 52:54:00:d0:5e:69 in network mk-multinode-528433
	I0819 18:33:23.475216  408437 host.go:66] Checking if "multinode-528433-m02" exists ...
	I0819 18:33:23.475529  408437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:33:23.475574  408437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:33:23.490984  408437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44089
	I0819 18:33:23.491370  408437 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:33:23.491856  408437 main.go:141] libmachine: Using API Version  1
	I0819 18:33:23.491881  408437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:33:23.492173  408437 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:33:23.492367  408437 main.go:141] libmachine: (multinode-528433-m02) Calling .DriverName
	I0819 18:33:23.492596  408437 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:33:23.492625  408437 main.go:141] libmachine: (multinode-528433-m02) Calling .GetSSHHostname
	I0819 18:33:23.495103  408437 main.go:141] libmachine: (multinode-528433-m02) DBG | domain multinode-528433-m02 has defined MAC address 52:54:00:d0:5e:69 in network mk-multinode-528433
	I0819 18:33:23.495507  408437 main.go:141] libmachine: (multinode-528433-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5e:69", ip: ""} in network mk-multinode-528433: {Iface:virbr1 ExpiryTime:2024-08-19 19:31:43 +0000 UTC Type:0 Mac:52:54:00:d0:5e:69 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-528433-m02 Clientid:01:52:54:00:d0:5e:69}
	I0819 18:33:23.495533  408437 main.go:141] libmachine: (multinode-528433-m02) DBG | domain multinode-528433-m02 has defined IP address 192.168.39.107 and MAC address 52:54:00:d0:5e:69 in network mk-multinode-528433
	I0819 18:33:23.495640  408437 main.go:141] libmachine: (multinode-528433-m02) Calling .GetSSHPort
	I0819 18:33:23.495815  408437 main.go:141] libmachine: (multinode-528433-m02) Calling .GetSSHKeyPath
	I0819 18:33:23.495944  408437 main.go:141] libmachine: (multinode-528433-m02) Calling .GetSSHUsername
	I0819 18:33:23.496056  408437 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19468-372744/.minikube/machines/multinode-528433-m02/id_rsa Username:docker}
	I0819 18:33:23.576532  408437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:33:23.590447  408437 status.go:257] multinode-528433-m02 status: &{Name:multinode-528433-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0819 18:33:23.590495  408437 status.go:255] checking status of multinode-528433-m03 ...
	I0819 18:33:23.590856  408437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:33:23.590903  408437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:33:23.607255  408437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33259
	I0819 18:33:23.607651  408437 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:33:23.608210  408437 main.go:141] libmachine: Using API Version  1
	I0819 18:33:23.608241  408437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:33:23.608545  408437 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:33:23.608757  408437 main.go:141] libmachine: (multinode-528433-m03) Calling .GetState
	I0819 18:33:23.610317  408437 status.go:330] multinode-528433-m03 host status = "Stopped" (err=<nil>)
	I0819 18:33:23.610332  408437 status.go:343] host is not running, skipping remaining checks
	I0819 18:33:23.610338  408437 status.go:257] multinode-528433-m03 status: &{Name:multinode-528433-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.29s)
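Stopping a single node leaves the rest of the cluster running, and "minikube status" signals the partial outage through its exit code (7 in this run) while still printing the per-node summary shown above. A minimal check, using the profile and node names from this test:

    out/minikube-linux-amd64 -p multinode-528433 node stop m03
    out/minikube-linux-amd64 -p multinode-528433 status
    echo "status exit code: $?"    # 7 here once m03 reports Host/Kubelet as Stopped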

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (39.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-528433 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-528433 node start m03 -v=7 --alsologtostderr: (39.035021002s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-528433 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.66s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-528433 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-528433 node delete m03: (1.811064506s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-528433 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.34s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (180.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-528433 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0819 18:42:10.115908  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-528433 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m0.118283094s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-528433 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (180.64s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (41.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-528433
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-528433-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-528433-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (64.290584ms)

                                                
                                                
-- stdout --
	* [multinode-528433-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19468
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19468-372744/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-372744/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-528433-m02' is duplicated with machine name 'multinode-528433-m02' in profile 'multinode-528433'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-528433-m03 --driver=kvm2  --container-runtime=crio
E0819 18:45:24.365288  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/functional-499773/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-528433-m03 --driver=kvm2  --container-runtime=crio: (40.008522264s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-528433
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-528433: exit status 80 (209.525889ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-528433 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-528433-m03 already exists in multinode-528433-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-528433-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-528433-m03: (1.029037065s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (41.36s)
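
Note: the two non-zero exits above are the point of this test: starting a profile whose name collides with an existing machine name fails with exit status 14 (MK_USAGE), and "node add" for a node that already exists fails with exit status 80 (GUEST_NODE_ADD). The Go sketch below is illustrative only and is not the real test helper; the binary path and arguments are placeholders copied from the log. It shows one common way to shell out and read back an exit code for this kind of assertion.

    // exitcode_sketch.go - illustrative only; not the actual multinode_test.go helpers.
    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    // runAndExitCode runs the command and returns its exit code:
    // 0 on success, the process exit status on failure, -1 on other errors.
    func runAndExitCode(name string, args ...string) int {
    	cmd := exec.Command(name, args...)
    	if err := cmd.Run(); err != nil {
    		var ee *exec.ExitError
    		if errors.As(err, &ee) {
    			return ee.ExitCode()
    		}
    		return -1 // e.g. binary not found
    	}
    	return 0
    }

    func main() {
    	// Placeholder invocation mirroring the duplicated-profile check above.
    	code := runAndExitCode("out/minikube-linux-amd64",
    		"start", "-p", "multinode-528433-m02", "--driver=kvm2", "--container-runtime=crio")
    	fmt.Println("exit code:", code) // the test above expects 14 (MK_USAGE)
    }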

                                                
                                    
x
+
TestScheduledStopUnix (115.94s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-816816 --memory=2048 --driver=kvm2  --container-runtime=crio
E0819 18:50:24.365676  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/functional-499773/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-816816 --memory=2048 --driver=kvm2  --container-runtime=crio: (44.313952365s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-816816 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-816816 -n scheduled-stop-816816
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-816816 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-816816 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-816816 -n scheduled-stop-816816
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-816816
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-816816 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0819 18:52:10.115827  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-816816
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-816816: exit status 7 (66.819344ms)

                                                
                                                
-- stdout --
	scheduled-stop-816816
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-816816 -n scheduled-stop-816816
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-816816 -n scheduled-stop-816816: exit status 7 (65.496854ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-816816" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-816816
--- PASS: TestScheduledStopUnix (115.94s)
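
Note: the sequence above schedules a stop five minutes out, reschedules it to 15s, cancels it, schedules again, and then observes the profile in the Stopped state; once everything is down, "minikube status" exits 7, which the test records as "may be ok". The Go sketch below is illustrative only: the binary path and profile name are copied from the log, while the polling interval and timeout are arbitrary choices, not values from the test.

    // waitstopped_sketch.go - illustrative only; not the actual scheduled_stop_test helpers.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // hostState returns the value of {{.Host}} from minikube status, e.g. "Running" or "Stopped".
    // The command exits 7 once the profile is stopped, so the error is deliberately ignored here.
    func hostState(profile string) string {
    	out, _ := exec.Command("out/minikube-linux-amd64",
    		"status", "--format={{.Host}}", "-p", profile).Output()
    	return strings.TrimSpace(string(out))
    }

    func main() {
    	deadline := time.Now().Add(2 * time.Minute) // arbitrary timeout for the sketch
    	for time.Now().Before(deadline) {
    		if hostState("scheduled-stop-816816") == "Stopped" {
    			fmt.Println("host stopped")
    			return
    		}
    		time.Sleep(5 * time.Second)
    	}
    	fmt.Println("timed out waiting for scheduled stop")
    }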

                                                
                                    
x
+
TestRunningBinaryUpgrade (159.49s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3921353138 start -p running-upgrade-112851 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3921353138 start -p running-upgrade-112851 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m14.048511381s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-112851 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0819 18:57:10.115021  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-112851 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m20.787518029s)
helpers_test.go:175: Cleaning up "running-upgrade-112851" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-112851
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-112851: (2.074351737s)
--- PASS: TestRunningBinaryUpgrade (159.49s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-282030 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-282030 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (82.984522ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-282030] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19468
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19468-372744/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-372744/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (96.03s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-282030 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-282030 --driver=kvm2  --container-runtime=crio: (1m35.788749673s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-282030 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (96.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (2.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-571803 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-571803 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (105.447477ms)

                                                
                                                
-- stdout --
	* [false-571803] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19468
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19468-372744/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-372744/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 18:52:14.247935  416157 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:52:14.248077  416157 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:52:14.248086  416157 out.go:358] Setting ErrFile to fd 2...
	I0819 18:52:14.248091  416157 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:52:14.248239  416157 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-372744/.minikube/bin
	I0819 18:52:14.248797  416157 out.go:352] Setting JSON to false
	I0819 18:52:14.249767  416157 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":9277,"bootTime":1724084257,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 18:52:14.249829  416157 start.go:139] virtualization: kvm guest
	I0819 18:52:14.252290  416157 out.go:177] * [false-571803] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 18:52:14.253972  416157 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 18:52:14.253970  416157 notify.go:220] Checking for updates...
	I0819 18:52:14.255721  416157 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 18:52:14.257276  416157 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19468-372744/kubeconfig
	I0819 18:52:14.258651  416157 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-372744/.minikube
	I0819 18:52:14.259848  416157 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 18:52:14.261179  416157 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 18:52:14.263074  416157 config.go:182] Loaded profile config "NoKubernetes-282030": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:52:14.263227  416157 config.go:182] Loaded profile config "force-systemd-env-376529": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:52:14.263361  416157 config.go:182] Loaded profile config "offline-crio-235067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:52:14.263484  416157 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 18:52:14.301522  416157 out.go:177] * Using the kvm2 driver based on user configuration
	I0819 18:52:14.302977  416157 start.go:297] selected driver: kvm2
	I0819 18:52:14.302997  416157 start.go:901] validating driver "kvm2" against <nil>
	I0819 18:52:14.303014  416157 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 18:52:14.305158  416157 out.go:201] 
	W0819 18:52:14.306426  416157 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0819 18:52:14.307743  416157 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-571803 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-571803

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-571803

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-571803

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-571803

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-571803

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-571803

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-571803

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-571803

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-571803

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-571803

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571803"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571803"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571803"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-571803

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571803"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571803"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-571803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-571803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-571803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-571803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-571803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-571803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-571803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-571803" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571803"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571803"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571803"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571803"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571803"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-571803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-571803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-571803" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571803"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571803"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571803"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571803"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571803"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-571803

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571803"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571803"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571803"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571803"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571803"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571803"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571803"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571803"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571803"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571803"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571803"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571803"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571803"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571803"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571803"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571803"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571803"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-571803"

                                                
                                                
----------------------- debugLogs end: false-571803 [took: 2.704584952s] --------------------------------
helpers_test.go:175: Cleaning up "false-571803" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-571803
--- PASS: TestNetworkPlugins/group/false (2.95s)
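
Note: this group passes because the start command is expected to fail: with --container-runtime=crio, minikube rejects --cni=false ("The "crio" container runtime requires CNI") and exits 14, and the debugLogs dump above is diagnostic output collected for a cluster that was never created, hence the repeated "context was not found" and "Profile ... not found" lines. The Go sketch below is illustrative only and is not the real net_test.go helper; the binary path and arguments are placeholders taken from the log.

    // expectfail_sketch.go - illustrative only; asserts an expected CLI failure.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	cmd := exec.Command("out/minikube-linux-amd64",
    		"start", "-p", "false-571803", "--cni=false",
    		"--driver=kvm2", "--container-runtime=crio")
    	out, err := cmd.CombinedOutput()
    	if err == nil {
    		fmt.Println("unexpected success: crio without CNI should be rejected")
    		return
    	}
    	if strings.Contains(string(out), `The "crio" container runtime requires CNI`) {
    		fmt.Println("got the expected MK_USAGE rejection")
    	} else {
    		fmt.Printf("failed for an unexpected reason:\n%s\n", out)
    	}
    }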

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (69.64s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-282030 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-282030 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m8.56059903s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-282030 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-282030 status -o json: exit status 2 (235.805402ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-282030","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-282030
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (69.64s)
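
Note: restarting the existing NoKubernetes-282030 profile with --no-kubernetes leaves the VM up but stops the Kubernetes components, which matches the JSON above (Host "Running", Kubelet and APIServer "Stopped") and explains the exit status 2 from the plain status call. The Go sketch below decodes that JSON; the struct covers only the fields visible in the log and is not minikube's own status type.

    // status_json_sketch.go - illustrative only; struct fields limited to what the log shows.
    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    type profileStatus struct {
    	Name       string
    	Host       string
    	Kubelet    string
    	APIServer  string
    	Kubeconfig string
    	Worker     bool
    }

    // Copied from the status output above.
    const raw = `{"Name":"NoKubernetes-282030","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`

    func main() {
    	var st profileStatus
    	if err := json.Unmarshal([]byte(raw), &st); err != nil {
    		panic(err)
    	}
    	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
    	// A --no-kubernetes profile is "running" only at the VM level.
    	if st.Host == "Running" && st.Kubelet == "Stopped" && st.APIServer == "Stopped" {
    		fmt.Println("VM up, Kubernetes components stopped (as expected for --no-kubernetes)")
    	}
    }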

                                                
                                    
x
+
TestNoKubernetes/serial/Start (46.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-282030 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0819 18:55:07.436131  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/functional-499773/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:55:24.364971  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/functional-499773/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-282030 --no-kubernetes --driver=kvm2  --container-runtime=crio: (46.186109963s)
--- PASS: TestNoKubernetes/serial/Start (46.19s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-282030 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-282030 "sudo systemctl is-active --quiet service kubelet": exit status 1 (200.048254ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.96s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.96s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-282030
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-282030: (1.283736687s)
--- PASS: TestNoKubernetes/serial/Stop (1.28s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (21.92s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-282030 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-282030 --driver=kvm2  --container-runtime=crio: (21.916661551s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (21.92s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-282030 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-282030 "sudo systemctl is-active --quiet service kubelet": exit status 1 (207.608787ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.63s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.63s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (120.16s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2092551421 start -p stopped-upgrade-163649 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2092551421 start -p stopped-upgrade-163649 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m11.647226159s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2092551421 -p stopped-upgrade-163649 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2092551421 -p stopped-upgrade-163649 stop: (2.141787425s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-163649 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-163649 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (46.372882336s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (120.16s)

                                                
                                    
x
+
TestPause/serial/Start (89.09s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-821231 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-821231 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m29.088053759s)
--- PASS: TestPause/serial/Start (89.09s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.91s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-163649
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.91s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (59.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-571803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-571803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (59.930807468s)
--- PASS: TestNetworkPlugins/group/auto/Start (59.93s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (87.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-571803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-571803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m27.847931806s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (87.85s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-571803 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (10.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-571803 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-pv6gq" [9af57953-4329-4d7e-9b54-8c91a0ee5773] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-pv6gq" [9af57953-4329-4d7e-9b54-8c91a0ee5773] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004876471s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.44s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (48.58s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-821231 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-821231 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (48.554225564s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (48.58s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-571803 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-571803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-571803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (86.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-571803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-571803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m26.410386258s)
--- PASS: TestNetworkPlugins/group/calico/Start (86.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-f5mn5" [eea5d584-2c31-4ed1-89f2-f06b9a37d94b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005661607s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-571803 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-571803 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-c76d8" [f63996da-a825-4bb1-883e-23a527fe2f10] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-c76d8" [f63996da-a825-4bb1-883e-23a527fe2f10] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.00364341s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-571803 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-571803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-571803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                    
x
+
TestPause/serial/Pause (0.8s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-821231 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.80s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.25s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-821231 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-821231 --output=json --layout=cluster: exit status 2 (245.750308ms)

                                                
                                                
-- stdout --
	{"Name":"pause-821231","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-821231","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.25s)
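
Note: with --output=json --layout=cluster, minikube status reports HTTP-like status codes per component (418 "Paused", 200 "OK", 405 "Stopped" in the output above), and the paused state is also why the status command itself exits 2 while the test still passes. The Go sketch below decodes an abbreviated copy of that JSON; only the fields shown in the log are modeled, so this is not minikube's real status schema.

    // cluster_status_sketch.go - illustrative only; abbreviated sample, partial schema.
    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    type component struct {
    	Name       string
    	StatusCode int
    	StatusName string
    }

    type clusterStatus struct {
    	Name       string
    	StatusCode int
    	StatusName string
    	Nodes      []struct {
    		Name       string
    		StatusName string
    		Components map[string]component
    	}
    }

    // Abbreviated from the status output above.
    const raw = `{"Name":"pause-821231","StatusCode":418,"StatusName":"Paused","Nodes":[{"Name":"pause-821231","StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`

    func main() {
    	var cs clusterStatus
    	if err := json.Unmarshal([]byte(raw), &cs); err != nil {
    		panic(err)
    	}
    	fmt.Printf("cluster %s: %d %s\n", cs.Name, cs.StatusCode, cs.StatusName) // 418 Paused
    	for _, n := range cs.Nodes {
    		for _, c := range n.Components {
    			fmt.Printf("  %s/%s: %d %s\n", n.Name, c.Name, c.StatusCode, c.StatusName)
    		}
    	}
    }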

                                                
                                    
x
+
TestPause/serial/Unpause (0.74s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-821231 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.74s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.87s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-821231 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.87s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (0.86s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-821231 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.86s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.56s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.56s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (81.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-571803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-571803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m21.721144813s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (81.72s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (97.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-571803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E0819 19:00:24.365660  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/functional-499773/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-571803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m37.378840304s)
--- PASS: TestNetworkPlugins/group/flannel/Start (97.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (72.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-571803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-571803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m12.472204121s)
--- PASS: TestNetworkPlugins/group/bridge/Start (72.47s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-qfb7x" [055a7898-9e40-4ef9-b2a8-fc04f14c5500] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.008260154s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-571803 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (14.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-571803 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-r8rt4" [15e08e69-b09d-4563-b886-9719541bec7c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-r8rt4" [15e08e69-b09d-4563-b886-9719541bec7c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 14.00443121s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (14.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-571803 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-571803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-571803 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-571803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-571803 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-7nj6r" [10e1e267-1266-4381-9653-7c1a8d62821b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-7nj6r" [10e1e267-1266-4381-9653-7c1a8d62821b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.023231322s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-571803 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-571803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-571803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (93.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-571803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-571803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m33.569230645s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (93.57s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-8kcdd" [a1433e58-5254-46ad-97ae-8a13ee161898] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004554558s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-571803 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-571803 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6tstp" [82fae08a-0483-49c7-b67d-9b0deff2958f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6tstp" [82fae08a-0483-49c7-b67d-9b0deff2958f] Running
E0819 19:02:10.114678  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004654829s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-571803 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (9.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-571803 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-bpdkz" [78651fb7-550c-40fe-99a9-c18f91a4a9e7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-bpdkz" [78651fb7-550c-40fe-99a9-c18f91a4a9e7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004285954s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-571803 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-571803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-571803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (26.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-571803 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-571803 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.208878005s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context bridge-571803 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Done: kubectl --context bridge-571803 exec deployment/netcat -- nslookup kubernetes.default: (10.176365201s)
--- PASS: TestNetworkPlugins/group/bridge/DNS (26.38s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (115.63s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-278232 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-278232 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (1m55.626759201s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (115.63s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-571803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-571803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (106.4s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-024748 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-024748 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (1m46.401721048s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (106.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-571803 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-571803 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-c48lf" [c145a43d-3571-4b05-9bd8-da027125c0ff] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-c48lf" [c145a43d-3571-4b05-9bd8-da027125c0ff] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.005006366s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-571803 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-571803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-571803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (54.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-982795 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0819 19:04:14.028807  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/auto-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:04:14.035261  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/auto-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:04:14.047498  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/auto-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:04:14.068903  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/auto-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:04:14.110411  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/auto-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:04:14.192049  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/auto-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:04:14.353487  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/auto-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:04:14.674786  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/auto-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:04:15.316132  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/auto-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:04:16.597722  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/auto-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:04:19.159163  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/auto-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:04:24.280861  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/auto-571803/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-982795 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (54.024355277s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (54.02s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-278232 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1cfd7b93-e926-4ffd-93f3-8e0f9d0d382c] Pending
helpers_test.go:344: "busybox" [1cfd7b93-e926-4ffd-93f3-8e0f9d0d382c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1cfd7b93-e926-4ffd-93f3-8e0f9d0d382c] Running
E0819 19:04:34.522706  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/auto-571803/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004502769s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-278232 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.27s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.99s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-278232 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-278232 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.99s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-982795 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6ccf97ee-f131-4006-a12b-e8534e1d0b3e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0819 19:04:44.664053  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kindnet-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:04:44.670485  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kindnet-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:04:44.681840  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kindnet-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:04:44.703275  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kindnet-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:04:44.744773  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kindnet-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:04:44.826265  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kindnet-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:04:44.987842  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kindnet-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:04:45.309635  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kindnet-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:04:45.951691  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kindnet-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:04:47.234070  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kindnet-571803/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [6ccf97ee-f131-4006-a12b-e8534e1d0b3e] Running
E0819 19:04:49.796384  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kindnet-571803/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004368463s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-982795 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.28s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-024748 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5ea66261-4ba9-4b4c-9d2f-4ad3490d3ed5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5ea66261-4ba9-4b4c-9d2f-4ad3490d3ed5] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004465961s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-024748 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.29s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-982795 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0819 19:04:54.917804  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kindnet-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:04:55.004897  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/auto-571803/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-982795 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.98s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-024748 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-024748 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.98s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (649.05s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-278232 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0819 19:07:13.646014  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/bridge-571803/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-278232 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (10m48.796008059s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-278232 -n no-preload-278232
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (649.05s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (598.65s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-982795 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0819 19:07:28.525593  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/kindnet-571803/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-982795 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (9m58.401036686s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-982795 -n default-k8s-diff-port-982795
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (598.65s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (601.35s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-024748 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0819 19:07:29.009802  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/bridge-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:07:29.746121  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/calico-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:07:38.551604  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/flannel-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:07:49.491197  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/bridge-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:07:51.105378  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/custom-flannel-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:08:19.513009  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/flannel-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:08:21.654685  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/enable-default-cni-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:08:21.661040  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/enable-default-cni-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:08:21.672355  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/enable-default-cni-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:08:21.693782  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/enable-default-cni-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:08:21.735266  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/enable-default-cni-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:08:21.817380  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/enable-default-cni-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:08:21.978970  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/enable-default-cni-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:08:22.300735  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/enable-default-cni-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:08:22.942915  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/enable-default-cni-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:08:24.224907  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/enable-default-cni-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:08:26.787210  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/enable-default-cni-571803/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-024748 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (10m1.091580698s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-024748 -n embed-certs-024748
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (601.35s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (1.36s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-104669 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-104669 --alsologtostderr -v=3: (1.363641134s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.36s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-104669 -n old-k8s-version-104669
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-104669 -n old-k8s-version-104669: exit status 7 (66.413929ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-104669 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0819 19:08:30.453573  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/bridge-571803/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (51.39s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-125279 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0819 19:32:08.513918  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/bridge-571803/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:32:10.114763  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/addons-347256/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-125279 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (51.390980136s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (51.39s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.11s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-125279 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-125279 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.113464355s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.11s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (11.32s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-125279 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-125279 --alsologtostderr -v=3: (11.321018516s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.32s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-125279 -n newest-cni-125279
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-125279 -n newest-cni-125279: exit status 7 (66.009338ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-125279 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (36.61s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-125279 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0819 19:33:21.654171  380009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-372744/.minikube/profiles/enable-default-cni-571803/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-125279 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (36.360686084s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-125279 -n newest-cni-125279
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (36.61s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-125279 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.37s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-125279 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-125279 -n newest-cni-125279
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-125279 -n newest-cni-125279: exit status 2 (244.361494ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-125279 -n newest-cni-125279
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-125279 -n newest-cni-125279: exit status 2 (239.578517ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-125279 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-125279 -n newest-cni-125279
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-125279 -n newest-cni-125279
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.37s)

                                                
                                    

Test skip (37/318)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.0/cached-images 0
15 TestDownloadOnly/v1.31.0/binaries 0
16 TestDownloadOnly/v1.31.0/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0
38 TestAddons/parallel/Olm 0
48 TestDockerFlags 0
51 TestDockerEnvContainerd 0
53 TestHyperKitDriverInstallOrUpdate 0
54 TestHyperkitDriverSkipUpgrade 0
105 TestFunctional/parallel/DockerEnv 0
106 TestFunctional/parallel/PodmanEnv 0
125 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
126 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
127 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
128 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
132 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
154 TestGvisorAddon 0
176 TestImageBuild 0
203 TestKicCustomNetwork 0
204 TestKicExistingNetwork 0
205 TestKicCustomSubnet 0
206 TestKicStaticIP 0
238 TestChangeNoneUser 0
241 TestScheduledStopWindows 0
243 TestSkaffold 0
245 TestInsufficientStorage 0
249 TestMissingContainerUpgrade 0
253 TestNetworkPlugins/group/kubenet 2.94
263 TestNetworkPlugins/group/cilium 3.25
278 TestStartStop/group/disable-driver-mounts 0.18
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (2.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-571803 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-571803

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-571803

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-571803

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-571803

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-571803

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-571803

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-571803

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-571803

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-571803

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-571803

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571803"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571803"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571803"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-571803

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571803"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571803"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-571803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-571803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-571803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-571803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-571803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-571803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-571803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-571803" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571803"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571803"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571803"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571803"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571803"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-571803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-571803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-571803" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571803"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571803"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571803"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571803"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571803"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-571803

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571803"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571803"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571803"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571803"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571803"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571803"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571803"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571803"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571803"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571803"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571803"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571803"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571803"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571803"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571803"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571803"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571803"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-571803"

                                                
                                                
----------------------- debugLogs end: kubenet-571803 [took: 2.796344594s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-571803" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-571803
--- SKIP: TestNetworkPlugins/group/kubenet (2.94s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-571803 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-571803

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-571803

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-571803

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-571803

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-571803

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-571803

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-571803

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-571803

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-571803

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-571803

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571803"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571803"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571803"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-571803

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571803"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571803"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-571803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-571803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-571803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-571803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-571803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-571803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-571803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-571803" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571803"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571803"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571803"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571803"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571803"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-571803

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-571803

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-571803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-571803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-571803

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-571803

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-571803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-571803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-571803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-571803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-571803" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571803"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571803"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571803"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571803"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571803"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-571803

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571803"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571803"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571803"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571803"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571803"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571803"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571803"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571803"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571803"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571803"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571803"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571803"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571803"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571803"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571803"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571803"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571803"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-571803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-571803"

                                                
                                                
----------------------- debugLogs end: cilium-571803 [took: 3.107803622s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-571803" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-571803
--- SKIP: TestNetworkPlugins/group/cilium (3.25s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-737091" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-737091
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                    